| content (stringlengths 73-1.12M) | license (stringclasses 3 values) | path (stringlengths 9-197) | repo_name (stringlengths 7-106) | chain_length (int64 1-144) |
---|---|---|---|---|
<jupyter_start><jupyter_text>___
___
Note:
1) First make a copy of the project in your Drive, then start on the solution.
2) Don't run a solution cell directly; first add a cell above it and attempt it yourself, so that you don't miss out on working through the solution.
# Logistic Regression Project
In this project we will be working with a synthetic advertising data set that indicates whether or not a particular internet user clicked on an advertisement. We will try to create a model that predicts whether or not a user will click on an ad based on that user's features.
This data set contains the following features:
* 'Daily Time Spent on Site': consumer time on site in minutes
* 'Age': customer age in years
* 'Area Income': Avg. Income of geographical area of consumer
* 'Daily Internet Usage': Avg. minutes a day consumer is on the internet
* 'Ad Topic Line': Headline of the advertisement
* 'City': City of consumer
* 'Male': Whether or not consumer was male
* 'Country': Country of consumer
* 'Timestamp': Time at which consumer clicked on Ad or closed window
* 'Clicked on Ad': 0 or 1, indicating whether the consumer clicked on the ad
## Import Libraries
**Import a few libraries you think you'll need (Or just import them as you go along!)**<jupyter_code>import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns<jupyter_output><empty_output><jupyter_text>## Get the Data
**Read in the advertising.csv file and set it to a data frame called ad_data.**<jupyter_code>df = pd.read_csv('advertising.csv')<jupyter_output><empty_output><jupyter_text>**Check the head of ad_data**<jupyter_code>df.head()
df.tail()
df['Ad Topic Line']
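# Each ad headline below is (very likely) unique to its row, so one-hot encoding it, as done next,
# mostly yields a huge sparse matrix and is not useful as a model feature.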
ad = pd.get_dummies(df['Ad Topic Line'])
ad<jupyter_output><empty_output><jupyter_text>** Use info and describe() on ad_data**<jupyter_code>df.info()
df.describe()
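# Optional sanity checks (not part of the original assignment): missing values and class balance
print(df.isnull().sum())
print(df['Clicked on Ad'].value_counts())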
<jupyter_output><empty_output><jupyter_text>## Exploratory Data Analysis
Let's use seaborn to explore the data!
Try recreating the plots shown below!
** Create a histogram of the Age**<jupyter_code>sns.histplot(df['Age'] , bins= 30)
<jupyter_output><empty_output><jupyter_text>**Create a jointplot showing Area Income versus Age.**<jupyter_code>sns.jointplot(x = 'Age' , y = 'Area Income' , data = df)
<jupyter_output><empty_output><jupyter_text>** Create a jointplot of 'Daily Time Spent on Site' vs. 'Daily Internet Usage'**<jupyter_code>sns.jointplot(x = 'Daily Time Spent on Site' , y = 'Daily Internet Usage' , data = df , color = 'Green')
<jupyter_output><empty_output><jupyter_text>** Finally, create a pairplot with the hue defined by the 'Clicked on Ad' column feature.**<jupyter_code>sns.pairplot(data = df ,palette = 'BuGn_r' , hue = 'Clicked on Ad')
<jupyter_output><empty_output><jupyter_text># Logistic Regression
Now it's time to do a train test split, and train our model!
You'll have the freedom here to choose columns that you want to train on!** Split the data into training set and testing set using train_test_split**<jupyter_code>sns.heatmap(df.corr() , annot = True)
from sklearn.model_selection import train_test_split
df.info()
x1 = df[['Age','Area Income','Daily Time Spent on Site','Daily Internet Usage','Male']]
y1 = df['Clicked on Ad']
x1_train , x1_test , y1_train , y1_test = train_test_split(x1,y1,test_size = 0.3)<jupyter_output><empty_output><jupyter_text>** Train and fit a logistic regression model on the training set.**<jupyter_code>from sklearn.linear_model import LogisticRegression
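# Note: if LogisticRegression warns about non-convergence, raising max_iter (default 100)
# or standardizing the features usually fixes it.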
log_model = LogisticRegression()
log_model.fit(x1_train , y1_train)<jupyter_output><empty_output><jupyter_text>## Predictions and Evaluations
** Now predict values for the testing data.**<jupyter_code>predict1 = log_model.predict(x1_test)<jupyter_output><empty_output><jupyter_text>** Create a classification report for the model.**<jupyter_code>from sklearn.metrics import confusion_matrix , classification_report
confusion_matrix( y1_test , predict1)
print(classification_report(y1_test, predict1))
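# A couple of extra metrics one could report here (illustrative additions, not part of the assignment):
from sklearn.metrics import accuracy_score, roc_auc_score
print('Accuracy:', accuracy_score(y1_test, predict1))
print('ROC AUC :', roc_auc_score(y1_test, log_model.predict_proba(x1_test)[:, 1]))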
<jupyter_output><empty_output>
| no_license | /Logistic_Regression_Assignment.ipynb | young-ai-expert/Data-Science-Projects | 12 |
<jupyter_start><jupyter_text># Vehicle detection and tracking project *(picture by Udacity)*<jupyter_code>import numpy as np
import cv2
import glob
import matplotlib.pyplot as plt
import os
import matplotlib.image as mpimg
%matplotlib qt
%matplotlib inline
vehs = []
for image in os.listdir(os.getcwd() + "/vehicles/GTI_Right"):
vehs.append(os.getcwd() + "/vehicles/GTI_Right/" + image)
for image in os.listdir(os.getcwd() + "/vehicles/GTI_MiddleClose"):
vehs.append(os.getcwd() + "/vehicles/GTI_MiddleClose/" + image)
for image in os.listdir(os.getcwd() + "/vehicles/GTI_Left"):
vehs.append(os.getcwd() + "/vehicles/GTI_Left/" + image)
for image in os.listdir(os.getcwd() + "/vehicles/GTI_Far"):
vehs.append(os.getcwd() + "/vehicles/GTI_Far/" + image)
for image in os.listdir(os.getcwd() + "/vehicles/KITTI_extracted"):
vehs.append(os.getcwd() + "/vehicles/KITTI_extracted/" + image)
non_vehs = []
for image in os.listdir(os.getcwd() + "/non-vehicles/GTI"):
non_vehs.append(os.getcwd() + "/non-vehicles/GTI/" + image)
for image in os.listdir(os.getcwd() + "/non-vehicles/Extras"):
non_vehs.append(os.getcwd() + "/non-vehicles/Extras/" + image)
print(len(vehs))
print(len(non_vehs))
print(non_vehs[0])
testimg1 = mpimg.imread("test_images/test1.jpg")
plt.imshow(testimg1)
plt.show()<jupyter_output><empty_output><jupyter_text>## 1. Functions to extract the features### 1.1 get color features<jupyter_code>def color_hist(img, nbins=32, bins_range=(0, 256)):
# Compute the histogram of the color channels separately
channel1_hist = np.histogram(img[:, :, 0], bins=nbins, range=bins_range)
channel2_hist = np.histogram(img[:, :, 1], bins=nbins, range=bins_range)
channel3_hist = np.histogram(img[:, :, 2], bins=nbins, range=bins_range)
# Concatenate the histograms into a single feature vector
hist_features = np.concatenate((channel1_hist[0], channel2_hist[0], channel3_hist[0]))
# Return the individual histograms, bin_centers and feature vector
return hist_features
def bin_spatial(img, size=(32, 32)):
features = cv2.resize(img, size).ravel()
# Return the feature vector
return features<jupyter_output><empty_output><jupyter_text>### 1.2 get hog features<jupyter_code>from skimage.feature import hog
def get_hog_features(img, orient, pix_per_cell, cell_per_block,
vis=False, feature_vec=True):
if vis == True:
features, hog_image = hog(img, orientations=orient,
pixels_per_cell=(pix_per_cell, pix_per_cell),
cells_per_block=(cell_per_block, cell_per_block),
transform_sqrt=True,
visualise=vis, feature_vector=feature_vec)
return features, hog_image
else:
features = hog(img, orientations=orient,
pixels_per_cell=(pix_per_cell, pix_per_cell),
cells_per_block=(cell_per_block, cell_per_block),
transform_sqrt=True,
visualise=vis, feature_vector=feature_vec)
return features
# example
exp_img = cv2.cvtColor(mpimg.imread(vehs[0]), cv2.COLOR_RGB2GRAY)
feats, hogImg = get_hog_features(exp_img, 8, 8, 2, vis=True, feature_vec=True)
fig1 = plt.figure(figsize = (10,10))
ax1 = fig1.add_subplot(121)
ax1.imshow(mpimg.imread(vehs[0]), interpolation='none')
ax1.set_title('Initial picture')
ax3 = fig1.add_subplot(122)
ax3.imshow(hogImg, interpolation='none', cmap="gray")
ax3.set_title('Image with HOG features')
plt.show()<jupyter_output>C:\Anaconda3\envs\tensorflow-with-gpu-10\lib\site-packages\skimage\feature\_hog.py:119: skimage_deprecation: Default value of `block_norm`==`L1` is deprecated and will be changed to `L2-Hys` in v0.15
'be changed to `L2-Hys` in v0.15', skimage_deprecation)
<jupyter_text>### 1.3 combine and normalize<jupyter_code>def extract_features(imgs, color_space='RGB', spatial_size=(32, 32),
hist_bins=32, orient=9,
pix_per_cell=8, cell_per_block=2, hog_channel=0,
spatial_feat=True, hist_feat=True, hog_feat=True):
# Create a list to append feature vectors to
features = []
# Iterate through the list of images
for file in imgs:
file_features = []
# Read in each one by one
image = mpimg.imread(file)
# Apply color conversion
if color_space != 'RGB':
if color_space == 'HSV':
feature_image = cv2.cvtColor(image, cv2.COLOR_RGB2HSV)
elif color_space == 'LUV':
feature_image = cv2.cvtColor(image, cv2.COLOR_RGB2LUV)
elif color_space == 'HLS':
feature_image = cv2.cvtColor(image, cv2.COLOR_RGB2HLS)
elif color_space == 'YUV':
feature_image = cv2.cvtColor(image, cv2.COLOR_RGB2YUV)
elif color_space == 'YCrCb':
feature_image = cv2.cvtColor(image, cv2.COLOR_RGB2YCrCb)
else:
feature_image = np.copy(image)
if spatial_feat == True:
spatial_features = bin_spatial(feature_image, size=spatial_size)
file_features.append(spatial_features)
if hist_feat == True:
hist_features = color_hist(feature_image, nbins=hist_bins)
file_features.append(hist_features)
if hog_feat == True:
if hog_channel == 'ALL':
hog_features = []
for channel in range(feature_image.shape[2]):
hog_features.append(get_hog_features(feature_image[:, :, channel],
orient, pix_per_cell, cell_per_block,
vis=False, feature_vec=True))
hog_features = np.ravel(hog_features)
else:
hog_features = get_hog_features(feature_image[:, :, hog_channel], orient,
pix_per_cell, cell_per_block, vis=False, feature_vec=True)
# Append the new feature vector to the list of features
file_features.append(hog_features)
features.append(np.concatenate(file_features))
# Return list of feature vectors
return features<jupyter_output><empty_output><jupyter_text>## 2. Search frames and find boxes**The following functions were built to evaluate and test the performance on single images, later on the functions are implemented in a class object with enhanced performance**### 2.1 sliding windows function<jupyter_code>def slide_window(img, x_start_stop=[None, None], y_start_stop=[None, None],
xy_window=(64, 64), xy_overlap=(0.5, 0.5)):
# If x and/or y start/stop positions not defined, set to image size
if x_start_stop[0] == None:
x_start_stop[0] = 0
if x_start_stop[1] == None:
x_start_stop[1] = img.shape[1]
if y_start_stop[0] == None:
y_start_stop[0] = 0
if y_start_stop[1] == None:
y_start_stop[1] = img.shape[0]
# Region to be searched
xspan = x_start_stop[1] - x_start_stop[0]
yspan = y_start_stop[1] - y_start_stop[0]
# Number of pixels per step in x/y
nx_pix_per_step = np.int(xy_window[0] * (1 - xy_overlap[0]))
ny_pix_per_step = np.int(xy_window[1] * (1 - xy_overlap[1]))
# Number of windows in x/y
nx_buffer = np.int(xy_window[0] * (xy_overlap[0]))
ny_buffer = np.int(xy_window[1] * (xy_overlap[1]))
nx_windows = np.int((xspan - nx_buffer) / nx_pix_per_step)
ny_windows = np.int((yspan - ny_buffer) / ny_pix_per_step)
# Initialize a list to append window parameters to
window_list = []
for ys in range(ny_windows):
for xs in range(nx_windows):
# Calculate window position
startx = xs * nx_pix_per_step + x_start_stop[0]
endx = startx + xy_window[0]
starty = ys * ny_pix_per_step + y_start_stop[0]
endy = starty + xy_window[1]
# Append window position to list
window_list.append(((startx, starty), (endx, endy)))
# Return the list of windows
return window_list<jupyter_output><empty_output><jupyter_text>### 2.2 draw boxes function<jupyter_code>def draw_boxes(img, bboxes, color=(0, 0, 255), thick=6):
# Copy the image
imcopy = np.copy(img)
# Iterate through the boxes
for bbox in bboxes:
# Draw a rectangle given bbox coordinates
cv2.rectangle(imcopy, bbox[0], bbox[1], color, thick)
# Return the image copy with boxes drawn
return imcopy<jupyter_output><empty_output><jupyter_text>### 2.3 single image features extraction<jupyter_code>def single_img_features(img, color_space='RGB', spatial_size=(32, 32),
hist_bins=32, orient=9,
pix_per_cell=8, cell_per_block=2, hog_channel=0,
spatial_feat=True, hist_feat=True, hog_feat=True):
# Empty list
img_features = []
# Apply color conversion
if color_space != 'RGB':
if color_space == 'HSV':
feature_image = cv2.cvtColor(img, cv2.COLOR_RGB2HSV)
elif color_space == 'LUV':
feature_image = cv2.cvtColor(img, cv2.COLOR_RGB2LUV)
elif color_space == 'HLS':
feature_image = cv2.cvtColor(img, cv2.COLOR_RGB2HLS)
elif color_space == 'YUV':
feature_image = cv2.cvtColor(img, cv2.COLOR_RGB2YUV)
elif color_space == 'YCrCb':
feature_image = cv2.cvtColor(img, cv2.COLOR_RGB2YCrCb)
else: feature_image = np.copy(img)
# Compute spatial features
if spatial_feat == True:
spatial_features = bin_spatial(feature_image, size=spatial_size)
# Append features to list
img_features.append(spatial_features)
# Compute histogram features
if hist_feat == True:
hist_features = color_hist(feature_image, nbins=hist_bins)
# Append features to list
img_features.append(hist_features)
# Compute HOG features
if hog_feat == True:
if hog_channel == 'ALL':
hog_features = []
for channel in range(feature_image.shape[2]):
hog_features.extend(get_hog_features(feature_image[:,:,channel],
orient, pix_per_cell, cell_per_block,
vis=False, feature_vec=True))
else:
hog_features = get_hog_features(feature_image[:,:,hog_channel], orient,
pix_per_cell, cell_per_block, vis=False, feature_vec=True)
# Append features to list
img_features.append(hog_features)
# Return concatenated array of features
return np.concatenate(img_features)<jupyter_output><empty_output><jupyter_text>### Search windows function<jupyter_code>
def search_windows(img, windows, clf, scaler, color_space='RGB',
spatial_size=(32, 32), hist_bins=32,
hist_range=(0, 256), orient=9,
pix_per_cell=8, cell_per_block=2,
hog_channel=0):
# Create an empty list for the found windows
on_windows = []
# Iterate over all windows in the windows list
for window in windows:
# Extract the test window from original image
test_img = cv2.resize(img[window[0][1]:window[1][1], window[0][0]:window[1][0]], (64, 64))
# Extract features for that window using single_img_features()
features = single_img_features(test_img, color_space=color_space,
spatial_size=spatial_size, hist_bins=hist_bins,
orient=orient, pix_per_cell=pix_per_cell,
cell_per_block=cell_per_block,
hog_channel=hog_channel, spatial_feat=spatial_feat,
hist_feat=hist_feat, hog_feat=hog_feat)
test_features = scaler.transform(np.array(features).reshape(1, -1))
# Predict using your classifier
prediction = clf.predict(test_features)
#If positive (prediction == 1) then save the window
if prediction == 1:
on_windows.append(window)
# Return windows for positive detections
return on_windows
# RGB, HSV, LUV, HLS, YUV, YCrCb
color_space = 'YCrCb'
# HOG orientations
orient = 9
# Pix per cell
pix_per_cell = 8
# Cells per block
cell_per_block = 2
# Hog channel (1,2,3 or ALL)
hog_channel = "ALL"
# Dimension for spatial binning
spatial_size = (32, 32)
# Number of histogram bins
hist_bins = 32
spatial_feat = True
hist_feat = True
hog_feat = True
y_start_stop = [350, None]
hist_range = (0, 256)<jupyter_output><empty_output><jupyter_text>## 3. Training the classifier<jupyter_code># Get the features of cars and noncars
car_features = extract_features(vehs, color_space, spatial_size, hist_bins, orient, pix_per_cell,
cell_per_block, hog_channel)
noncar_features = extract_features(non_vehs, color_space, spatial_size, hist_bins, orient, pix_per_cell,
cell_per_block, hog_channel)
# Check length of extracted image paths
print(len(car_features))
print(len(noncar_features))
print(np.shape(car_features))
print(np.shape(noncar_features))
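# With the settings above, each sample yields 8460 features (matching the shape printed by this cell):
# spatial binning 32*32*3 = 3072, color histograms 3*32 = 96,
# HOG 3 channels * (7*7 blocks * 2*2 cells * 9 orientations) = 5292.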
y = np.hstack((np.ones(len(car_features)), np.zeros(len(noncar_features))))
print(y.shape)
X = np.vstack((car_features, noncar_features)).astype(np.float64)
print(X.shape)<jupyter_output>(17760, 8460)
<jupyter_text>#### Data normalization and classifier training<jupyter_code># Normalize training data
from sklearn.preprocessing import StandardScaler
X_Scaler = StandardScaler().fit(X)
X_scaled = X_Scaler.transform(X)
from sklearn.model_selection import train_test_split
rand_state = np.random.randint(0, 100)
# Split data in train/test data and shuffle it
X_train, X_test, y_train, y_test = train_test_split(X_scaled, y, test_size = 0.2, random_state = rand_state)
from sklearn.svm import LinearSVC
# Train support vector machine
svc = LinearSVC()
svc.fit(X_train, y_train)<jupyter_output><empty_output><jupyter_text>### Get the accuracy of the classifier<jupyter_code># Check the test accuracy of the linear support vector machine
svc.score(X_test, y_test)<jupyter_output><empty_output><jupyter_text>## 4. Implementation of sliding windows search (exemplary)<jupyter_code>image = mpimg.imread('test_images/test6.jpg')
draw_image = np.copy(image)
# Scale the image to the [0, 1] range since it's a .jpg (mpimg reads JPGs as 0-255 integers)
image = image.astype(np.float32)/255
# Search with three different window sizes
windows = slide_window(image, x_start_stop=[None, None], y_start_stop=y_start_stop,
xy_window=(100, 100), xy_overlap=(0.5, 0.5))
windows2 = slide_window(image, x_start_stop=[None, None], y_start_stop=y_start_stop,
xy_window=(200, 200), xy_overlap=(0.3, 0.3))
windows3 = slide_window(image, x_start_stop=[None, None], y_start_stop=y_start_stop,
xy_window=(64, 64), xy_overlap=(0.3, 0.3))
# Get the found windows that match the features as list
hot_windows = search_windows(image, windows, svc, X_Scaler, color_space=color_space,
spatial_size=spatial_size, hist_bins=hist_bins,
orient=orient, pix_per_cell=pix_per_cell,
cell_per_block=cell_per_block,
hog_channel=hog_channel)
hot_windows2 = search_windows(image, windows2, svc, X_Scaler, color_space=color_space,
spatial_size=spatial_size, hist_bins=hist_bins,
orient=orient, pix_per_cell=pix_per_cell,
cell_per_block=cell_per_block,
hog_channel=hog_channel)
hot_windows3 = search_windows(image, windows3, svc, X_Scaler, color_space=color_space,
spatial_size=spatial_size, hist_bins=hist_bins,
orient=orient, pix_per_cell=pix_per_cell,
cell_per_block=cell_per_block,
hog_channel=hog_channel)
# Draw the found windows that match the features in boxes
window_img = draw_boxes(draw_image, hot_windows, color=(0, 0, 255), thick=6)
window_img2 = draw_boxes(window_img, hot_windows2, color=(0, 0, 255), thick=6)
window_img3 = draw_boxes(window_img2, hot_windows3, color=(0, 0, 255), thick=6)
plt.imshow(window_img3)
# helper function for color conversion
def convert_color(img, conv='RGB2YCrCb'):
if conv == 'RGB2YCrCb':
return cv2.cvtColor(img, cv2.COLOR_RGB2YCrCb)
if conv == 'BGR2YCrCb':
return cv2.cvtColor(img, cv2.COLOR_BGR2YCrCb)
if conv == 'RGB2LUV':
return cv2.cvtColor(img, cv2.COLOR_RGB2LUV)
<jupyter_output><empty_output><jupyter_text>### 4.1 find cars function<jupyter_code># find_cars function as shown in the lesson
# function uses a scale-factor to search with different window sizes
# the function also replaces the window overlap parameter with a cells_per_step value
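# e.g. with pix_per_cell = 8 and cells_per_step = 2 the 64-px window advances 16 px per step,
# which corresponds to 75% overlap; scale > 1 shrinks the image, so the effective window grows.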
def find_cars(img, ystart, ystop, scale, svc, X_scaler, orient, pix_per_cell, cell_per_block,
spatial_size, hist_bins):
draw_img = np.copy(img)
img = img.astype(np.float32)/255
img_tosearch = img[ystart:ystop,:,:]
ctrans_tosearch = convert_color(img_tosearch, conv='RGB2YCrCb')
boxes = []
if scale != 1:
imshape = ctrans_tosearch.shape
ctrans_tosearch = cv2.resize(ctrans_tosearch,(np.int(imshape[1]/scale),(np.int(imshape[0]/scale))))
ch1 = ctrans_tosearch[:,:,0]
ch2 = ctrans_tosearch[:,:,1]
ch3 = ctrans_tosearch[:,:,2]
#Blocks and steps
nxblocks = (ch1.shape[1] // pix_per_cell) - cell_per_block + 1
nyblocks = (ch1.shape[0] // pix_per_cell) - cell_per_block + 1
nfeat_per_block = orient * cell_per_block**2
window = 64
nblocks_per_window = (window // pix_per_cell) - cell_per_block + 1
# Replacing overlapping with cells_per_step
cells_per_step = 2
nxsteps = (nxblocks - nblocks_per_window) // cells_per_step
nysteps = (nyblocks - nblocks_per_window) // cells_per_step
#get hog features
hog1 = get_hog_features(ch1, orient, pix_per_cell, cell_per_block, vis=False, feature_vec=False)
hog2 = get_hog_features(ch2, orient, pix_per_cell, cell_per_block, vis=False, feature_vec=False)
hog3 = get_hog_features(ch3, orient, pix_per_cell, cell_per_block, vis=False, feature_vec=False)
for xb in range(nxsteps):
for yb in range(nysteps):
ypos = yb * cells_per_step
xpos = xb * cells_per_step
# Extract hog features
hog_feat1 = hog1[ypos:ypos+nblocks_per_window, xpos:xpos+nblocks_per_window].ravel()
hog_feat2 = hog2[ypos:ypos+nblocks_per_window, xpos:xpos+nblocks_per_window].ravel()
hog_feat3 = hog3[ypos:ypos+nblocks_per_window, xpos:xpos+nblocks_per_window].ravel()
hog_features = np.hstack((hog_feat1, hog_feat2, hog_feat3))
x_left = xpos * pix_per_cell
y_top = ypos * pix_per_cell
# Extract img patch
subimg = cv2.resize(ctrans_tosearch[y_top:y_top+window, x_left:x_left+window],(64,64))
spatial_features = bin_spatial(subimg, size=spatial_size)
hist_features = color_hist(subimg, nbins=hist_bins)
#test_features2 = np.concatenate((spatial_features, hist_features, hog_features))
#test_features = X_scaler.transform(np.array(test_features2)).reshape(1, -1)
test_features = X_scaler.transform(np.hstack((spatial_features, hist_features,
hog_features)).reshape(1, -1))
test_prediction = svc.predict(test_features)
if test_prediction == 1:
xbox_left = np.int(x_left * scale)
ytop_draw = np.int(y_top * scale)
win_draw = np.int(window*scale)
cv2.rectangle(draw_img,(xbox_left, ytop_draw+ystart),(xbox_left+win_draw, ytop_draw+
win_draw+ystart),(0,0,255),6)
boxes.append(((xbox_left, ytop_draw+ystart),(xbox_left+win_draw, ytop_draw+
win_draw+ystart)))
return draw_img, boxes
ystart = 400
ystop = 656
scale = 1.5
img_test = mpimg.imread('test_images/test1.jpg')
out_img, boxes = find_cars(img_test, ystart, ystop, scale, svc, X_Scaler, orient, pix_per_cell, cell_per_block, spatial_size, hist_bins)
plt.imshow(out_img)
print(boxes)<jupyter_output>[((48, 424), (144, 520)), ((792, 400), (888, 496)), ((816, 400), (912, 496)), ((840, 400), (936, 496)), ((864, 400), (960, 496)), ((1056, 400), (1152, 496)), ((1128, 400), (1224, 496)), ((1128, 424), (1224, 520)), ((1152, 400), (1248, 496)), ((1152, 424), (1248, 520))]
<jupyter_text>### 4.2 Add heatmap to cut out false-positives<jupyter_code># Heatmap functions / False positives
threshold = 1
heat = np.zeros_like(img_test[:,:,0]).astype(np.float)
def add_heat(heatmap, bbox_list):
for box in bbox_list:
heatmap[box[0][1]:box[1][1], box[0][0]:box[1][0]] += 1
return heatmap
def apply_threshold(heatmap, threshold):
heatmap[heatmap <= threshold] = 0
return heatmap
def draw_labeled_boxes(img, labels):
for car_number in range(1, labels[1]+1):
nonzero = (labels[0] == car_number).nonzero()
nonzeroy = np.array(nonzero[0])
nonzerox = np.array(nonzero[1])
bbox = ((np.min(nonzerox), np.min(nonzeroy)), (np.max(nonzerox), np.max(nonzeroy)))
#print(bbox[0], bbox[1])
#cv2.rectangle(img, bbox[0], bbox[1], (0,0,255), 6)
img = draw_boxes(img, [bbox], color=(0, 0, 255), thick=6)
return img
heat = add_heat(heat, boxes)
heat= apply_threshold(heat, threshold)
heatmap = np.clip(heat, 0, 255)
from scipy.ndimage.measurements import label
labels = label(heatmap)
print(labels[1], 'vehicles found')
plt.imshow(labels[0], cmap='hot')
draw_img2 = draw_labeled_boxes(np.copy(img_test), labels)
plt.imshow(draw_img2)
# pulling everything together
threshold = 1
def vehicle_detection(image, ystart, ystop, scale, svc, X_Scaler, orient,pix_per_cell, cell_per_block, spatial_size,
hist_bins, threshold):
#find cars in image
out_img, boxes = find_cars(image, ystart, ystop, scale, svc, X_Scaler, orient,
pix_per_cell, cell_per_block, spatial_size, hist_bins)
heat = np.zeros_like(img_test[:,:,0]).astype(np.float)
box_list = boxes
heat = add_heat(heat, box_list)
heat= apply_threshold(heat, threshold)
heatmap = np.clip(heat, 0, 255)
labels = label(heatmap)
draw_img = draw_labeled_boxes(np.copy(image), labels)
return draw_img
draw_img = vehicle_detection(np.copy(img_test), ystart, ystop, scale, svc, X_Scaler, orient,pix_per_cell, cell_per_block, spatial_size,
hist_bins, threshold)
plt.imshow(draw_img)
from moviepy.editor import VideoFileClip
from IPython.display import HTML
def process_image(image):
#image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
processed_img = vehicle_detection(image, ystart, ystop, scale, svc, X_Scaler, orient,pix_per_cell, cell_per_block, spatial_size,
hist_bins, threshold)
return processed_img
#smooth the pipeline outcome
def find_cars_multiscale(img, ystart_ystop_scale, svc, X_scaler, orient, pix_per_cell, cell_per_block,
spatial_size, hist_bins):
draw_img = np.copy(img)
img = img.astype(np.float32)/255
for (ystart, ystop, scale) in ystart_ystop_scale:
img_tosearch = img[ystart:ystop,:,:]
ctrans_tosearch = convert_color(img_tosearch, conv='RGB2YCrCb')
boxes = []
if scale != 1:
imshape = ctrans_tosearch.shape
ctrans_tosearch = cv2.resize(ctrans_tosearch,(np.int(imshape[1]/scale),(np.int(imshape[0]/scale))))
ch1 = ctrans_tosearch[:,:,0]
ch2 = ctrans_tosearch[:,:,1]
ch3 = ctrans_tosearch[:,:,2]
# Blocks and steps
nxblocks = (ch1.shape[1] // pix_per_cell) - cell_per_block + 1
nyblocks = (ch1.shape[0] // pix_per_cell) - cell_per_block + 1
nfeat_per_block = orient * cell_per_block**2
window = 64
nblocks_per_window = (window // pix_per_cell) - cell_per_block + 1
cells_per_step = 2
nxsteps = (nxblocks - nblocks_per_window) // cells_per_step
nysteps = (nyblocks - nblocks_per_window) // cells_per_step
# Extract hog features
hog1 = get_hog_features(ch1, orient, pix_per_cell, cell_per_block, vis=False, feature_vec=False)
hog2 = get_hog_features(ch2, orient, pix_per_cell, cell_per_block, vis=False, feature_vec=False)
hog3 = get_hog_features(ch3, orient, pix_per_cell, cell_per_block, vis=False, feature_vec=False)
for xb in range(nxsteps):
for yb in range(nysteps):
ypos = yb * cells_per_step
xpos = xb * cells_per_step
# Extract hog features
hog_feat1 = hog1[ypos:ypos+nblocks_per_window, xpos:xpos+nblocks_per_window].ravel()
hog_feat2 = hog2[ypos:ypos+nblocks_per_window, xpos:xpos+nblocks_per_window].ravel()
hog_feat3 = hog3[ypos:ypos+nblocks_per_window, xpos:xpos+nblocks_per_window].ravel()
hog_features = np.hstack((hog_feat1, hog_feat2, hog_feat3))
x_left = xpos * pix_per_cell
y_top = ypos * pix_per_cell
#extract img patch
subimg = cv2.resize(ctrans_tosearch[y_top:y_top+window, x_left:x_left+window],(64,64))
spatial_features = bin_spatial(subimg, size=spatial_size)
hist_features = color_hist(subimg, nbins=hist_bins)
#test_features2 = np.concatenate((spatial_features, hist_features, hog_features))
#test_features = X_scaler.transform(np.array(test_features2)).reshape(1, -1)
test_features = X_scaler.transform(np.hstack((spatial_features, hist_features,
hog_features)).reshape(1, -1))
test_prediction = svc.predict(test_features)
if test_prediction == 1:
xbox_left = np.int(x_left * scale)
ytop_draw = np.int(y_top * scale)
win_draw = np.int(window*scale)
cv2.rectangle(draw_img,(xbox_left, ytop_draw+ystart),(xbox_left+win_draw, ytop_draw+
win_draw+ystart),(0,0,255),6)
boxes.append(((xbox_left, ytop_draw+ystart),(xbox_left+win_draw, ytop_draw+
win_draw+ystart)))
return draw_img, boxes
from collections import deque
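# Keep the heatmaps of the last 3 frames; summing them before thresholding suppresses
# detections that only show up in a single frame (likely false positives).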
heatmaps = deque(maxlen=3)
threshold = 2
ystart_ystop_scale = [(400, 560, 1.5), (450, 650, 1.8), (500, 700, 2.5)]
def vehicle_detection_smooth(image, ystart_ystop_scale, svc, X_Scaler, orient,pix_per_cell,
cell_per_block, spatial_size, hist_bins, threshold):
# find cars in image
out_img, boxes = find_cars_multiscale(image, ystart_ystop_scale, svc, X_Scaler, orient,
pix_per_cell, cell_per_block, spatial_size, hist_bins)
heat = np.zeros_like(img_test[:,:,0]).astype(np.float)
box_list = boxes
    current_heatmap = add_heat(heat, box_list)
heatmaps.append(current_heatmap)
heatmap_sum = sum(heatmaps)
heat = apply_threshold(heatmap_sum, threshold)
heatmap = np.clip(heat, 0, 255)
labels = label(heatmap)
draw_img = draw_labeled_boxes(np.copy(image), labels)
return draw_img
draw_img = vehicle_detection_smooth(np.copy(img_test), ystart_ystop_scale, svc, X_Scaler, orient,pix_per_cell, cell_per_block, spatial_size,
hist_bins, threshold)
plt.imshow(draw_img)<jupyter_output><empty_output><jupyter_text>## 5. VehicleDetector Class Object**This class object combines the prework described above in one class object**<jupyter_code># To make thing easier the following class object, comprising the relevant functions to detect vehicles, is created.
class VehicleDetector:
def __init__(self):
# Init the class using the parameters as for the single functions before
self.color_space = 'YCrCb'
self.orient = 9
self.pix_per_cell = 8
self.cell_per_block = 2
self.hog_channel = "ALL"
self.spatial_size = (32, 32)
self.hist_bins = 32
self.spatial_feat = True
self.hist_feat = True
self.hog_feat = True
# Current heatmap
self.heatmap = None
# Heatmaps of last 3 frames
self.heat_images = deque(maxlen=3)
self.frame_count = 0
self.full_frame_processing_interval = 10
# Xstart
self.xstart = 600
# Various Scales
self.ystart_ystop_scale = None
# Kernal For Dilation
self.kernel = np.ones((50, 50))
# Threshold for Heatmap
self.threshold = None
self.X_scaler = X_Scaler
self.svc = svc
def find_cars_smooth(self, img):
X_scaler = self.X_scaler
orient = self.orient
pix_per_cell = self.pix_per_cell
cell_per_block = self.cell_per_block
spatial_size = self.spatial_size
hist_bins = self.hist_bins
svc = self.svc
box_list = []
draw_img = np.copy(img)
img = img.astype(np.float32) / 255
if self.frame_count % self.full_frame_processing_interval == 0:
mask = np.ones_like(img[:, :, 0])
else:
mask = np.sum(np.array(self.heat_images), axis=0)
mask[(mask > 0)] = 1
mask = cv2.dilate(mask, self.kernel, iterations=1)
self.frame_count += 1
for (ystart, ystop, scale) in self.ystart_ystop_scale:
nonzero = mask.nonzero()
nonzeroy = np.array(nonzero[0])
nonzerox = np.array(nonzero[1])
if len(nonzeroy) != 0:
ystart = max(np.min(nonzeroy), ystart)
ystop = min(np.max(nonzeroy), ystop)
if len(nonzeroy) != 0:
xstart = max(np.min(nonzerox), self.xstart)
xstop = np.max(nonzerox)
else:
continue
if xstop <= xstart or ystop <= ystart:
continue
img_tosearch = img[ystart:ystop, xstart:xstop, :]
ctrans_tosearch = convert_color(img_tosearch, conv='RGB2YCrCb')
if scale != 1:
imshape = ctrans_tosearch.shape
ys = np.int(imshape[1] / scale)
xs = np.int(imshape[0] / scale)
if (ys < 1 or xs < 1):
continue
ctrans_tosearch = cv2.resize(ctrans_tosearch, (np.int(imshape[1] / scale), np.int(imshape[0] / scale)))
if ctrans_tosearch.shape[0] < 64 or ctrans_tosearch.shape[1] < 64:
continue
ch1 = ctrans_tosearch[:, :, 0]
ch2 = ctrans_tosearch[:, :, 1]
ch3 = ctrans_tosearch[:, :, 2]
nxblocks = (ch1.shape[1] // pix_per_cell) - 1
nyblocks = (ch1.shape[0] // pix_per_cell) - 1
nfeat_per_block = orient * cell_per_block ** 2
window = 64
nblocks_per_window = (window // pix_per_cell) - 1
cells_per_step = 2
nxsteps = (nxblocks - nblocks_per_window) // cells_per_step
nysteps = (nyblocks - nblocks_per_window) // cells_per_step
# Compute individual channel HOG features for the entire image
hog1 = get_hog_features(ch1, orient, pix_per_cell, cell_per_block, feature_vec=False)
hog2 = get_hog_features(ch2, orient, pix_per_cell, cell_per_block, feature_vec=False)
hog3 = get_hog_features(ch3, orient, pix_per_cell, cell_per_block, feature_vec=False)
for xb in range(nxsteps + 1):
for yb in range(nysteps + 1):
ypos = yb * cells_per_step
xpos = xb * cells_per_step
# Extract HOG for this patch
hog_feat1 = hog1[ypos:ypos + nblocks_per_window, xpos:xpos + nblocks_per_window].ravel()
hog_feat2 = hog2[ypos:ypos + nblocks_per_window, xpos:xpos + nblocks_per_window].ravel()
hog_feat3 = hog3[ypos:ypos + nblocks_per_window, xpos:xpos + nblocks_per_window].ravel()
hog_features = np.hstack((hog_feat1, hog_feat2, hog_feat3))
xleft = xpos * pix_per_cell
ytop = ypos * pix_per_cell
# Extract the image patch
subimg = ctrans_tosearch[ytop:ytop + window, xleft:xleft + window]
# Get color features
spatial_features = bin_spatial(subimg, size=spatial_size)
hist_features = color_hist(subimg, nbins=hist_bins)
# Scale features and make a prediction
test_features = X_scaler.transform(
np.hstack((spatial_features, hist_features, hog_features)).reshape(1, -1))
# test_features = X_scaler.transform(np.hstack((shape_feat, hist_feat)).reshape(1, -1))
test_prediction = svc.predict(test_features)
if test_prediction == 1:
xbox_left = xstart + np.int(xleft * scale)
ytop_draw = np.int(ytop * scale)
win_draw = np.int(window * scale)
box_list.append(
((xbox_left, ytop_draw + ystart), (xbox_left + win_draw, ytop_draw + win_draw + ystart)))
# Add heat to each box in box list
self.add_heatmap_and_threshold(draw_img, box_list, self.threshold)
# Find final boxes from heatmap using label function
labels = label(self.heatmap)
VehicleDetector.draw_labeled_bboxes_smooth(draw_img, labels)
return draw_img
def add_heatmap_and_threshold(self, draw_img, bbox_list, threshold):
heatmap = np.zeros_like(draw_img[:, :, 0]).astype(np.float)
for box in bbox_list:
# Add += 1 for all pixels inside each bbox
heatmap[box[0][1]:box[1][1], box[0][0]:box[1][0]] += 1
self.heat_images.append(heatmap)
self.heatmap = np.sum(np.array(self.heat_images), axis=0)
self.heatmap[self.heatmap <= threshold] = 0
@staticmethod
def draw_labeled_bboxes_smooth(img, labels):
# Iterate through all detected cars
for car_number in range(1, labels[1] + 1):
# Find pixels with each car_number label value
nonzero = (labels[0] == car_number).nonzero()
# Identify x and y values
nonzeroy = np.array(nonzero[0])
nonzerox = np.array(nonzero[1])
# Box based on min/max x and y
bbox = ((np.min(nonzerox), np.min(nonzeroy)), (np.max(nonzerox), np.max(nonzeroy)))
# Draw the box on the image
cv2.rectangle(img, bbox[0], bbox[1], (0, 0, 255), 6)<jupyter_output><empty_output><jupyter_text>## 6. Combining it with my last project submission<jupyter_code>import pickle
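# objs.pkl is assumed to hold the camera matrix (mtx) and distortion coefficients (dist)
# saved during the previous lane-finding project.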
with open('objs.pkl', 'rb') as f:
mtx, dist = pickle.load(f)
def undistort_img(img, mtx, dist):
img_undist = cv2.undistort(img, mtx, dist, None, mtx)
return img_undist
def grayscale(img):
img_gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
return img_gray
def SobelOp(img, kernelSize, thresh_min, thresh_max):
sobelx = cv2.Sobel(img, cv2.CV_64F, 1, 0, ksize = kernelSize)
sobely = cv2.Sobel(img, cv2.CV_64F, 0, 1, ksize = kernelSize)
abs_sobelx = np.absolute(sobelx)
scaled_sobel = np.uint8(255*abs_sobelx/np.max(abs_sobelx))
sxbinary = np.zeros_like(scaled_sobel)
sxbinary[(scaled_sobel > thresh_min) & (scaled_sobel <= thresh_max)] = 1
return scaled_sobel, sxbinary
def HLSconv(img, s_thresh_min, s_thresh_max):
# convert to hls
hls = cv2.cvtColor(img, cv2.COLOR_RGB2HLS)
s_chan = hls[:,:,2]
s_binary = np.zeros_like(s_chan)
s_binary[(s_chan > s_thresh_min) & (s_chan <= s_thresh_max)] = 1
return s_binary
def StackFilters(sxbinary, s_binary):
color_binary = np.dstack((np.zeros_like(sxbinary), sxbinary, s_binary))*255
comb_binary = np.zeros_like(sxbinary)
comb_binary[(s_binary == 1) | (sxbinary == 1)] = 1
return comb_binary
def IMGwarp(img):
src = np.float32([[740, 460], [1050, 680], [280, 680], [580, 460]])
dst = np.float32([[950, 0], [950, 720], [370, 720], [370, 0]])
img_size = (img.shape[1], img.shape[0])
M = cv2.getPerspectiveTransform(src, dst)
Minv = cv2.getPerspectiveTransform(dst, src)
img_warped = cv2.warpPerspective(img, M, img_size, flags=cv2.INTER_LINEAR)
return M, Minv, img_warped
def maskImage(img_w):
mask = np.zeros_like(img_w)
ignore_mask_color = 255
ignore_mask_color2 = 0
vertices = np.array([[(200, 720),(300, 200), (1100, 200), (1250,720)]], dtype=np.int32)
vertices2 = np.array([[(600, 720),(675, 450), (725, 450), (800,720)]], dtype=np.int32)
cv2.fillPoly(mask, vertices, ignore_mask_color)
cv2.fillPoly(mask, vertices2, ignore_mask_color2)
img_masked = cv2.bitwise_and(img_w, mask)
return img_masked
def FindLanes(img_warped):
histogram = np.sum(img_warped[img_warped.shape[0]//2:,:], axis=0)
out_img = np.dstack((img_warped, img_warped, img_warped))*255
midpoint = np.int(histogram.shape[0]/2)
leftx_base = np.argmax(histogram[:midpoint])
rightx_base = np.argmax(histogram[midpoint:])+midpoint
offset_shadow = 100
detected_left = False
detected_right = False
if (leftx_base > 300) & (leftx_base < 500):
detected_left = True
if (rightx_base > 800) & (rightx_base < 1000):
detected_right = True
if detected_left == False:
leftx_base = np.argmax(histogram[:midpoint-offset_shadow])
if detected_right == False:
rightx_base = np.argmax(histogram[midpoint+offset_shadow:])+midpoint
return midpoint, leftx_base, rightx_base, detected_left, out_img
def distanceMid(img2, right_fitx, left_fitx):
camera_position = img2.shape[1]/2
lane_center = (right_fitx[719] + left_fitx[719])/2
center_offset_pixels = abs(camera_position - lane_center)
center_offset = center_offset_pixels*xm_per_pix
return center_offset
def WindowSearch(img_warped, n_win, marg, minpix, leftx_base, rightx_base, out_img):
nwindows = n_win #9
window_height = np.int(img_warped.shape[0]/nwindows)
nonzero = img_warped.nonzero()
nonzeroy = np.array(nonzero[0])
nonzerox = np.array(nonzero[1])
leftx_current = leftx_base
rightx_current = rightx_base
margin = marg
minpix = minpix
left_lane_inds = []
right_lane_inds = []
for window in range(nwindows):
win_y_low = img_warped.shape[0] - (window + 1) * window_height
win_y_high = img_warped.shape[0] - window * window_height
win_x_left_low = leftx_current - margin
win_x_left_high = leftx_current + margin
win_x_right_low = rightx_current - margin
win_x_right_high = rightx_current + margin
cv2.rectangle(out_img, (win_x_left_low, win_y_low),(win_x_left_high, win_y_high), (0, 255, 0), 2)
cv2.rectangle(out_img, (win_x_right_low, win_y_low),(win_x_right_high, win_y_high), (0, 255, 0), 2)
good_left_inds = ((nonzeroy >= win_y_low) & (nonzeroy < win_y_high) & (nonzerox >= win_x_left_low) &
(nonzerox < win_x_left_high)).nonzero()[0]
good_right_inds = ((nonzeroy >= win_y_low) & (nonzeroy < win_y_high) & (nonzerox >= win_x_right_low) &
(nonzerox < win_x_right_high)).nonzero()[0]
left_lane_inds.append(good_left_inds)
right_lane_inds.append(good_right_inds)
if (len(good_left_inds) > minpix) & (len(good_right_inds) > minpix):
leftx_current = np.int(np.mean(nonzerox[good_left_inds]))
rightx_current = np.int(np.mean(nonzerox[good_right_inds]))
# concatenate found lane indices
left_lane_inds = np.concatenate(left_lane_inds)
right_lane_inds = np.concatenate(right_lane_inds)
# extract the pixel positions
left_x = nonzerox[left_lane_inds]
left_y = nonzeroy[left_lane_inds]
right_x = nonzerox[right_lane_inds]
right_y = nonzeroy[right_lane_inds]
# fit second order polynomial
left_fit = np.polyfit(left_y, left_x, 2)
right_fit = np.polyfit(right_y, right_x, 2)
# Visualize sliding windows
ploty = np.linspace(0, img_warped.shape[0]-1, img_warped.shape[0])
left_fitx = left_fit[0]*ploty**2 + left_fit[1] * ploty + left_fit[2]
right_fitx = right_fit[0]*ploty**2 + right_fit[1] * ploty + right_fit[2]
return left_lane_inds, right_lane_inds, left_fitx, right_fitx, ploty, left_x, left_y, right_x, right_y
def curvature(left_x, left_y, right_x, right_y, ploty):
lane_length = 30
lane_width = 3.7
y_eval = np.max(ploty)
ym_per_pix = lane_length/720
xm_per_pix = lane_width/580
left_fit_cr = np.polyfit(left_y * ym_per_pix, left_x * xm_per_pix, 2)
right_fit_cr = np.polyfit(right_y * ym_per_pix, right_x * xm_per_pix, 2)
left_curverad_cr = ((1 + (2 * left_fit_cr[0] * y_eval*ym_per_pix + left_fit_cr[1]) ** 2) ** 1.5) / np.absolute(2 * left_fit_cr[0])
right_curverad_cr = ((1 + (2 * right_fit_cr[0] * y_eval*ym_per_pix + right_fit_cr[1]) ** 2) ** 1.5) / np.absolute(2 * right_fit_cr[0])
return left_curverad_cr, right_curverad_cr
def lanePosition():
left_pos = (midpoint - leftx_base) * xm_per_pix
right_pos = (rightx_base - midpoint) * xm_per_pix
line_base_pos = np.min([left_pos, right_pos])
return left_pos, right_pos, line_base_pos
def warpBack(img_warped, left_fitx, right_fitx, ploty, Minv, image_undist):
warp_zero = np.zeros_like(img_warped).astype(np.uint8)
color_warp = np.dstack((warp_zero, warp_zero, warp_zero))
# Recast the points
pts_left = np.array([np.transpose(np.vstack([left_fitx, ploty]))])
pts_right = np.array([np.flipud(np.transpose(np.vstack([right_fitx, ploty])))])
pts = np.hstack((pts_left, pts_right))
# Draw the lane
cv2.fillPoly(color_warp, np.int_([pts]), (0, 255, 0))
# Warp back to original image
newwarp = cv2.warpPerspective(color_warp, Minv, (image_undist.shape[1], image_undist.shape[0]))
result = cv2.addWeighted(image_undist, 1, newwarp, 0.3, 0)
return result
def select_yellow(image_in):
image_in2 = cv2.cvtColor(image_in, cv2.COLOR_BGR2RGB)
hsv = cv2.cvtColor(image_in2, cv2.COLOR_RGB2HSV)
lower = np.array([20,60,60])
upper = np.array([38,174, 250])
mask = cv2.inRange(hsv, lower, upper)
return mask
def select_white(image_in):
lower = np.array([202,202,202])
upper = np.array([255,255,255])
mask = cv2.inRange(image_in, lower, upper)
return mask
def comb_thresh(image_in):
yellow = select_yellow(image_in)
white = select_white(image_in)
combined_binary = np.zeros_like(yellow)
combined_binary[(yellow >= 1) | (white >= 1)] = 1
return combined_binary
def pipeline3(image, mtx, dist, kernelSize):
out1 = undistort_img(image, mtx, dist)
out2 = comb_thresh(out1)
M, Minv, out3 = IMGwarp(out2)
mid, leftxbase, rightxbase, detected, out_img = FindLanes(out3)
left_inds, right_inds, leftfit, rightfit, plotyy, leftx, lefty, rightx, righty = WindowSearch(out3,
9, 100, 10,
leftxbase, rightxbase,
out_img)
left_curverad_cr, right_curverad_cr = curvature(leftx, lefty, rightx, righty, plotyy)
left_fitx = leftfit
right_fitx = rightfit
global l_fit_buffer
global r_fit_buffer
global old_img_lines
if old_img_lines is None:
old_img_lines = out2
# Compare new frame to previous frame
ret = cv2.matchShapes(old_img_lines, out2, 1, 0.0)
if ret < 50:
old_img_lines = out2
if l_fit_buffer is None:
l_fit_buffer = np.array([left_fitx])
if r_fit_buffer is None:
r_fit_buffer = np.array([right_fitx])
l_fit_buffer = np.append(l_fit_buffer, [left_fitx], axis=0)[-filter_size:]
r_fit_buffer = np.append(r_fit_buffer, [right_fitx], axis=0)[-filter_size:]
# Compute the mean
l_fit_mean = np.mean(l_fit_buffer, axis=0)
r_fit_mean = np.mean(r_fit_buffer, axis=0)
result2 = warpBack(out3, leftfit, rightfit, plotyy, Minv, out1)
return result2
vehicleDetector = VehicleDetector()
vehicleDetector.ystart_ystop_scale = [(380, 480, 1), (400, 600, 1.5), (500, 700, 2.5)]
vehicleDetector.threshold = 2
image3 = mpimg.imread('test_images/test1.jpg')
output3 = vehicleDetector.find_cars_smooth(image3)
plt.figure()
plt.imshow(output3)
plt.title('Car Positions')
filter_size = 10
old_img_lines = None
l_fit_buffer = None
r_fit_buffer = None
def process_image_lanes(image):
image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
processed_img = pipeline3(image, mtx, dist, 5)
return processed_img
#combine both
vehicleDetector = VehicleDetector()
vehicleDetector.ystart_ystop_scale = [(380, 480, 1), (400, 600, 1.5), (500, 700, 2.5)]
vehicleDetector.threshold = 2
filter_size = 10
old_img_lines = None
l_fit_buffer = None
r_fit_buffer = None
def process_image_comb(image):
image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
processed_img = pipeline3(image, mtx, dist, 5)
processed_img = cv2.cvtColor(processed_img, cv2.COLOR_RGB2BGR)
processed_img2 = vehicleDetector.find_cars_smooth(processed_img)
return processed_img2
project_video_output = './output_videos/project_video_output_comb.mp4'
clip9000 = VideoFileClip("./project_video.mp4")
white_clip9000 = clip9000.fl_image(process_image_comb)
%time white_clip9000.write_videofile(project_video_output, audio=False)<jupyter_output>[MoviePy] >>>> Building video ./output_videos/project_video_output_comb.mp4
[MoviePy] Writing video ./output_videos/project_video_output_comb.mp4
| no_license | /CarND-Vehicle-Detection-master-P5.ipynb | azumf/CarND-Vehicle-Detection-P5 | 16 |
<jupyter_start><jupyter_text># Cross-validation grid search (cross_val_score), p. 324+<jupyter_code>from sklearn.model_selection import cross_val_score, train_test_split
from sklearn.svm import SVC
from sklearn.datasets import load_iris
import numpy as np
# Recreate the iris train+validation / test split that the rest of this notebook assumes
# (the random_state used originally is unknown; 0 is a placeholder)
iris = load_iris()
X_trainval, X_test, y_trainval, y_test = train_test_split(iris.data, iris.target, random_state=0)
best_score = 0
for gamma in [0.001, 0.01, 0.1, 1, 10, 100]:
for C in [0.001, 0.01, 0.1, 1, 10, 100]:
        # train an SVC for each combination of the parameters
svm = SVC(gamma=gamma, C=C)
scores = cross_val_score(svm, X_trainval, y_trainval, cv=5)
# print(scores)
score=np.mean(scores)
        if score > best_score: # if the score is higher, keep it together with the parameters
best_score = score
best_parameters = {'C': C, 'gamma': gamma}
# Rebuild the model on the combined training and validation sets, then evaluate it on the test set
svm = SVC(**best_parameters) # **best_parameters : SVC(C=C, gamma=gamma) -> SVC(C=10, gamma=0.001)
svm.fit(X_trainval, y_trainval)
test_score = svm.score(X_test, y_test)
print("Best score on the validation sets: {:.2f}".format(best_score))
print("Best parameters: ", best_parameters)
print("Test set score with the best parameters: {:.2f}".format(test_score))
from sklearn.model_selection import GridSearchCV
param_grid = {'C':[0.001, 0.01, 0.1, 1, 10, 100],'gamma':[0.001, 0.01, 0.1, 1, 10, 100]}
# Let GridSearchCV do the looping to find the best parameter values
gsc = GridSearchCV(SVC(), param_grid, cv=5, return_train_score=True)
gsc.fit(X_trainval, y_trainval) # (6 values * 6 values * 5 folds)
print(gsc.best_score_)
print(gsc.best_params_)
print(gsc.score(X_test, y_test))
import pandas as pd
results = pd.DataFrame(gsc.cv_results_)
# results.head()
results.iloc[:, 4:].head()
import mglearn
# results['mean_test_score'] # 36 values
scores = results['mean_test_score'].values.reshape(6,6)
# print(scores)
mglearn.tools.heatmap(scores, xlabel='gamma', ylabel='C',
yticklabels=param_grid['C'],
                      xticklabels=param_grid['gamma'])
param_grid = [{'kernel': ['rbf'], 'C': [0.001, 0.01, 0.1, 1, 10, 100],
'gamma': [0.001, 0.01, 0.1, 1, 10, 100]},
{'kernel': ['linear'],
'C': [0.001, 0.01, 0.1, 1, 10, 100]}]
grid_search = GridSearchCV(SVC(), param_grid, cv=5,
return_train_score=True)
grid_search.fit(X_trainval, y_trainval)
print("Best parameters: {}".format(grid_search.best_params_))
print("Best cross-validation score: {:.2f}".format(grid_search.best_score_))
# grid_search.cv_results_
pd.DataFrame(grid_search.cv_results_).iloc[:, 4:].T.head()
param_grid = {'C':[0.001, 0.01, 0.1, 1, 10, 100],'gamma':[0.001, 0.01, 0.1, 1, 10, 100]}
gsc = GridSearchCV(SVC(), param_grid, cv=5, return_train_score=True)
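# Nested cross-validation: the outer cross_val_score below splits the data 5 times, and within
# each outer split GridSearchCV runs its own 5-fold search over the 6*6 grid to pick C and gamma.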
scores = cross_val_score(gsc, iris.data, iris.target, cv=5)
print(scores, scores.mean())<jupyter_output>[0.96666667 1. 0.96666667 0.96666667 1. ] 0.9800000000000001
| no_license | /머신러닝/l.evalueation(모델평가, 성능 향상).ipynb | chayoonjung/yun-python | 1 |
<jupyter_start><jupyter_text>Create an array<jupyter_code>import numpy as np  # import missing in this excerpt; added so the cells below run
data = np.arange(10)
print (data.shape)
data
data = data.reshape(2,5)
print (data.shape)
data
data_1 = np.ones([3,3])
print (type(data_1))
print (data_1)
data_1 = np.zeros([3,3])
print (data_1)
data = np.eye(3,3)
data
data =np.diag((1,2,3))
data
data = np.empty([3,3])
data
data = np.random.rand(3,3)
print (data)
data = np.array([[1,2,3],[2,3,4],[3,4,5]])
data
data_list = list(range(9))
data = np.array(data_list).reshape(3,3)
data<jupyter_output><empty_output><jupyter_text>Matrix Operation<jupyter_code>a = np.arange(3)
print (a)
a.shape
b = a.T
print (b)
b.shape
a = a.reshape(1,3) # row first, column second
print (a)
a.shape
b = a.T
print (b)
b.shape
c = np.dot(b,a)
c
c = b @ a
c
c = c + np.ones([3,3])
c
c ** 2
d = c * c
d
a = np.random.rand(3,3)
a
all_max = a.max()
x_max = a.max(axis = 0) # axis selects the dimension to collapse: axis=0 gives the max of each column
y_max = a.max(axis = 1)
print (all_max,x_max,y_max)
all_max_arg = a.argmax()
x_max_arg = a.argmax(axis = 0)
print (all_max_arg, x_max_arg)
a = np.random.rand(12)
a
ind = a.argsort()
ind
a.sort()
a<jupyter_output><empty_output><jupyter_text>Indexing, Slicing and Iterating<jupyter_code>b= a[:,2]
b
for row in a:
print (row)
for element in a.flat:
print (element)
a = np.random.rand(4,4)
a
b = a.reshape(8,-1)
b<jupyter_output><empty_output><jupyter_text>Stacking together different arrays<jupyter_code>a = np.random.rand(2,2)
b = np.random.rand(2,2)
print (a,b)
np.vstack((a,b)) # vertical stack
np.hstack((a,b)) # horizontal stack<jupyter_output><empty_output><jupyter_text>Copies and Views<jupyter_code>a = np.arange(12)
# No copy at all, different names, but share same memory address
# shape of a and data are changable via b
b= a
# Shallow copy: shape of a doesn't change and data of a changes
c = a.view()
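# Quick check: because b is just another name for a, b[0] = 99 would also set a[0] to 99;
# writing through the view c changes a's data too, but reshaping c does not change a's shape.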
# Deep copy: a's shape and data don't change
d= a.copy()<jupyter_output><empty_output><jupyter_text>Indexing with Boolean Arrays<jupyter_code>a = np.random.rand(3,3)
a
a[[0,2],[1,2]]
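# Note: the line above is integer (fancy) indexing, selecting elements (0,1) and (2,2);
# the line below uses a boolean mask to keep only the entries greater than 0.5.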
a[a>0.5]<jupyter_output><empty_output><jupyter_text>Linear Algebra<jupyter_code>a = np.array([[1.0, 2.0], [3.0, 4.0]])
np.linalg.inv(a)
np.linalg.eig(a)
<jupyter_output><empty_output><jupyter_text>Questions<jupyter_code>data = np.eye(3,3)
data
np.nonzero(data)
np.any(data)<jupyter_output><empty_output>
| permissive | /Numpy.ipynb | yuzhipeter/Data_Structure_Numpy_Pandas | 8 |
<jupyter_start><jupyter_text># Read in the data<jupyter_code>import pandas as pd
import numpy
import re
data_files = [
"ap_2010.csv",
"class_size.csv",
"demographics.csv",
"graduation.csv",
"hs_directory.csv",
"sat_results.csv"
]
data = {}
for f in data_files:
d = pd.read_csv("schools/{0}".format(f))
data[f.replace(".csv", "")] = d<jupyter_output><empty_output><jupyter_text># Read in the surveys<jupyter_code>all_survey = pd.read_csv("schools/survey_all.txt", delimiter="\t", encoding='windows-1252')
d75_survey = pd.read_csv("schools/survey_d75.txt", delimiter="\t", encoding='windows-1252')
survey = pd.concat([all_survey, d75_survey], axis=0)
survey["DBN"] = survey["dbn"]
survey_fields = [
"DBN",
"rr_s",
"rr_t",
"rr_p",
"N_s",
"N_t",
"N_p",
"saf_p_11",
"com_p_11",
"eng_p_11",
"aca_p_11",
"saf_t_11",
"com_t_11",
"eng_t_11",
"aca_t_11",
"saf_s_11",
"com_s_11",
"eng_s_11",
"aca_s_11",
"saf_tot_11",
"com_tot_11",
"eng_tot_11",
"aca_tot_11",
]
survey = survey.loc[:,survey_fields]
data["survey"] = survey<jupyter_output><empty_output><jupyter_text># Add DBN columns<jupyter_code>data["hs_directory"]["DBN"] = data["hs_directory"]["dbn"]
def pad_csd(num):
string_representation = str(num)
if len(string_representation) > 1:
return string_representation
else:
return "0" + string_representation
data["class_size"]["padded_csd"] = data["class_size"]["CSD"].apply(pad_csd)
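# Concatenating the zero-padded district number with the school code forms the DBN key
# (e.g. '01' + 'M015' -> '01M015'), which is what the other datasets use for merging.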
data["class_size"]["DBN"] = data["class_size"]["padded_csd"] + data["class_size"]["SCHOOL CODE"]<jupyter_output><empty_output><jupyter_text># Convert columns to numeric<jupyter_code>cols = ['SAT Math Avg. Score', 'SAT Critical Reading Avg. Score', 'SAT Writing Avg. Score']
for c in cols:
data["sat_results"][c] = pd.to_numeric(data["sat_results"][c], errors="coerce")
data['sat_results']['sat_score'] = data['sat_results'][cols[0]] + data['sat_results'][cols[1]] + data['sat_results'][cols[2]]
def find_lat(loc):
coords = re.findall("\(.+, .+\)", loc)
lat = coords[0].split(",")[0].replace("(", "")
return lat
def find_lon(loc):
coords = re.findall("\(.+, .+\)", loc)
lon = coords[0].split(",")[1].replace(")", "").strip()
return lon
data["hs_directory"]["lat"] = data["hs_directory"]["Location 1"].apply(find_lat)
data["hs_directory"]["lon"] = data["hs_directory"]["Location 1"].apply(find_lon)
data["hs_directory"]["lat"] = pd.to_numeric(data["hs_directory"]["lat"], errors="coerce")
data["hs_directory"]["lon"] = pd.to_numeric(data["hs_directory"]["lon"], errors="coerce")<jupyter_output><empty_output><jupyter_text># Condense datasets<jupyter_code>class_size = data["class_size"]
class_size = class_size[class_size["GRADE "] == "09-12"]
class_size = class_size[class_size["PROGRAM TYPE"] == "GEN ED"]
class_size = class_size.groupby("DBN").agg(numpy.mean)
class_size.reset_index(inplace=True)
data["class_size"] = class_size
data["demographics"] = data["demographics"][data["demographics"]["schoolyear"] == 20112012]
data["graduation"] = data["graduation"][data["graduation"]["Cohort"] == "2006"]
data["graduation"] = data["graduation"][data["graduation"]["Demographic"] == "Total Cohort"]<jupyter_output><empty_output><jupyter_text># Convert AP scores to numeric<jupyter_code>cols = ['AP Test Takers ', 'Total Exams Taken', 'Number of Exams with scores 3 4 or 5']
for col in cols:
data["ap_2010"][col] = pd.to_numeric(data["ap_2010"][col], errors="coerce")<jupyter_output><empty_output><jupyter_text># Combine the datasets<jupyter_code>combined = data["sat_results"]
combined = combined.merge(data["ap_2010"], on="DBN", how="left")
combined = combined.merge(data["graduation"], on="DBN", how="left")
to_merge = ["class_size", "demographics", "survey", "hs_directory"]
for m in to_merge:
combined = combined.merge(data[m], on="DBN", how="inner")
combined = combined.fillna(combined.mean())
combined = combined.fillna(0)<jupyter_output><empty_output><jupyter_text># Add a school district column for mapping<jupyter_code>def get_first_two_chars(dbn):
return dbn[0:2]
combined["school_dist"] = combined["DBN"].apply(get_first_two_chars)<jupyter_output><empty_output><jupyter_text># Find correlations<jupyter_code>correlations = combined.corr()
correlations = correlations["sat_score"]
print(correlations)<jupyter_output>SAT Critical Reading Avg. Score 0.986820
SAT Math Avg. Score 0.972643
SAT Writing Avg. Score 0.987771
sat_score 1.000000
AP Test Takers 0.523140
Total Exams Taken 0.514333
Number of Exams with scores 3 4 or 5 0.463245
Total Cohort 0.325144
CSD 0.042948
NUMBER OF STUDENTS / SEATS FILLED 0.394626
NUMBER OF SECTIONS 0.362673
AVERAGE CLASS SIZE 0.381014
SIZE OF SMALLEST CLASS 0.249949
SIZE OF LARGEST CLASS 0.314434
SCHOOLWIDE PUPIL-TEACHER RATIO NaN
schoolyear NaN
fl_percent NaN
frl_percent -0.722225
total_enrollment 0.367857
ell_num -0.153778
ell_percent [...]<jupyter_text># Plotting survey correlations<jupyter_code># Remove DBN since it's a unique identifier, not a useful numerical value for correlation.
survey_fields.remove("DBN")
%matplotlib inline
combined.corr()["sat_score"][survey_fields].plot.bar()
combined.plot.scatter(x="saf_s_11", y="sat_score")
import matplotlib.pyplot as plt
from mpl_toolkits.basemap import Basemap
districts = combined.groupby("school_dist").agg(numpy.mean)
districts.reset_index(inplace=True)
m = Basemap(
projection='merc',
llcrnrlat=40.496044,
urcrnrlat=40.915256,
llcrnrlon=-74.255735,
urcrnrlon=-73.700272,
resolution='i'
)
m.drawmapboundary(fill_color='#85A6D9')
m.drawcoastlines(color='#6D5F47', linewidth=.4)
m.drawrivers(color='#6D5F47', linewidth=.4)
longitudes = districts["lon"].tolist()
latitudes = districts["lat"].tolist()
m.scatter(longitudes, latitudes, s=50, zorder=2, latlon=True, c=districts["saf_s_11"], cmap="summer")
plt.show()
race_fields = ["white_per", "asian_per", "black_per", "hispanic_per"]
combined.corr()["sat_score"][race_fields].plot.bar()
combined.plot.scatter(x="hispanic_per", y="sat_score")
print(combined[combined["hispanic_per"] > 95]["SCHOOL NAME"])
low_hispanic = combined[combined["hispanic_per"] < 10]
print(low_hispanic[low_hispanic["sat_score"] > 1800]["SCHOOL NAME"])
print(combined[(combined["hispanic_per"] < 10) & (combined["sat_score"] > 1800)]["SCHOOL NAME"])
gender = ["male_per", "female_per"]
print(combined.corr()["sat_score"][gender].plot.bar())
print(combined.plot.scatter(x="female_per", y="sat_score"))
print(combined[(combined["female_per"] > 60) & (combined["sat_score"] > 1700)]["SCHOOL NAME"])
combined["ap_per"] = combined["AP Test Takers "]/combined["total_enrollment"]
print(combined.plot.scatter(x="ap_per", y="sat_score"))<jupyter_output>Axes(0.125,0.125;0.775x0.775)
| no_license | /Guided Project: Analyzing NYC High School Data/Schools.ipynb | isabellechiu/Self-Project-Dataquest | 10 |
<jupyter_start><jupyter_text># Describing data with Arviz
### Dr. Lê Ngọc Khả Nhi
This hands-on tutorial shows how to use the Arviz package to produce simple descriptive plots of data.
Arviz (https://arviz-devs.github.io/arviz/index.html) is a plotting library dedicated to Bayesian statistics; it provides plots for interpreting the posterior distributions of Bayesian models. However, we can also take advantage of Arviz's functions for plain descriptive statistics, since this is much more convenient than plotting by hand with matplotlib.<jupyter_code>import arviz as az
import numpy as np
import pandas as pd
print('Arviz', az.__version__)<jupyter_output>Arviz 0.6.1
<jupyter_text>Before using arviz, the input data must be converted into an "inference_data" structure with the convert_to_inference_data function. This function accepts the outputs of Bayesian models (pymc3, pystan, pyro, ...), but it also accommodates two non-Bayesian data types: numpy arrays and dictionaries.## From a 1D numpy array<jupyter_code>n = 1000
np_1D_dat = az.convert_to_inference_data(np.random.normal(2,2,size = n))
print(np_1D_dat)
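# The converted object stores the array as the posterior variable 'x' with shape (chain, draw) = (1, 1000),
# which is why it is accessed as np_1D_dat.posterior.x further below.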
np_1D_dat.posterior<jupyter_output>Inference data with groups:
> posterior
<jupyter_text>## From a pandas Series<jupyter_code>pd_S = pd.Series(np.random.gamma(0.5, size = n))
pd_S_dat = az.convert_to_inference_data(pd_S.values)
print(pd_S_dat)
pd_S_dat.posterior<jupyter_output>Inference data with groups:
> posterior
<jupyter_text>## From an nD numpy array (2D matrices, tensors, multi-dimensional arrays)<jupyter_code>k = (1,1000,3)
np_3D_dat = az.convert_to_inference_data(np.random.randn(*k))
print(np_3D_dat)
np_3D_dat.posterior<jupyter_output>Inference data with groups:
> posterior
<jupyter_text>## From a dictionary<jupyter_code>datadict = {
'X': np.random.normal(1,2,size = n),
'Y': np.random.normal(1,2,size = n)*2 + np.random.rand(1000),
'Z': np.random.normal(1,2,size = n)*(-2) + np.random.rand(1000),
}
data_dict = az.convert_to_inference_data(datadict)
data_dict.posterior<jupyter_output><empty_output><jupyter_text>## From a pandas dataframe<jupyter_code>pd_df = pd.DataFrame(datadict)
pd_df.head()
data_df= az.convert_to_inference_data(pd_df.to_numpy())
data_df.posterior
data_df= az.convert_to_inference_data(pd_df.values)
data_df.posterior<jupyter_output><empty_output><jupyter_text>There are several plotting styles to choose from:<jupyter_code>az.style.available
az.style.use('arviz-whitegrid')<jupyter_output><empty_output><jupyter_text># The plot_dist function
This function can display distribution density functions (PDF, CDF), 1D KDE plots, 1D histograms, and 2D KDE plots.
Advantages: arviz automatically recognises continuous variables (and uses a KDE) versus discrete variables (and uses a histogram). It can also add a rug layer and supports both the PDF and the CDF.<jupyter_code>az.plot_dist(np_1D_dat.posterior.x,
rug = True,
color ="red",
fill_kwargs={'alpha': 0.3, 'color':'#eb3483'},)
az.plot_dist(np_1D_dat.posterior.x,
rug = True,
cumulative =True,
color ="red",
fill_kwargs={'alpha': 0.3, 'color':'#eb3483'},)
az.plot_dist(np.random.poisson(5, 1000),
color ="#eb3483",)
az.plot_dist(np.random.poisson(5, 1000),
color ="#eb3483",
fill_kwargs={'alpha': 0.3, 'color':'#eb3483'},
             kind = 'kde')<jupyter_output><empty_output><jupyter_text>Moreover, plot_dist draws very nice 2D KDEs:<jupyter_code>az.plot_dist(np_1D_dat.posterior.x,
np_1D_dat.posterior.x * 2 + np.random.normal(0,5,1000),
contour = True,
contour_kwargs={'colors': 'white','alpha':0.8,}
)
az.plot_dist(np_1D_dat.posterior.x,
np_1D_dat.posterior.x * 2 + np.random.normal(0,5,1000),
contour = False,
contour_kwargs={'colors': 'white','alpha':0.8,}
            )<jupyter_output><empty_output><jupyter_text>## The plot_posterior function: statistical inference
This function is meant for inference on the posterior distributions of Bayesian models, but nothing stops us from applying it to any 1D data.
It works extremely well for descriptive statistics, because it combines a 1D KDE plot with summary statistics (median, mean, mode), credible intervals, and Bayesian-style inference such as the HPD interval (similar to a CI), the region of practical equivalence (ROPE), and a reference value (ref_val):<jupyter_code>az.plot_posterior(np_1D_dat,
var_names = ['x'],
credible_interval = 0.95,
point_estimate = 'median',
rope = (-1.5,7),
ref_val = 0,
**{'alpha': 0.8, 'color':'blue'})
az.plot_posterior(np_1D_dat,
var_names = ['x'],
credible_interval = 0.95,
point_estimate = 'median',
rope = (-1.5,7),
ref_val = 0,
                  kind = 'hist',)<jupyter_output><empty_output><jupyter_text>This function also handles multivariate data well:<jupyter_code>az.plot_posterior(np_3D_dat)<jupyter_output><empty_output><jupyter_text>## The plot_density function
The plot_density function is simpler than plot_posterior: it shows the density plus a single point estimate (median, mean, or mode);
its strength is that it allows comparing two or more groups:<jupyter_code>np_3D_dat2 = az.convert_to_inference_data(np.random.randn(*k)-1)
az.plot_density(np_3D_dat,
figsize = (12,3),
shade = 0.3,
point_estimate = "median",
credible_interval =1,
colors = "red")
az.plot_density([np_3D_dat, np_3D_dat2],
figsize = (12,3),
shade = 0.3,
credible_interval =0.95,
point_estimate = "median")<jupyter_output><empty_output><jupyter_text>## Hàm plot_joint
Hàm plot_joint cho phép biểu diễn hình ảnh Xác suất kết hợp (Joint pribability) và Xác suất biên (Marginal prob) trên cùng 1 biểu đồ.
Xác suất kết hợp có thể biểu diễn bằng 3 kiểu hình họa: tán xạ, hex_bin và 2D KDE
Xác suất biên có 2 kiểu hình họa: 1D KDE và Histogram<jupyter_code>datadict = {
'X': np.random.normal(2,2,size = n),
'Y': np.random.normal(2,2,size = n)* 2 + np.random.normal(0,5,1000),
}
az.plot_joint(datadict)
az.plot_joint(datadict,
joint_kwargs = {'color':"red", 'alpha': 0.3},
marginal_kwargs = {'color':'red',
'fill_kwargs':{'alpha': 0.3, 'color':'#eb3483'}}
)
az.plot_joint(datadict,
figsize=(5.7,5),
kind = 'hexbin',
marginal_kwargs = {'color':'black',
'fill_kwargs':{'alpha': 0.9, 'color':'#ebbd34'}}
)
az.plot_joint(datadict,
figsize=(5.7,5),
kind = 'kde',
marginal_kwargs = {'color':'black',
'fill_kwargs':{'alpha': 0.9, 'color':'#ebbd34'}}
             )<jupyter_output><empty_output><jupyter_text>## The plot_pair function
This function extends plot_joint: it works through every pairwise combination of a dataset with 3 or more variables (a correlation-matrix-style grid).<jupyter_code>datadict = {
'X': np.random.normal(1,2,size = n),
'Y': np.random.normal(1,2,size = n)*2 + np.random.rand(1000),
'Z': np.random.normal(1,2,size = n)*(-2) + np.random.rand(1000),
}
az.plot_pair(datadict)
az.plot_pair(np_3D_dat,
figsize=(5.7,5),
coords = {'x_dim_0': [0, 1, 2]},
var_names = ['x'],
kind = 'kde',
divergences = True,
)
az.plot_pair(datadict,
figsize=(5.7,5),
colorbar=True,
             kind ='hexbin')<jupyter_output><empty_output><jupyter_text>## The plot_forest function
The plot_forest function displays the posterior distributions of the coefficients of Bayesian models; we can also use it to visually compare several groups or several variables measured on the same scale (equivalent to a ridge plot or forest plot).<jupyter_code>datadict = {
'X': np.random.normal(1,2,size = n),
'Y': np.random.normal(1,2,size = n)*2 + np.random.rand(1000),
'Z': np.random.normal(1,2,size = n)*(-2) + np.random.rand(1000),
}
az.plot_forest(datadict,
kind="forestplot",
rope = (-5,5),
credible_interval = 0.95,
colors = "red")
az.plot_forest(datadict,
kind="ridgeplot",
credible_interval = 0.95,
ridgeplot_alpha = 0.5,
ridgeplot_overlap = 3,
colors = "red")<jupyter_output><empty_output>
| no_license | /Arviz demo 1.ipynb | kinokoberuji/Statistics-Python-Tutorials | 15 |
<jupyter_start><jupyter_text># Programming and Data Analysis
> Homework 0
Kuo, Yao-Jen from [DATAINPOINT](https://www.datainpoint.com)## Instructions
- We've imported necessary modules at the beginning of each exercise.
- We've put necessary files(if any) in the working directory of each exercise.
- We've defined the names of functions/inputs/parameters for you.
- Write down your solution between the comments `### BEGIN SOLUTION` and `### END SOLUTION`.
- It is NECESSARY to `return` the answer, tests will fail by just printing out the answer.
- Do not use `input()` function, it will halt the notebook while running tests.
- Running tests to see if your solutions are right: Kernel -> Restart & Run All -> Restart and Run All Cells.
- You can run tests after each question or after finishing all questions.<jupyter_code>import unittest<jupyter_output><empty_output><jupyter_text>## 01. Define a function named `convert_fahrenheit_to_celsius(x)` which converts Fahrenheit degrees to Celsius degrees.
\begin{equation}
Celsius^{\circ} C = (Fahrenheit^{\circ} F - 32) \times \frac{5}{9}
\end{equation}
- Expected inputs:a numeric `x`.
- Expected outputs:a numeric.<jupyter_code>def convert_fahrenheit_to_celsius(x):
"""
>>> convert_fahrenheit_to_celsius(212)
100.0
>>> convert_fahrenheit_to_celsius(32)
0.0
"""
### BEGIN SOLUTION
y = (x - 32) * 5/9
return y
### END SOLUTION<jupyter_output><empty_output><jupyter_text>## 02. Define a function named `calculate_bmi(height, weight)` which calculates BMI according to heights in meters and weights in kilograms.
\begin{equation}
BMI = \frac{weight_{kg}}{height_{m}^2}
\end{equation}
Source:
- Expected inputs:2 numerics `height` and `weight`.
- Expected outputs:a numeric.<jupyter_code>def calculate_bmi(height, weight):
"""
>>> calculate_bmi(216, 147) # Shaquille O'Neal in his prime
31.507201646090532
>>> calculate_bmi(206, 113) # LeBron James
26.628334433028563
>>> calculate_bmi(211, 110) # Giannis Antetokounmpo
24.70744143213315
"""
### BEGIN SOLUTION
BMI = weight / ((0.01 * height) * (0.01 * height))
return BMI
### END SOLUTION<jupyter_output><empty_output><jupyter_text>## 03. Define a function named `show_big_mac_index(country, currency, price)` which returns the Big Mac Index given a country, its currency, and the price of a Big Mac.
- Expected inputs:2 strings and a numeric.
- Expected outputs:a string.<jupyter_code>def show_big_mac_index(country, currency, price):
"""
>>> show_big_mac_index('US', 'USD', 5.65)
A Big Mac costs 5.65 USD in US.
>>> show_big_mac_index('South Korea', 'Won', 6520)
A Big Mac costs 6,520.00 Won in South Korea.
>>> show_big_mac_index('Taiwan', 'NTD', 72)
A Big Mac costs 72.00 NTD in Taiwan.
"""
### BEGIN SOLUTION
ans = 'A Big Mac costs ' + str('{:0,.2f}'.format(price)) + ' ' + currency + ' in ' + country + '.'
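    # an equivalent f-string formulation (added note, not part of the original solution):
    # ans = f'A Big Mac costs {price:,.2f} {currency} in {country}.'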
return ans
    ### END SOLUTION<jupyter_output><empty_output><jupyter_text>## 04. Define a function named `is_a_divisor(x, y)` which returns whether `x` is a divisor of `y` or not.
- Expected inputs:2 integers.
- Expected outputs:a boolean.<jupyter_code>def is_a_divisor(x, y):
"""
>>> is_a_divisor(1, 3)
True
>>> is_a_divisor(2, 3)
False
>>> is_a_divisor(3, 3)
True
>>> is_a_divisor(1, 4)
True
>>> is_a_divisor(2, 4)
True
>>> is_a_divisor(3, 4)
False
>>> is_a_divisor(4, 4)
True
"""
### BEGIN SOLUTION
if y % x == 0:
return True
else:
return False
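    # equivalent one-liner (added note): return y % x == 0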
### END SOLUTION<jupyter_output><empty_output><jupyter_text>## 05. Define a function named `contains_vowels(x)` which returns whether x contains one of the vowels: a, e, i, o, u or not.
- Expected inputs:a string.
- Expected outputs:a boolean.<jupyter_code>def contains_vowels(x):
"""
>>> contains_vowels('pythn')
False
>>> contains_vowels('ncnd')
False
>>> contains_vowels('rtclt')
False
>>> contains_vowels('python')
True
>>> contains_vowels('anaconda')
True
>>> contains_vowels('reticulate')
True
"""
### BEGIN SOLUTION
if ('a' in x) or ('e' in x) or ('i' in x) or ('o' in x) or ('u' in x):
return True
else:
return False
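    # equivalent one-liner (added note): return any(vowel in x for vowel in 'aeiou')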
### END SOLUTION<jupyter_output><empty_output><jupyter_text>## Run tests!
Kernel -> Restart & Run All. -> Restart And Run All Cells.<jupyter_code>class TestHomeworkZero(unittest.TestCase):
def test_01_convert_fahrenheit_to_celsius(self):
self.assertAlmostEqual(convert_fahrenheit_to_celsius(212), 100.0)
self.assertAlmostEqual(convert_fahrenheit_to_celsius(32), 0.0)
def test_02_calculate_bmi(self):
self.assertTrue(calculate_bmi(216, 147) > 31)
self.assertTrue(calculate_bmi(216, 147) < 32)
self.assertTrue(calculate_bmi(206, 113) > 26)
self.assertTrue(calculate_bmi(206, 113) < 27)
self.assertTrue(calculate_bmi(211, 110) > 24)
self.assertTrue(calculate_bmi(211, 110) < 25)
def test_03_show_big_mac_index(self):
self.assertEqual(show_big_mac_index('US', 'USD', 5.65), 'A Big Mac costs 5.65 USD in US.')
self.assertEqual(show_big_mac_index('South Korea', 'Won', 6520), 'A Big Mac costs 6,520.00 Won in South Korea.')
self.assertEqual(show_big_mac_index('Taiwan', 'NTD', 72), 'A Big Mac costs 72.00 NTD in Taiwan.')
def test_04_is_a_divisor(self):
self.assertTrue(is_a_divisor(1, 2))
self.assertTrue(is_a_divisor(2, 2))
self.assertTrue(is_a_divisor(1, 3))
self.assertFalse(is_a_divisor(2, 3))
self.assertTrue(is_a_divisor(1, 4))
self.assertTrue(is_a_divisor(2, 4))
self.assertFalse(is_a_divisor(3, 4))
self.assertTrue(is_a_divisor(4, 4))
def test_05_contains_vowels(self):
self.assertFalse(contains_vowels('pythn'))
self.assertFalse(contains_vowels('ncnd'))
self.assertFalse(contains_vowels('rtclt'))
self.assertTrue(contains_vowels('python'))
self.assertTrue(contains_vowels('anaconda'))
self.assertTrue(contains_vowels('reticulate'))
suite = unittest.TestLoader().loadTestsFromTestCase(TestHomeworkZero)
runner = unittest.TextTestRunner(verbosity=2)
test_results = runner.run(suite)
number_of_failures = len(test_results.failures)
number_of_errors = len(test_results.errors)
number_of_test_runs = test_results.testsRun
number_of_successes = number_of_test_runs - (number_of_failures + number_of_errors)
print("You've got {} successes among {} questions.".format(number_of_successes, number_of_test_runs))<jupyter_output>You've got 5 successes among 5 questions.
| no_license | /exercises.ipynb | rose020/homework0 | 7 |
<jupyter_start><jupyter_text>#### Plotting of gesture data<jupyter_code>print("Shape of X_train: ", X_train.shape)
print("shape of y_train/labels: ", y_train.shape)
print("Shape of X_test: ", X_test.shape)
print("shape of y_test/labels: ", y_test.shape)
samples = np.random.choice(len(X_train), 8)
def show_images(images, cols = 1, titles = None):
"""Display a list of images in a single figure with matplotlib.
Parameters
---------
images: List of np.arrays compatible with plt.imshow.
cols (Default = 1): Number of columns in figure (number of rows is
set to np.ceil(n_images/float(cols))).
titles: List of titles corresponding to each image. Must have
the same length as titles.
"""
assert((titles is None)or (len(images) == len(titles)))
n_images = len(images)
if titles is None: titles = ['Image (%d)' % i for i in range(1,n_images + 1)]
fig = plt.figure()
for n, (image, title) in enumerate(zip(images, titles)):
a = fig.add_subplot(cols, np.ceil(n_images/float(cols)), n + 1)
if image.ndim == 2:
plt.gray()
plt.imshow(image)
a.set_title(title, fontsize=50)
a.grid(False)
a.axis("off")
fig.set_size_inches(np.array(fig.get_size_inches()) * n_images)
plt.show()
sample_images = []
sample_labels = []
for sample in samples:
sample_images.append(X_train[sample])
for key, val in label_dict.items():
if np.argmax(y_train[sample]) == val:
sample_labels.append(key)
show_images(sample_images, 2, titles=sample_labels)<jupyter_output><empty_output><jupyter_text>### Model<jupyter_code>def create_model():
model = Sequential()
model.add(Conv2D(64, kernel_size = [3,3], padding = 'same', activation = 'relu', input_shape = (200,200,3)))
model.add(Conv2D(64, kernel_size = [3,3], padding = 'same', activation = 'relu'))
model.add(MaxPool2D(pool_size = [3,3]))
model.add(Conv2D(128, kernel_size = [5,5], padding = 'same', activation = 'relu'))
model.add(Conv2D(128, kernel_size = [5,5], padding = 'same', activation = 'relu'))
model.add(MaxPool2D(pool_size = [3,3]))
model.add(Conv2D(256, kernel_size = [3,3], padding = 'same', activation = 'relu'))
model.add(Conv2D(256, kernel_size = [3,3], padding = 'same', activation = 'relu'))
model.add(Conv2D(256, kernel_size = [3,3], padding = 'same', activation = 'relu'))
model.add(MaxPool2D(pool_size = [3,3]))
model.add(Conv2D(512, kernel_size = [3,3], padding = 'same', activation = 'relu'))
model.add(Conv2D(512, kernel_size = [3,3], padding = 'same', activation = 'relu'))
model.add(MaxPool2D(pool_size = [3,3]))
model.add(Conv2D(512, kernel_size = [3,3], padding = 'same', activation = 'relu'))
model.add(MaxPool2D(pool_size = [2,2]))
model.add(BatchNormalization())
model.add(Flatten())
model.add(Dropout(0.5))
model.add(Dense(1024, activation = 'relu', kernel_regularizer = regularizers.l2(0.001)))
model.add(Dense(512, activation = 'relu', kernel_regularizer = regularizers.l2(0.001)))
model.add(Dense(36, activation = 'softmax'))
print("MODEL CREATED")
return model
model = create_model()
model.summary()
model.compile(optimizer = 'adam', loss = keras.losses.categorical_crossentropy, metrics = ["accuracy"])
model_hist = model.fit(X_train, y_train, batch_size = 32, epochs = 15, validation_data=(X_test, y_test))<jupyter_output>WARNING:tensorflow:From /opt/conda/lib/python3.6/site-packages/tensorflow/python/ops/math_ops.py:3066: to_int32 (from tensorflow.python.ops.math_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Use tf.cast instead.
Train on 24026 samples, validate on 1265 samples
Epoch 1/15
24026/24026 [==============================] - 84s 3ms/step - loss: 2.1800 - acc: 0.5566 - val_loss: 2.0808 - val_acc: 0.5407
Epoch 2/15
24026/24026 [==============================] - 81s 3ms/step - loss: 0.8187 - acc: 0.8212 - val_loss: 0.5425 - val_acc: 0.8838
Epoch 3/15
24026/24026 [==============================] - 81s 3ms/step - loss: 0.4958 - acc: 0.8852 - val_loss: 0.8439 - val_acc: 0.7739
Epoch 4/15
24026/24026 [==============================] - 81s 3ms/step - loss: 0.3759 - acc: 0.9093 - val_loss: 0.3088 - val_acc: 0.9241
Epoch 5/15
24026/24026 [==============================] - 81s 3ms/step - loss: 0.3117 - acc: 0.9276 - val_loss: 0.3129 - val_acc: 0.9273
Epoch 6/15
24[...]<jupyter_text>### Plotting training metrics<jupyter_code>def plot_accuracy(y):
if(y == True):
plt.plot(model_hist.history['acc'])
plt.plot(model_hist.history['val_acc'])
plt.legend(['train', 'test'], loc='lower right')
plt.title('accuracy plot - train vs test')
plt.xlabel('epoch')
plt.ylabel('accuracy')
plt.show()
else:
pass
return
def plot_loss(y):
if(y == True):
plt.plot(model_hist.history['loss'])
plt.plot(model_hist.history['val_loss'])
plt.legend(['training loss', 'validation loss'], loc = 'upper right')
        plt.title('loss plot - training vs validation')
plt.xlabel('epoch')
plt.ylabel('loss')
plt.show()
else:
pass
return
plot_accuracy(True)
plot_loss(True)
model.save("asl_bestsofar.h5")
samples_test = np.random.choice(len(X_test), 8)
samples_test
sample_images = []
sample_labels = []
pred_labels = []
for sample in samples_test:
sample_images.append(X_test[sample])
img = X_test[sample].reshape((1,200,200,3))
pred = model.predict_classes(img)
for key, val in label_dict.items():
if pred[0] == int(val):
pred_labels.append(key)
for key, val in label_dict.items():
if np.argmax(y_test[sample]) == val:
sample_labels.append(key)
def show_test_images(images, cols = 1, true_label = None, pred_label=None):
n_images = len(images)
fig = plt.figure()
for n, (image, label, pred) in enumerate(zip(images, true_label, pred_label)):
a = fig.add_subplot(cols, np.ceil(n_images/float(cols)), n + 1)
if image.ndim == 2:
plt.gray()
plt.imshow(image)
a.set_title("{}\n{}".format(label, pred), fontsize=50)
a.grid(False)
a.axis("off")
fig.set_size_inches(np.array(fig.get_size_inches()) * n_images)
plt.show()
show_test_images(sample_images, 2, sample_labels, pred_labels)<jupyter_output><empty_output>
| no_license | /asl-training.ipynb | ayulockin/ASL_Classifier | 3 |
<jupyter_start><jupyter_text># Pandas basics
In this notebook we will **learn** how to work with the two main data types in `pandas`: `DataFrame` and `Series`.## Data structures (`pandas`)### `Series`
In `pandas`, series are the building blocks of dataframes.
Think of a series as a column in a table. A series collects *observations* about a given *variable*. <jupyter_code>from random import random
import pandas as pd
import numpy as np
from pandas import Series, DataFrame<jupyter_output><empty_output><jupyter_text>#### Numerical series<jupyter_code># let's create a series containing 10 random numbers
# ranging between 0 and 1
s = pd.Series([random() for n in range(0, 10)])<jupyter_output><empty_output><jupyter_text>Each observation in the series has an **index** as well as a set of **values**: they can be accessed via the properties of the same name:<jupyter_code>s.index
list(s.index)
s.values<jupyter_output><empty_output><jupyter_text>The `head()` and `tail()` methods allow for looking at the beginning and end of a series:<jupyter_code>s.head()
s.tail()<jupyter_output><empty_output><jupyter_text>The `value_counts()` method returns a count of distinct values within a series. Is there any number in `s` that occurs twice?<jupyter_code># a `Series` can be easily cast into a list
list(s.value_counts()).count(2)<jupyter_output><empty_output><jupyter_text>Another way of verifying this:<jupyter_code>s.is_unique
s.min()
s.max()
s.mean()
s.median()<jupyter_output><empty_output><jupyter_text>#### Datetime series<jupyter_code>from random import randint
from datetime import date
# let's generate a list of random dates
# in the range 1900-1950
dates = [
date(
year,
randint(1, 12),
randint(1, 28) # try replacing with 31 and see what happens
)
for year in range(1900,1950)
]
s1 = pd.Series(dates)
s1
type(s1[1])
s1 = Series(pd.to_datetime(dates))
type(s1[1])
s1[1].day_name()
s1.min()
s1.max()
s1.mean()<jupyter_output><empty_output><jupyter_text>### `DataFrame`
What is a `pandas.DataFrame`? Think of it as an in-memory spreadsheet that you can analyse and manipulate programmatically.
A `DataFrame` is a collection of `Series` having the same length and whose indexes are in sync. A *collection* means that each column of a dataframe is a series. Let's create a toy `DataFrame` by hand. <jupyter_code>dates = [
date(
year,
randint(1, 12),
randint(1, 28) # try replacing with 31 and see what happens
)
for year in range(1980,1990)
]
counts = [
randint(0, 10000)
for i in range(0, 10)
]
event_types = ["fire", "flood", "car_crash", "plane_crash"]
events = [
np.random.choice(event_types)
for i in range(0, 10)
]
assert len(events) == len(counts) == len(dates)
toy_df = pd.DataFrame({
"date": dates,
"count": counts,
"event": events
})
toy_df<jupyter_output><empty_output><jupyter_text>**Try out**: what happens if you change the length of either of the two lists? Try e.g. passing 20 dates instead of 10.<jupyter_code># a df is a collection of series
# each column is a series
type(toy_df.date)<jupyter_output><empty_output><jupyter_text>## Data manipulation in `pandas`### Data types
String, datetimes (see above), categorical data.
In `pandas`, categories behave very much like string, yet they lead to better performances (faster operations, optimized storage).Bottom-up approach:<jupyter_code># transforms a Series with strings into categories
toy_df.event.astype('category')<jupyter_output><empty_output><jupyter_text>### Exploring a dataframe
Exploring a dataframe: df.head(), df.tail(), df.info(). The method `info()` gives you information about a dataframe:
- how much space does it take in memory?
- what is the datatype of each column?
- how many records are there?
- how many `null` values does each column contain (!)?<jupyter_code>toy_df.info()<jupyter_output><class 'pandas.core.frame.DataFrame'>
RangeIndex: 10 entries, 0 to 9
Data columns (total 3 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 date 10 non-null object
1 count 10 non-null int64
2 event 10 non-null object
dtypes: int64(1), object(2)
memory usage: 368.0+ bytes
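<jupyter_text>The memory figure reported by `info()` can be broken down per column with `memory_usage()`; passing `deep=True` also counts the Python strings. This cell is an added illustration on the same `toy_df`, and the second line sketches how a categorical version of the `event` column would compare, echoing the note on categories above.<jupyter_code>toy_df.memory_usage(deep=True)
toy_df.event.astype('category').memory_usage(deep=True)<jupyter_output><empty_output>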
<jupyter_text>Alternatively, if you need to know only the number of columns and rows you can use the `.shape` property.
It returns a tuple with 1) number of rows, 2) number of columns.<jupyter_code>toy_df.shape<jupyter_output><empty_output><jupyter_text>`head()` prints the first five rows of a dataframe:<jupyter_code>toy_df.head()<jupyter_output><empty_output><jupyter_text>But the number of lines displayed is a parameter that can be changed:<jupyter_code>toy_df.head(2)<jupyter_output><empty_output><jupyter_text>`tail()` does the opposite, i.e. prints the last n rows in the dataframe:<jupyter_code>toy_df.tail()<jupyter_output><empty_output><jupyter_text>#### Adding columnsLet's go back to our toy dataframe:<jupyter_code>toy_df.head()<jupyter_output><empty_output><jupyter_text>Using the column selector with the name of a column that does not exist yet will have the effect of creating that column and setting the values of all its rows to the value specified.<jupyter_code>toy_df['country'] = "UK"
toy_df.head(3)<jupyter_output><empty_output><jupyter_text>But if the column already exists, its value is reset:<jupyter_code>toy_df['country'] = "USA"
toy_df.head(3)<jupyter_output><empty_output><jupyter_text>#### Removing columnsThe double square bracket notation ``[[...]]`` returns a dataframe having only the columns specified inside the inner brackets.This said, removing a column is done by unselecting it:<jupyter_code># here we removed the column country
toy_df2 = toy_df[['date', 'count', 'event']]
# it worked!
toy_df2.head()<jupyter_output><empty_output><jupyter_text>#### Setting a column as index<jupyter_code>toy_df.set_index('date')
toy_df.head(3)
toy_df.set_index('date', inplace=True)
toy_df.head(3)<jupyter_output><empty_output><jupyter_text>**Q**: can you explain the effect of the `inplace` parameter by looking at the cells above?### Accessing data
.loc, .iloc, slicing, iteration over rows<jupyter_code>toy_df.head(3)<jupyter_output><empty_output><jupyter_text>#### Label-based indexing<jupyter_code>toy_df.loc[date(1980,1,1):date(1982,1,1)]<jupyter_output><empty_output><jupyter_text>#### Integer-based indexing<jupyter_code># select a single row, the first one
toy_df.iloc[0]
# select a range of rows by index
toy_df.iloc[[1,3,-1]]
# select a range of rows with slicing
toy_df.iloc[0:5]
toy_df.index<jupyter_output><empty_output><jupyter_text>#### Iterating over rows<jupyter_code>for n, row in toy_df.iterrows():
print(n)
for n, row in toy_df.iterrows():
print(n, row.event)<jupyter_output>1980-01-23 plane_crash
1981-04-15 plane_crash
1982-11-12 flood
1983-11-22 plane_crash
1984-11-04 fire
1985-06-19 flood
1986-09-14 plane_crash
1987-01-05 fire
1988-03-11 flood
1989-10-10 car_crash
| permissive | /notebooks/3_PandasBasics.ipynb | Giovanni1085/UvA_CDH_2020 | 24 |
<jupyter_start><jupyter_text><jupyter_code>import numpy as np # useful for many scientific computing in Python
import pandas as pd # primary data structure library
!conda install -c anaconda xlrd --yes
df_can = pd.read_excel('https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/DV0101EN/labs/Data_Files/Canada.xlsx',
sheet_name='Canada by Citizenship',
skiprows=range(20),
skipfooter=2)
print ('Data read into a pandas dataframe!')
#Dataset of immigration into canada from other countries - 1980 to 2013 (each year and each country)
df_can.head(10)
df_can.shape
df_can.tail()
df_can.info()
#simplify and remove unnecessary columns
df_can.drop(['AREA','REG','DEV','Type','Coverage'], inplace=True, axis='columns')
df_can.head()
#rename columns to sensible names
df_can.rename(columns={'OdName':'Country', 'AreaName':'Continent', 'RegName':'Continent-Region'}, inplace=True)
df_can.head()
#count total immigration by country for all years
df_can['total_immigration'] = df_can.sum(axis='columns')
df_can.head()
df_can.describe()
df_can.isnull().sum()
df_can.dtypes
#Select immigration of years 1980-1985 from the dataset, for all countries?
#get the list of columns in the dataframe
df_can.columns
df_can[['Country',1980,1981,1982,1983,1984,1985]]
#simplify the dataset by changing the index from numbers to country names
df_can.index.values
df_can.set_index('Country', inplace=True)
df_can.head()
print(df_can.loc['India'])
df_can.iloc[8]
print(df_can.loc['Japan',2000])
print(df_can.iloc[12,12])
#convert columns names to string
#map - syntax(function, collection)
#apply function to each element of the collection - equivalent apply, mapply,sapply,lapply functions of R
df_can.columns = list(map(str,df_can.columns))
df_can.columns
#filtering on dataframe
#Q. find all countries in Asia
condition = df_can['Continent'] == 'Asia'
df_can[condition]
#Q. multiple conditions
df_can[(df_can['Continent']=='Asia')]
df_can[(df_can['1980'] > 1000)]
#syntax - df[(condition)]
df_can[(df_can['Continent']=='Asia')&(df_can['2013']>10000)]
df_can.where(df_can['Continent']=='Asia')
#conditions
#basic stats
import matplotlib.pyplot as plt
import matplotlib as mpl
import plotly as px
mpl.__version__
df_can.columns
years = list(map(str,range(1980,2014)))
years
india = df_can.loc['India',years]
india
india.plot()
india.index
#change the index to integer
india.index = india.index.map(int)
india.plot(kind='line')
plt.title('Immigration from India')
plt.ylabel('Number of immigrants')
plt.xlabel('Years')
#update the plot
plt.show()
india.plot(kind='line')
plt.title('Immigration from India')
plt.ylabel('Number of immigrants')
plt.xlabel('Years')
plt.text(2000, 32000, 'Y2K Revolution')
plt.text(1990,15000,'Recession')
plt.text(1985,10000,'Economic boom')
#update the plot
plt.show()
years = list(map(str,range(1980,2014)))
china = df_can.loc['China', years]
china.plot()
china.index = china.index.map(int)
china.plot(kind='line')
plt.title('Immigration of China')
plt.xlabel('Years')
plt.ylabel('Immigrants')
plt.text(1990,10000,'Marker')
plt.show()
years = list(map(str,range(1980,2014)))
chinaindia = df_can.loc[['China','India'], years]
new_chinaindia = chinaindia.transpose()
new_chinaindia.plot()
new_chinaindia.index = new_chinaindia.index.map(int)
new_chinaindia.plot(kind='line')
plt.title('Immigration of China and India')
plt.xlabel('Years')
plt.ylabel('Immigrants')
#plt.text(1990,10000,'Marker')
plt.show()
#Q. Which two countries have similar immigration trends over the years 1980-2013?
#Q. France and Germany
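# a quick visual check of the answer above (added sketch; it assumes 'France' and 'Germany'
# are index labels in df_can and reuses the `years` list of year strings defined earlier)
fr_de = df_can.loc[['France', 'Germany'], years].transpose()
fr_de.index = fr_de.index.map(int)
fr_de.plot(kind='line')
plt.title('Immigration from France and Germany')
plt.xlabel('Years')
plt.ylabel('Number of immigrants')
plt.show()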
#top 5 countries that send immigrants to canada
df_can.sort_values(by='total_immigration', ascending=False, axis='index', inplace=True)
top5 = df_can.head(5)
top5
years = list(map(str,range(1980,2014)))
top5_clean = top5[years]
top5_clean = top5_clean.transpose()
top5_clean.plot()<jupyter_output><empty_output><jupyter_text>#Part 2<jupyter_code>#Area plots - Stacked line plot
top5_clean
top5_clean.index = top5_clean.index.map(int)
top5_clean.plot(kind="area", stacked=False, figsize=(20,10))
plt.title('Immigration trends in top five countries (1980-2013)')
plt.xlabel('Years')
plt.ylabel('Immigrants count')
plt.show()
top5_clean.index = top5_clean.index.map(int)
top5_clean.plot(kind="area", stacked=True, figsize=(20,10))
plt.title('Immigration trends in top five countries (1980-2013)')
plt.xlabel('Years')
plt.ylabel('Immigrants count')
plt.show()
top5_clean.index = top5_clean.index.map(int)
#alpha parameter for transparency - default = 0.5 (range is 0 to 1)
top5_clean.plot(kind="area", stacked=False, figsize=(20,10), alpha=0.25)
plt.title('Immigration trends in top five countries (1980-2013)')
plt.xlabel('Years')
plt.ylabel('Immigrants count')
plt.show()
#Bottom five countries - stacked and unstacked area plot
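# a possible sketch for the exercise above (added; not in the original notebook):
bottom5 = df_can.sort_values(by='total_immigration', ascending=True).head(5)
bottom5_clean = bottom5[years].transpose()
bottom5_clean.index = bottom5_clean.index.map(int)
bottom5_clean.plot(kind='area', stacked=True, figsize=(20, 10))
plt.title('Immigration trends in bottom five countries (1980-2013)')
plt.xlabel('Years')
plt.ylabel('Immigrants count')
plt.show()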
#histogram - at a particular time period / snapshot of the data
df_can['2000']
df_can['2000'].plot(kind='hist', figsize=(20,10))
plt.title('Immigration trends in 195 countries in 2000')
plt.xlabel('Number of Immigrants')
plt.ylabel('Number of Countries')
plt.show()
#histogram
years = list(map(str,range(1980,2014)))
df_can.loc[['India','China','Denmark','Norway','France','Germany'], years].transpose().plot.hist()
df_can.loc[['India','China','Denmark','Norway','France','Germany'], years].transpose().plot.hist()
plt.title('Immigration trends in 6 countries in 1980-2013')
plt.xlabel('Number of Immigrants')
plt.ylabel('Number of Countries')
plt.show()<jupyter_output><empty_output><jupyter_text>#Vertical bar plot<jupyter_code>india = df_can.loc['India', years]
india.plot(kind='bar', figsize=(20,10))
plt.title('Immigration trends India immigrants from 1980 to 2013')
plt.xlabel('Years')
plt.ylabel('India immigrants from 1980 to 2013')
plt.show()
india = df_can.loc['India', years]
india.index = india.index.map(int)
india.plot(kind='bar', figsize=(10,6))
plt.title('Immigration trends India immigrants from 1980 to 2013')
plt.xlabel('Years')
plt.ylabel('India immigrants from 1980 to 2013')
plt.annotate('Increasing trends', xy=(32, 70), xycoords='data',
xytext=(28, 20),
arrowprops=dict(facecolor='black', shrink=0.05),
horizontalalignment='left', verticalalignment='top',
)
plt.show()
india = df_can.loc['India', years]
india.plot(kind='barh', figsize=(20,10))
plt.title('Immigration trends India immigrants from 1980 to 2013')
plt.xlabel('India immigrants from 1980 to 2013')
plt.ylabel('Years')
plt.show()<jupyter_output><empty_output>
| permissive | /Lab Experiments/Experiment_3_200720.ipynb | rohitsmittal7/J045-ML-Sem-V | 3 |
<jupyter_start><jupyter_text>**MATH 3332**
**Section 52**
# In Who-is-Normal.xslx there are 7 columns representing variables x1-x7, one of which is a sample from the normal distribution. Find which one of the variables is normal.
## Reading in Values
The program starts by reading in the values from the Excel spreadsheet after they have been exported to a CSV format.<jupyter_code>import csv
f = open('input.csv', 'r')
reader = csv.reader(f)
values = []
for row in reader:
values.append(row)<jupyter_output><empty_output><jupyter_text>## Manipulating the Values
The program then transposes the matrix that was created when reading in the values from the CSV. The result is an array, where the rows consist of the name of the data (e.g., x1, x2) and the values for that distribution. <jupyter_code>import numpy as np
matrix = np.transpose(np.array(values))<jupyter_output><empty_output><jupyter_text>## Creating the Probability Plot
The program then creates a plot for each distribution using the **scipy.stats** library. The plot contains the **plotted values**, the $R^2$ value, the **mean**, and the **standard deviation**.<jupyter_code>import scipy.stats
import matplotlib.pyplot as plt
import matplotlib.gridspec as grid
%matplotlib inline
# Iterate over each dataset (x1, x2, etc)
for x_column in xrange(len(matrix)):
# Create array of float-type values from the dataset
values = np.array(matrix[x_column][1:])
values = [float(value) for value in values]
# Create a seperate figure to create the probability plot
fig = plt.figure()
fig.text(0, 0, 'Mean %s' % np.mean(values))
fig.text(1, 0, 'Std %s' % np.std(values))
fig.suptitle('x%s' % (x_column + 1))
# Produce the probability plot
scipy.stats.probplot(values, plot=plt)
# Create a separate figure to create the histogram
fig2 = plt.figure()
fig2.suptitle('x%s Histogram' % (x_column + 1))
# Produce the histogram using normalized values
plt.hist(values, normed=True)
# Add the normal distribution curve to the plot
mu, std = scipy.stats.norm.fit(values)
xmin, xmax = plt.xlim()
x = np.linspace(xmin, xmax, 100)
p = scipy.stats.norm.pdf(x, mu, std)
plt.plot(x, p, 'k', linewidth=2)
plt.show()<jupyter_output><empty_output>
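<jupyter_text>As a numeric cross-check of the visual inspection above, a normality test such as D'Agostino-Pearson (scipy.stats.normaltest) can be run on every column; the variable with the largest p-value is the most plausible normal sample. This cell is an added sketch, not part of the original assignment, and it reuses the `matrix` built earlier.<jupyter_code># D'Agostino-Pearson normality test for each variable (illustrative addition)
for x_column in xrange(len(matrix)):
    values = [float(value) for value in matrix[x_column][1:]]
    stat, p_value = scipy.stats.normaltest(values)
    print('x%s: statistic=%.3f, p-value=%.3f' % (x_column + 1, stat, p_value))<jupyter_output><empty_output>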
| no_license | /Probability/Normal.ipynb | rlacherksu/notebooks | 3 |
<jupyter_start><jupyter_text># Stock price relationships<jupyter_code>%matplotlib inline
import matplotlib.pyplot as plt
import matplotlib
import numpy
import sqlite3
import pandas as pd
import os
fmdate="2015-01-01"
todate="2016-12-31"
code="6502"
if os.name == "nt": fname=r'C:\WINDOWS\Fonts\ipaexg.ttf'
else: fname = os.environ["HOME"]+'/Library/Fonts/ipaexg.ttf'
fp = matplotlib.font_manager.FontProperties(fname=fname, size=14)
con=sqlite3.connect(os.environ["DATA"]+"/board/board.db")
sql="""select date as tm,close from histDaily where code=? and date between ? and ? order by tm"""
res=pd.read_sql_query(sql,con,params=(code,fmdate,todate,))
dt=pd.to_datetime(res["tm"].values)
plt.scatter(dt,res["close"].values)
plt.xticks(rotation=90)
plt.title("株価ヒストリカル", fontproperties=fp)
plt.xlabel("date")
plt.ylabel("price")
plt.show()<jupyter_output><empty_output><jupyter_text>## Relationship with trading volume
When trading volume rises, the stock draws more attention, so:<jupyter_code>%matplotlib inline
import matplotlib.pyplot as plt
import numpy
import sqlite3
import pandas as pd
import matplotlib
fmdate="2015-01-01"
todate="2017-07-31"
code="6502"
if os.name == "nt": fname=r'C:\WINDOWS\Fonts\ipaexg.ttf'
else: fname = r'/Users/admin/Library/Fonts/ipaexg.ttf'
fp = matplotlib.font_manager.FontProperties(fname=fname, size=14)
con=sqlite3.connect(os.environ["DATA"]+"/stock/daily.db")
sql="""select strftime('%Y%m%d',date) as tm,volume from histDaily where code=? and date between ? and ? order by tm"""
res=pd.read_sql_query(sql,con,params=(code,fmdate,todate,))
dt=pd.to_datetime(res["tm"].values)
plt.scatter(dt,res["volume"].values)
plt.xticks(rotation=90)
plt.title("出来高のヒストリカル", fontproperties=fp)
plt.xlabel("date")
plt.ylabel("volume")
plt.show()<jupyter_output><empty_output><jupyter_text>## Trend of the post-text score
Each post score is converted to 1 or -1 and summed, to see whether pessimistic posts dominate or not.<jupyter_code>%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
import sqlite3
import pandas as pd
fmdate="2015-01-01"
todate="2017-07-31"
code="6502"
if os.name == "nt": fname=r'C:\WINDOWS\Fonts\ipaexg.ttf'
else: fname = r'/Users/admin/Library/Fonts/ipaexg.ttf'
fp = matplotlib.font_manager.FontProperties(fname=fname, size=14)
con=sqlite3.connect(os.environ["DATA"]+"/board/board.db")
# https://stackoverflow.com/questions/41324503/pandas-sqlite-query-using-variable
sql="""select strftime('%Y%m%d',b.date) as tm,sum(case when e.score >=0 then 1 else -1 end) as score from board b ,board_score e where b.code=e.code and b.tno=e.tno and b.mno=e.mno and b.code=? and b.date between ? and ? group by tm order by tm"""
res=pd.read_sql_query(sql,con,params=(code,fmdate,todate,))
#https://qiita.com/ColdFreak/items/1028927c81fdfa5f3ac2
dt=pd.to_datetime(res["tm"].values)
plt.scatter(dt,res["score"].values)
plt.xticks(rotation=90)
plt.title("投稿スコアの推移", fontproperties=fp)
plt.xlabel("tm")
plt.ylabel("log10(avg(score))")
plt.show()<jupyter_output><empty_output><jupyter_text>## Predictions so far
- When trading volume rises, the stock gets more attention
- Then more people show up
- And dubious information increases as well
### Setting the emoji aside for now
- Count how many people are posting over time.## Output the number of posters as a time series
<jupyter_code># emoji kind
%matplotlib inline
import matplotlib
import matplotlib.pyplot as plt
import numpy
import sqlite3
import os
import pandas as pd
fmdate="2015-01-01"
todate="2017-07-31"
code="6502"
if os.name == "nt": fname=r'C:\WINDOWS\Fonts\ipaexg.ttf'
else: fname = r'/Users/admin/Library/Fonts/ipaexg.ttf'
fp = matplotlib.font_manager.FontProperties(fname=fname, size=14)
con=sqlite3.connect(os.environ["DATA"]+"/board/board.db")
cur=con.cursor()
res=pd.read_sql_query("select tm,count(*) as user_count from (select strftime('%Y%m%d',date) as tm,user from board"+
" where date between ? and ? and code=? group by tm,user) group by tm",con,params=(fmdate,todate,code,))
dt=pd.to_datetime(res["tm"].values)
plt.scatter(dt,res["user_count"].values)
plt.xticks(rotation=90)
plt.title("投稿者数と時系列", fontproperties=fp)
plt.xlabel("tm")
plt.ylabel("user_count")
plt.show()
<jupyter_output><empty_output><jupyter_text>## Looking at correlations
- Correlation coefficient between the number of posters and trading volume
- Some days are not trading days and some days have no posts, so use an inner join<jupyter_code># emoji kind
%matplotlib inline
import matplotlib
import matplotlib.pyplot as plt
import numpy as np
import sqlite3
import os
import pandas as pd
from sklearn import linear_model
fmdate="2015-01-01"
todate="2017-07-31"
code="6502"
if os.name == "nt": fname=r'C:\WINDOWS\Fonts\ipaexg.ttf'
else: fname = r'/Users/admin/Library/Fonts/ipaexg.ttf'
fp = matplotlib.font_manager.FontProperties(fname=fname, size=14)
con=sqlite3.connect(os.environ["DATA"]+"/board/board.db")
cur=con.cursor()
user=pd.read_sql_query("select tm,count(*) as user_count from (select strftime('%Y%m%d',date) as tm,user from board"+
" where date between ? and ? and code=? group by tm,user) group by tm",con,params=(fmdate,todate,code,))
user["tm"]=pd.to_datetime(user["tm"])
#user=user.set_index("tm")
#cur.close()
#con.close()
print(user.head())
#con=sqlite3.connect("../../../../data/stock/daily.db")
#cur=con.cursor()
volume=pd.read_sql_query("select strftime('%Y%m%d',date) as tm,volume/10000 as volume from histDaily where code=? and date between ? and ? order by tm",con,params=(code,fmdate,todate,))
volume["tm"]=pd.to_datetime(volume["tm"])
#volume=volume.set_index("tm")
cur.close()
con.close()
print(volume.head())
data=pd.merge(volume,user,how="inner",left_on="tm",right_on="tm")
print(data.head())
# https://qiita.com/m-hayashi/items/ee379c86e3e18f0ddc6d
# http://pythondatascience.plavox.info/scikit-learn/線形回帰
x=pd.DataFrame(data["volume"]).values
y=pd.DataFrame(data["user_count"]).values
model=linear_model.LinearRegression()
model.fit(x,y)
# regression coefficient
print(model.coef_)
# intercept (error term)
print(model.intercept_)
# coefficient of determination (R^2)
print(model.score(x, y))
plt.scatter(x,y,color="red")
plt.plot(x,model.predict(x))
plt.title("出来高と投稿者数の関係", fontproperties=fp)
plt.xlabel("volume")
plt.ylabel("user_count")
plt.show()<jupyter_output> tm user_count
0 2015-01-01 1
1 2015-01-02 2
2 2015-01-03 1
3 2015-01-04 1
4 2015-01-05 1
tm volume
0 2015-01-05 2110.3
1 2015-01-06 3103.5
2 2015-01-07 2395.0
3 2015-01-08 1928.2
4 2015-01-09 3816.0
tm volume user_count
0 2015-01-05 2110.3 1
1 2015-01-06 3103.5 2
2 2015-01-07 2395.0 6
3 2015-01-08 1928.2 1
4 2015-01-09 3816.0 11
[[ 0.02094571]]
[-17.38032316]
0.709948491945
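<jupyter_text>The section above asks for a correlation coefficient, while the cell reports the regression R^2; as an added illustration (not in the original notebook), the Pearson correlation between volume and the number of posters can also be read directly off the merged dataframe:<jupyter_code>print(data[["volume","user_count"]].corr())<jupyter_output><empty_output>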
<jupyter_text>- As trading volume increases, the number of posters also increases
- In short, higher volume makes the stock more visible, so more people join in## Relationship between post score and trading volume
So can the score, too, be explained by trading volume?<jupyter_code># emoji kind
%matplotlib inline
import matplotlib
import matplotlib.pyplot as plt
import numpy as np
import sqlite3
import os
import pandas as pd
from sklearn import linear_model
fmdate="2015-01-01"
todate="2017-07-31"
code="6502"
if os.name == "nt": fname=r'C:\WINDOWS\Fonts\ipaexg.ttf'
else: fname = r'/Users/admin/Library/Fonts/ipaexg.ttf'
fp = matplotlib.font_manager.FontProperties(fname=fname, size=14)
con=sqlite3.connect(os.environ["DATA"]+"/board/board.db")
cur=con.cursor()
sql="""select strftime('%Y%m%d',b.date) as tm,sum(case when e.score >=0 then 1 else -1 end) as score,avg(e.score/e.wordnum) as ascore from board b ,board_score e where b.code=e.code and b.tno=e.tno and b.mno=e.mno and b.code=? and b.date between ? and ? group by tm order by tm"""
score=pd.read_sql_query(sql,con,params=(code,fmdate,todate,))
#https://qiita.com/ColdFreak/items/1028927c81fdfa5f3ac2
score["tm"]=pd.to_datetime(score["tm"])
#cur.close()
#con.close()
print(score.head())
#con=sqlite3.connect("../../../../data/stock/daily.db")
#cur=con.cursor()
volume=pd.read_sql_query("select strftime('%Y%m%d',date) as tm,volume/10000 as volume from histDaily where code=? and date between ? and ? order by tm",con,params=(code,fmdate,todate,))
volume["tm"]=pd.to_datetime(volume["tm"])
#volume=volume.set_index("tm")
cur.close()
con.close()
print(volume.head())
data=pd.merge(volume,score,how="inner",left_on="tm",right_on="tm")
print(data.head())
# https://qiita.com/m-hayashi/items/ee379c86e3e18f0ddc6d
# http://pythondatascience.plavox.info/scikit-learn/線形回帰
x=pd.DataFrame(data["volume"]).values
y=pd.DataFrame(data["score"]).values
model=linear_model.LinearRegression()
model.fit(x,y)
# regression coefficient
print(model.coef_)
# intercept (error term)
print(model.intercept_)
# coefficient of determination (R^2)
print(model.score(x, y))
plt.scatter(x,y,color="red")
plt.plot(x,model.predict(x))
plt.title("出来高と投稿スコアの関係,スコアは-1to1に修正", fontproperties=fp)
plt.xlabel("volume")
plt.ylabel("score")
plt.show()
## averaged score (score divided by word count)
x=pd.DataFrame(data["volume"]).values
y=pd.DataFrame(data["ascore"]).values
model=linear_model.LinearRegression()
model.fit(x,y)
# regression coefficient
print(model.coef_)
# intercept (error term)
print(model.intercept_)
# coefficient of determination (R^2)
print(model.score(x, y))
plt.scatter(x,y,color="red")
plt.plot(x,model.predict(x))
plt.title("出来高と投稿スコアの関係,スコアはscore/wordnumに修正", fontproperties=fp)
plt.xlabel("volume")
plt.ylabel("score")
plt.show()
<jupyter_output> tm score ascore
0 2015-01-01 1 0.043008
1 2015-01-02 -3 -0.463773
2 2015-01-03 -1 -0.399109
3 2015-01-04 -1 -0.455821
4 2015-01-05 -1 -0.555359
tm volume
0 2015-01-05 2110.3
1 2015-01-06 3103.5
2 2015-01-07 2395.0
3 2015-01-08 1928.2
4 2015-01-09 3816.0
tm volume score ascore
0 2015-01-05 2110.3 -1 -0.555359
1 2015-01-06 3103.5 -2 -0.728097
2 2015-01-07 2395.0 -7 -0.479000
3 2015-01-08 1928.2 -1 -0.605556
4 2015-01-09 3816.0 -13 -0.503629
[[-0.09497118]]
[ 101.33264451]
0.650158690959
<jupyter_text>Trading volume and the post score are correlated as well## Relationship between post score and number of participants
If so, is there a correlation here too?<jupyter_code># emoji kind
%matplotlib inline
import matplotlib
import matplotlib.pyplot as plt
import numpy as np
import sqlite3
import os
import pandas as pd
from sklearn import linear_model
fmdate="2015-01-01"
todate="2017-07-31"
code="6502"
if os.name == "nt": fname=r'C:\WINDOWS\Fonts\ipaexg.ttf'
else: fname = os.environ["HOME"]+'/Library/Fonts/ipaexg.ttf'
fp = matplotlib.font_manager.FontProperties(fname=fname, size=14)
con=sqlite3.connect(os.environ["DATA"]+"/board/board.db")
cur=con.cursor()
sql="""select strftime('%Y%m%d',b.date) as tm,sum(case when e.score >=0 then 1 else -1 end) as score,avg(e.score/e.wordnum) as ascore from board b ,board_score e where b.code=e.code and b.tno=e.tno and b.mno=e.mno and b.code=? and b.date between ? and ? group by tm order by tm"""
score=pd.read_sql_query(sql,con,params=(code,fmdate,todate,))
#https://qiita.com/ColdFreak/items/1028927c81fdfa5f3ac2
score["tm"]=pd.to_datetime(score["tm"])
user=pd.read_sql_query("select tm,count(*) as user_count from (select strftime('%Y%m%d',date) as tm,user from board"+
" where date between ? and ? and code=? group by tm,user) group by tm",con,params=(fmdate,todate,code,))
user["tm"]=pd.to_datetime(user["tm"])
#user=user.set_index("tm")
cur.close()
con.close()
print(user.head())
print(score.head())
data=pd.merge(user,score,how="inner",left_on="tm",right_on="tm")
print(data.head())
# https://qiita.com/m-hayashi/items/ee379c86e3e18f0ddc6d
# http://pythondatascience.plavox.info/scikit-learn/線形回帰
x=pd.DataFrame(data["user_count"]).values
y=pd.DataFrame(data["score"]).values
model=linear_model.LinearRegression()
model.fit(x,y)
# regression coefficient
print(model.coef_)
# intercept (error term)
print(model.intercept_)
# coefficient of determination (R^2)
print(model.score(x, y))
plt.scatter(x,y,color="red")
plt.plot(x,model.predict(x))
plt.title("参加者数と投稿スコアの関係,スコア-1to1修正", fontproperties=fp)
plt.xlabel("user_count")
plt.ylabel("score")
plt.show()
####
x=pd.DataFrame(data["user_count"]).values
y=pd.DataFrame(data["ascore"]).values
model=linear_model.LinearRegression()
model.fit(x,y)
# regression coefficient
print(model.coef_)
# intercept (error term)
print(model.intercept_)
# coefficient of determination (R^2)
print(model.score(x, y))
plt.scatter(x,y,color="red")
plt.plot(x,model.predict(x))
plt.title("参加者数と投稿スコアの関係,スコアscore/wordnum", fontproperties=fp)
plt.xlabel("user_count")
plt.ylabel("score")
plt.show()<jupyter_output> tm user_count
0 2015-01-01 1
1 2015-01-02 2
2 2015-01-03 1
3 2015-01-04 1
4 2015-01-05 1
tm score ascore
0 2015-01-01 1 0.043008
1 2015-01-02 -3 -0.463773
2 2015-01-03 -1 -0.399109
3 2015-01-04 -1 -0.455821
4 2015-01-05 -1 -0.555359
tm user_count score ascore
0 2015-01-01 1 1 0.043008
1 2015-01-02 2 -3 -0.463773
2 2015-01-03 1 -1 -0.399109
3 2015-01-04 1 -1 -0.455821
4 2015-01-05 1 -1 -0.555359
[[-4.60171765]]
[ 28.35209875]
0.948489885375
<jupyter_text>Apparently the score drops as more people join; there seem to be many users who want to talk the stock down## Emoji characteristics when there are many vs. few participants
- When there are many participants the tendency to talk the stock down is stronger, so do emoji scores drop?
- Correlation between the daily number of posts per emoji and the number of participants, or trading volume
- The top 10 most-used emoji
First output these as they are, then correct to one emoji type per post (removing the influence of posts that repeat the same emoji).<jupyter_code>%matplotlib inline
import matplotlib
import matplotlib.pyplot as plt
import numpy as np
import sqlite3
import os
import pandas as pd
from sklearn import linear_model
fmdate="2015-01-01"
todate="2017-07-31"
code="6502"
if os.name == "nt": fname=r'C:\WINDOWS\Fonts\ipaexg.ttf'
else: fname = os.environ["HOME"]+'/Library/Fonts/ipaexg.ttf'
fp = matplotlib.font_manager.FontProperties(fname=fname, size=14)
con=sqlite3.connect(os.environ["DATA"]+"/board/board.db")
# top 10
sql="select c.emoji,sum(c.num),s.score from board_emoji_count c,emoji_score s,board b where b.code=c.code and b.tno=c.tno and b.mno=c.mno and c.emoji=s.emoji and c.code=? and b.date between ? and ? group by c.emoji order by sum(c.num) desc limit 10"
emoji_list=pd.read_sql_query(sql,con,params=(code,fmdate,todate,))
print(emoji_list)
for em in emoji_list["emoji"]:
print(em)
user=pd.read_sql_query("select tm,count(*) as user_count from (select strftime('%Y%m%d',date) as tm,user from board"+
" where date between ? and ? and code=? group by tm,user) group by tm",con,params=(fmdate,todate,code,))
user["tm"]=pd.to_datetime(user["tm"])
emoji=pd.read_sql_query("select strftime('%Y%m%d',date) as tm,sum(c.num) as emoji_count from board b ,board_emoji_count c"+
" where b.code=c.code and b.tno=c.tno and b.mno=c.mno and b.date between ? and ? and b.code=? and c.emoji=? group by tm",con,params=(fmdate,todate,code,em,))
emoji["tm"]=pd.to_datetime(emoji["tm"])
data=pd.merge(user,emoji,how="inner")
data.drop("tm",axis=1)
x=pd.DataFrame(data["user_count"]).values
y=pd.DataFrame(data["emoji_count"]).values
model=linear_model.LinearRegression()
model.fit(x,y)
# 回帰係数
print(model.coef_)
# 切片 (誤差)
print(model.intercept_)
# 決定係数
print(model.score(x, y))
plt.scatter(x,y,color="red")
plt.plot(x,model.predict(x))
plt.title("絵文字"+em+"数とユーザ数の関係", fontproperties=fp)
plt.xlabel("user_count")
plt.ylabel("emoji_score")
plt.show()
con.close()
<jupyter_output> emoji sum(c.num) score
0 🙌 3329 1.00
1 😱 908 -0.75
2 💛 549 0.75
3 💀 514 -0.50
4 💩 450 -0.75
5 😁 383 0.50
6 😅 381 0.50
7 👍 180 0.50
8 😊 165 0.50
9 😃 134 0.50
🙌
[[ 0.06752109]]
[ 33.30399642]
0.0204427406372
<jupyter_text>## Count each emoji at most once per post<jupyter_code>%matplotlib inline
import matplotlib
import matplotlib.pyplot as plt
import numpy as np
import sqlite3
import os
import pandas as pd
from sklearn import linear_model
fmdate="2015-01-01"
todate="2017-07-31"
code="6502"
if os.name == "nt": fname=r'C:\WINDOWS\Fonts\ipaexg.ttf'
else: fname = os.environ["HOME"]+'/Library/Fonts/ipaexg.ttf'
fp = matplotlib.font_manager.FontProperties(fname=fname, size=14)
con=sqlite3.connect(os.environ["DATA"]+"/board/board.db")
# top 10
sql="select c.emoji,sum(c.num),s.score from board_emoji_count c,emoji_score s,board b where b.code=c.code and b.tno=c.tno and b.mno=c.mno and c.emoji=s.emoji and c.code=? and b.date between ? and ? group by c.emoji order by sum(c.num) desc limit 10"
emoji_list=pd.read_sql_query(sql,con,params=(code,fmdate,todate,))
print(emoji_list)
for em in emoji_list["emoji"]:
print(em)
user=pd.read_sql_query("select tm,count(*) as user_count from "+
"(select strftime('%Y%m%d',date) as tm,user from board "+
" where date between ? and ? and code=? group by tm,user) "+
"group by tm",con,params=(fmdate,todate,code,))
user["tm"]=pd.to_datetime(user["tm"])
emoji=pd.read_sql_query("select strftime('%Y%m%d',date) as tm,count(*) as emoji_count "+
" from board b ,board_emoji_count c"+
" where b.code=c.code and b.tno=c.tno and b.mno=c.mno and "+
" b.date between ? and ? and b.code=? and c.emoji=? group by tm",con,params=(fmdate,todate,code,em,))
emoji["tm"]=pd.to_datetime(emoji["tm"])
data=pd.merge(user,emoji,how="inner")
data.drop("tm",axis=1)
x=pd.DataFrame(data["user_count"]).values
y=pd.DataFrame(data["emoji_count"]).values
model=linear_model.LinearRegression()
model.fit(x,y)
# 回帰係数
print(model.coef_)
# 切片 (誤差)
print(model.intercept_)
# 決定係数
print(model.score(x, y))
plt.scatter(x,y,color="red")
plt.plot(x,model.predict(x))
plt.title("絵文字"+em+"数とユーザ数の関係", fontproperties=fp)
plt.xlabel("user_count")
plt.ylabel("emoji_score")
plt.show()
con.close()
<jupyter_output> emoji sum(c.num) score
0 🙌 3329 1.00
1 😱 908 -0.75
2 💛 549 0.75
3 💀 514 -0.50
4 💩 450 -0.75
5 😁 383 0.50
6 😅 381 0.50
7 👍 180 0.50
8 😊 165 0.50
9 😃 134 0.50
🙌
[[ 0.00145165]]
[ 0.92381301]
0.23822882925
<jupyter_text># Emoji post rate and trading volume
Is there a relationship between the emoji post rate and trading volume? When volume rises more people show up; does the emoji rate rise as well? Measured as the share of posts per day.<jupyter_code>%matplotlib inline
import matplotlib
import matplotlib.pyplot as plt
import numpy as np
import sqlite3
import os
import pandas as pd
from sklearn import linear_model
fmdate="2015-01-01"
todate="2017-07-31"
code="6502"
if os.name == "nt": fname=r'C:\WINDOWS\Fonts\ipaexg.ttf'
else: fname = os.environ["HOME"]+'/Library/Fonts/ipaexg.ttf'
fp = matplotlib.font_manager.FontProperties(fname=fname, size=14)
con=sqlite3.connect(os.environ["DATA"]+"/board/board.db")
# emoji post
sql="select strftime('%Y%m%d',b.date) as tm,count(*) emoji_post_count from board b,board_emoji_count e where b.code=e.code and b.mno=e.mno and b.tno=e.mno and e.code=? and b.date between ? and ? group by tm order by tm "
emoji_list=pd.read_sql_query(sql,con,params=(code,fmdate,todate,))
# all post
sql="select strftime('%Y%m%d',date) as tm,count(*) all_post_count from board where code=? and date between ? and ? group by tm"
allpost_list=pd.read_sql_query(sql,con,params=(code,fmdate,todate,))
data=pd.merge(emoji_list,allpost_list,how="inner",left_on="tm",right_on="tm")
data["emoji_post_ratio"]=data["emoji_post_count"]/data["all_post_count"]
con.close()
# trading volume
con=sqlite3.connect("../../../../data/stock/daily.db")
cur=con.cursor()
volume=pd.read_sql_query("select strftime('%Y%m%d',date) as tm,volume/10000 as volume from histDaily where code=? and date between ? and ? order by tm",con,params=(code,fmdate,todate,))
#volume["tm"]=pd.to_datetime(volume["tm"])
#volume=volume.set_index("tm")
cur.close()
con.close()
data=pd.merge(data,volume,how="inner",left_on="tm",right_on="tm")
x=pd.DataFrame(data["volume"]).values
y=pd.DataFrame(data["emoji_post_ratio"]).values
model=linear_model.LinearRegression()
model.fit(x,y)
# regression coefficient
print(model.coef_)
# intercept (error term)
print(model.intercept_)
# coefficient of determination (R^2)
print(model.score(x, y))
plt.scatter(x,y,color="red")
plt.plot(x,model.predict(x))
plt.title("絵文字投稿率と出来高", fontproperties=fp)
plt.xlabel("volume")
plt.ylabel("emoji_post_ratio")
plt.show()<jupyter_output>[[ -4.82129903e-07]]
[ 0.01603249]
0.12813828662
| no_license | /theme/notebook/22_toshiba_stock.ipynb | k-utsubo/xc40 | 10 |
<jupyter_start><jupyter_text>## Online Factorization Machine
Online factorization models take a single example as input, make a prediction, and then train on that example.### 1. Setup
The imports from models bring in the model classes for use. We have also imported a few other packages for plotting.<jupyter_code>import sys
sys.path.append('./../')
from utils import data_preprocess, plot
import os
import pickle
import numpy as np
import torch
from time import time
from models.models_online.FM_FTRL import FM_FTRL
from models.models_online.SFTRL_CCFM import SFTRL_CCFM
from models.models_online.SFTRL_Vanila import SFTRL_Vanila
from models.models_online.RRF_Online import RRF_Online
from utils.data_manager import *
from utils.metric_manager import *
Tensor_type = torch.DoubleTensor
import seaborn as sns
import os
sns.set()
sns.set_style('white')
import matplotlib
plt.rcParams["axes.grid"] = True
plt.rc('font', family='serif')
current_palette = sns.color_palette(sns.hls_palette(5, l=.6, s=1.0))
sns.palplot(current_palette)
current_palette = np.asarray(current_palette)<jupyter_output><empty_output><jupyter_text>### 2. Create a dataset depending on task
We prepare MovieLens 100k for regression and the cod-rna dataset for classification.<jupyter_code>task = 'cls'
if task == 'reg':
nbUsers = 943
nbMovies = 1682
nbFeatures = nbUsers + nbMovies
nbRatingsTrain = 90570
nbRatingsTest = 9430
data_dir = os.getcwd() + '/../dataset/ml-100k/'
#filename1, filename2 = 'ub.base', 'ub.test'
filename1, filename2 = 'ua.base', 'ua.test'
_, x_train, y_train, rate_train, timestamp_train = load_dataset_movielens(data_dir + filename1,nbRatingsTrain,nbFeatures,nbUsers)
x_train_s, rate_train_s, _ = sort_dataset_movielens(x_train, rate_train, timestamp_train)
want_permute = True
if want_permute :
idx = np.random.permutation(x_train_s.shape[0])
x_train_s = x_train_s[idx]
rate_train_s = rate_train_s[idx]
else:
pass
x_train_s = x_train_s.todense()
elif task == 'cls':
data_dir = './../dataset/cod-rna2/'
filename = "cod-rna2.scale"
X, y = load_svmlight_file(data_dir + filename, n_features=8)
X = X.toarray()
want_permute = False
if want_permute :
idx = np.random.permutation(X.shape[0])
x_train_s = np.asarray(X[idx])
rate_train_s = np.asarray(y[idx])
else:
x_train_s = np.asarray(X)
rate_train_s = np.asarray(y)
else:
raise ValueError<jupyter_output><empty_output><jupyter_text>### 3. Create factorization machines and conduct online task
There are four models in this notebook; 'FTRL','SFTRL_C','SFTRL_V','Online_RFF'<jupyter_code>model_option_list = ['FTRL','SFTRL_C','SFTRL_V','Online_RFF']
model_option = 'SFTRL_C'
#model_option = 'Online_RFF'
m = 40
lr_FM = 0.005
assert(model_option in model_option_list)
if model_option is 'FTRL':
Model_FM_FTRL = FM_FTRL(Tensor_type(x_train_s),Tensor_type(rate_train_s) , task,lr_FM, m )
pred_y, label_y , time = Model_FM_FTRL.online_learning()
elif model_option is 'SFTRL_C':
Model_SFTRL_C = SFTRL_CCFM(Tensor_type(x_train_s),Tensor_type(rate_train_s) , task,lr_FM,m )
pred_y, label_y , time = Model_SFTRL_C.online_learning()
elif model_option is 'SFTRL_V':
Model_SFTRL_V = SFTRL_Vanila(Tensor_type(x_train_s),Tensor_type(rate_train_s) , task,lr_FM,m )
pred_y, label_y , time = Model_SFTRL_V.online_learning()
elif model_option in 'Online_RFF':
inputs_matrix, outputs = Tensor_type(x_train_s), Tensor_type(rate_train_s)
Online_RRF = RRF_Online(inputs_matrix,outputs,task)
pred_y, label_y , time = Online_RRF.online_learning()
#print(type(x_train_s))
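# Conceptually, each of the four models above follows the same online protocol.
# Illustrative sketch only -- the actual update rules live inside the model classes:
#     for x_t, y_t in stream:
#         y_hat = model.predict(x_t)   # 1) predict on the incoming example
#         record(y_hat, y_t)           # 2) store prediction and label
#         model.update(x_t, y_t)       # 3) train on that single example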
<jupyter_output>========================================
SFTRL_CCFM_0.005_40_start
0 th : pred 1.000000 , real -1.000000
1000 th : pred -1.000000 , real -1.000000
2000 th : pred -1.000000 , real 1.000000
3000 th : pred -1.000000 , real -1.000000
4000 th : pred -1.000000 , real -1.000000
5000 th : pred -1.000000 , real -1.000000
6000 th : pred -1.000000 , real -1.000000
7000 th : pred -1.000000 , real -1.000000
8000 th : pred -1.000000 , real -1.000000
9000 th : pred -1.000000 , real -1.000000
10000 th : pred -1.000000 , real -1.000000
11000 th : pred -1.000000 , real -1.000000
12000 th : pred -1.000000 , real -1.000000
13000 th : pred -1.000000 , real -1.000000
14000 th : pred -1.000000 , real -1.000000
15000 th : pred -1.000000 , real -1.000000
16000 th : pred -1.000000 , real -1.000000
17000 th : pred -1.000000 , real -1.000000
18000 th : pred -1.000000 , real -1.000000
19000 th : pred -1.000000 , real -1.000000
20000 th : pred -1.000000 , real -1.000000
21[...]<jupyter_text>### 4. Calculate Metric
We prepare a metric for each task: regression (average root mean square error) and classification (average accuracy).<jupyter_code>if task == 'reg':
metric = regression_metric(pred_y,label_y)
elif task == 'cls':
_,metric = classfication_metric(pred_y,label_y)<jupyter_output><empty_output><jupyter_text>### 5. Results figures
We show the prediction results and the accumulated metric for regression and classification.<jupyter_code>fontsiz = 15
fig = plt.figure(figsize=(8*2,5.5))
fig.add_subplot(1,2,1)
plt.plot(pred_y,'b.', label = 'pred',alpha = 0.8)
plt.plot(label_y,'r.' , label = 'label',alpha = 0.8)
plt.xlabel('iteration' , fontsize = fontsiz)
plt.ylabel('y' , fontsize = fontsiz)
plt.title('pred vs real' ,fontsize = fontsiz)
plt.tick_params(axis='both', labelsize=fontsiz)
#plt.legend(loc='upper center', bbox_to_anchor=(0, 1. + 0.075 ,1-0.05,0.1), ncol = 2 , fontsize = fontsiz)
plt.legend( fontsize = fontsiz)
fig.add_subplot(1,2,2)
plt.plot(metric,'b', label = 'metric',alpha = 0.8 ,linewidth = 2.0)
plt.xlabel('iteration' , fontsize = fontsiz)
plt.ylabel('accumulated metric' , fontsize = fontsiz)
plt.title('accumulated metric over iteration' ,fontsize = fontsiz)
plt.tick_params(axis='both', labelsize=fontsiz)
#plt.legend(ncol = 2 , fontsize = 12)
#plt.legend(loc='upper center', bbox_to_anchor=(0, 1. + 0.075 ,1-0.05,0.1), ncol = 2 , fontsize = fontsiz)
plt.legend( fontsize = fontsiz)
fig.tight_layout()
<jupyter_output><empty_output>
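For reference, `regression_metric` and `classfication_metric` used in section 4 are helpers from the repository and are not shown in this notebook. A minimal sketch of what such accumulated metrics could look like (an assumption about their behaviour, not the repository's actual implementation):

```
import numpy as np

def accumulated_rmse(pred_y, label_y):
    # running root-mean-squared error after each online prediction
    sq_err = np.cumsum((np.asarray(pred_y) - np.asarray(label_y)) ** 2)
    return np.sqrt(sq_err / np.arange(1, len(sq_err) + 1))

def accumulated_accuracy(pred_y, label_y):
    # running accuracy for -1/+1 labels, thresholding predictions at 0
    hits = np.cumsum(np.sign(np.asarray(pred_y)) == np.asarray(label_y))
    return hits / np.arange(1, len(hits) + 1)
```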
| no_license | /jupyters/online_models_example.ipynb | yejihan-dev/fm-for-online-recommendation | 5 |
<jupyter_start><jupyter_text>### Run this once to obtain a cookie
- Note: fill in your own account name and password<jupyter_code>import requests
import time
from selenium import webdriver
def get_pixiv_cookie(pixiv_id,pixiv_pw):
    driver = webdriver.Chrome() # optional argument; if not specified, Selenium searches PATH for chromedriver
driver.get('https://accounts.pixiv.net/login');
time.sleep(0.5)
account = driver.find_element_by_css_selector('input[autocomplete="username"]')
account.send_keys(pixiv_id)
time.sleep(10)
password = driver.find_element_by_css_selector('input[autocomplete="current-password"]')
password.send_keys(pixiv_pw)
time.sleep(1)
password.submit()
time.sleep(10)
cookie=driver.get_cookies()
driver.close()
return cookie
f=open("account.txt")
p_id = f.readline().rstrip()
p_pw = f.readline().rstrip()
f.close()
cookies_list=get_pixiv_cookie(p_id,p_pw)
f=open("cookie.txt","w")
f.write(str(cookies_list))
f.close()<jupyter_output><empty_output><jupyter_text>### The cells below can be run directly to crawl images by illustration id- Load the cookie from the saved file<jupyter_code>f=open("cookie.txt","r")
cookie_list=eval(f.readline())
f.close()
s = requests.Session()
for cookie in cookie_list:
    s.cookies.set(cookie['name'], cookie['value'])<jupyter_output><empty_output><jupyter_text>- Fetch the preload data<jupyter_code>pid='86685722'
delay=0.5
url="https://www.pixiv.net/artworks/"+pid
hder1={
'User-Agent':'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/84.0.4147.135 Safari/537.36 Edg/84.0.522.63',
}
r=s.get(url,headers=hder1)
soup=BeautifulSoup(r.text, 'html.parser')
meta=soup.find('meta',id="meta-preload-data")# after pretty-printing the page it is clear that this tag holds the preload data
js=json.loads(meta['content'])
# print(json.dumps(js,indent=2))<jupyter_output><empty_output><jupyter_text>After printing the page with `print(soup.prettify())`, you can clearly see that the preload data sits in a `meta` tag with the id `meta-preload-data`.
The extracted content is then formatted as well.
It is JSON, so `json.dumps(js,indent=2)` is used for pretty-printed output.

`js` itself is a nested structure; the needed information is read out by indexing into it, mainly the image url and the page count.- Fetch the images<jupyter_code>def get_image(_url,_hder,_filename=None,_folder='img'):
if _filename==None:
img_path=urllib.parse.urlparse(_url).path
        _filename=img_path.split('/')[-1]# the last path component is the file name
f=open(_folder+'/'+_filename,"wb")
re=requests.get(_url,headers=_hder)
f.write(re.content)
f.close()
hder2 = {
'User-Agent':'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/84.0.4147.135 Safari/537.36 Edg/84.0.522.63',
"referer":"https://www.pixiv.net/"
}
ori=js["illust"][pid]['urls']['original']
pagecount=js['illust'][pid]['pageCount']
for i in range(pagecount):
url=ori.replace("p0","p"+str(i))
# print(url)
get_image(url,hder2)
    time.sleep(delay)<jupyter_output><empty_output><jupyter_text>Note that `pximg.net` only accepts original-image requests issued with a `pixiv.net` referer; this was found by filtering the Network tab on the original-image page.
### All of the above code, consolidated<jupyter_code>import requests
import urllib.request
import urllib.parse
from bs4 import BeautifulSoup
from PIL import Image
import sys
import time
import json
def get_image(_url,_hder,_filename=None,_folder='img'):
if _filename==None:
img_path=urllib.parse.urlparse(_url).path
        _filename=img_path.split('/')[-1]# the last path component is the file name
f=open(_folder+'/'+_filename,"wb")
re=requests.get(_url,headers=_hder)
f.write(re.content)
f.close()
def download_id(pid):
f=open("cookie.txt","r")
cookie_list=eval(f.readline())
f.close()
s = requests.Session()
for cookie in cookie_list:
s.cookies.set(cookie['name'], cookie['value'])
if(type(pid)==int):
pid=str(pid)
delay=0.5
url="https://www.pixiv.net/artworks/"+pid
hder1={
'User-Agent':'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/84.0.4147.135 Safari/537.36 Edg/84.0.522.63',
}
r=s.get(url,headers=hder1)
soup=BeautifulSoup(r.text, 'html.parser')
meta=soup.find('meta',id="meta-preload-data")
js=json.loads(meta['content'])
hder2 = {
'User-Agent':'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/84.0.4147.135 Safari/537.36 Edg/84.0.522.63',
"referer":"https://www.pixiv.net/"
}
ori=js["illust"][pid]['urls']['original']
pagecount=js['illust'][pid]['pageCount']
for i in range(pagecount):
url=ori.replace("p0","p"+str(i))
get_image(url,hder2)
time.sleep(delay)
download_id(90479236)<jupyter_output><empty_output><jupyter_text>### Try crawling artwork ids by author<jupyter_code>author_id='27691'
url='https://www.pixiv.net/ajax/user/'+author_id+'/profile/all'
hder={
'User-Agent':'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/84.0.4147.135 Safari/537.36 Edg/84.0.522.63',
}
js=json.loads(s.get(url,headers=hder).text)
print(js["body"]["illusts"].keys())<jupyter_output><empty_output><jupyter_text>At first the artist page was scraped directly with
```
url='https://www.pixiv.net/users/'+author_id+'/artworks'
soup=BeautifulSoup(s.get(url,headers=hder).text)
print(soup.prettify())
```
but analysis showed that it contains no artwork ids. Loading the page in a browser and looking at XHR under the Network tab reveals the page that delivers the id information: https://www.pixiv.net/ajax/user/27691/profile/all 
It can be opened directly, which shows that no cookie is needed; a browser User-Agent header is enough.
Having the ids of all illustrations is equivalent to having all of the images.<jupyter_code>for id in list(js["body"]["illusts"].keys()):
download_id(id)<jupyter_output><empty_output>
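A small robustness note on the cookie handling above: round-tripping the cookie list through `str()` and `eval()` works, but `eval` on file contents is fragile and unsafe. A JSON-based variant (just a sketch, keeping the same `cookie.txt` file name and the same `name`/`value` fields produced by `driver.get_cookies()`) could look like this:

```
import json
import requests

# save (right after get_pixiv_cookie returns cookies_list)
with open("cookie.txt", "w") as f:
    json.dump(cookies_list, f)

# load
with open("cookie.txt") as f:
    cookie_list = json.load(f)

s = requests.Session()
for cookie in cookie_list:
    s.cookies.set(cookie['name'], cookie['value'])
```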
| no_license | /pixiv.ipynb | Unknown-Chinese-User/pixiv-spider | 7 |
<jupyter_start><jupyter_text>**This notebook is an exercise in the [Intro to Deep Learning](https://www.kaggle.com/learn/intro-to-deep-learning) course. You can reference the tutorial at [this link](https://www.kaggle.com/ryanholbrook/deep-neural-networks).**
---
# Introduction #
In the tutorial, we saw how to build deep neural networks by stacking layers inside a `Sequential` model. By adding an *activation function* after the hidden layers, we gave the network the ability to learn more complex (non-linear) relationships in the data.
In these exercises, you'll build a neural network with several hidden layers and then explore some activation functions beyond ReLU. Run this next cell to set everything up!<jupyter_code>import tensorflow as tf
# Setup plotting
import matplotlib.pyplot as plt
plt.style.use('seaborn-whitegrid')
# Set Matplotlib defaults
plt.rc('figure', autolayout=True)
plt.rc('axes', labelweight='bold', labelsize='large',
titleweight='bold', titlesize=18, titlepad=10)
# Setup feedback system
from learntools.core import binder
binder.bind(globals())
from learntools.deep_learning_intro.ex2 import *<jupyter_output><empty_output><jupyter_text>In the *Concrete* dataset, your task is to predict the compressive strength of concrete manufactured according to various recipes.
Run the next code cell without changes to load the dataset.<jupyter_code>import pandas as pd
concrete = pd.read_csv('../input/dl-course-data/concrete.csv')
concrete.head()<jupyter_output><empty_output><jupyter_text># 1) Input Shape #
The target for this task is the column `'CompressiveStrength'`. The remaining columns are the features we'll use as inputs.
What would be the input shape for this dataset?<jupyter_code>concrete.shape
# YOUR CODE HERE
input_shape = [8]
# Check your answer
q_1.check()
# Lines below will give you a hint or solution code
#q_1.hint()
#q_1.solution()<jupyter_output><empty_output><jupyter_text># 2) Define a Model with Hidden Layers #
Now create a model with three hidden layers, each having 512 units and the ReLU activation. Be sure to include an output layer of one unit and no activation, and also `input_shape` as an argument to the first layer.<jupyter_code>from tensorflow import keras
from tensorflow.keras import layers
# YOUR CODE HERE
model = keras.Sequential([
layers.Dense(units= 512, activation= 'relu',input_shape= [8]),
layers.Dense(units= 512, activation= 'relu'),
layers.Dense(units= 512, activation= 'relu'),
layers.Dense(units= 1)
])
# Check your answer
q_2.check()
# Lines below will give you a hint or solution code
#q_2.hint()
#q_2.solution()<jupyter_output><empty_output><jupyter_text># 3) Activation Layers #
Let's explore activation functions a bit.
The usual way of attaching an activation function to a `Dense` layer is to include it as part of the definition with the `activation` argument. Sometimes though you'll want to put some other layer between the `Dense` layer and its activation function. (We'll see an example of this in Lesson 5 with *batch normalization*.) In this case, we can define the activation in its own `Activation` layer, like so:
```
layers.Dense(units=8),
layers.Activation('relu')
```
This is completely equivalent to the ordinary way: `layers.Dense(units=8, activation='relu')`.
Rewrite the following model so that each activation is in its own `Activation` layer.<jupyter_code>### YOUR CODE HERE: rewrite this to use activation layers
model = keras.Sequential([
layers.Dense(32, activation='relu', input_shape=[8]),
layers.Dense(32, activation='relu'),
layers.Dense(1),
])
model = keras.Sequential([
layers.Dense(units= 32, input_shape=[8]),
layers.Activation('relu'),
layers.Dense(units= 32),
layers.Activation('relu'),
layers.Dense(units= 1),
])
# Check your answer
q_3.check()
# Lines below will give you a hint or solution code
#q_3.hint()
#q_3.solution()<jupyter_output><empty_output><jupyter_text># Optional: Alternatives to ReLU #
There is a whole family of variants of the `'relu'` activation -- `'elu'`, `'selu'`, and `'swish'`, among others -- all of which you can use in Keras. Sometimes one activation will perform better than another on a given task, so you could consider experimenting with activations as you develop a model. The ReLU activation tends to do well on most problems, so it's a good one to start with.
Let's look at the graphs of some of these. Change the activation from `'relu'` to one of the others named above. Then run the cell to see the graph. (Check out the [documentation](https://www.tensorflow.org/api_docs/python/tf/keras/activations) for more ideas.)<jupyter_code># YOUR CODE HERE: Change 'relu' to 'elu', 'selu', 'swish'... or something else
activation_layer = layers.Activation('swish')
x = tf.linspace(-3.0, 3.0, 100)
y = activation_layer(x) # once created, a layer is callable just like a function
plt.figure(dpi=100)
plt.plot(x, y)
plt.xlim(-3, 3)
plt.xlabel("Input")
plt.ylabel("Output")
plt.show()<jupyter_output><empty_output>
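Training is covered in later lessons of the course, but purely as a sketch (not part of this exercise), the model from question 3 could be compiled and fit on the *Concrete* data as below; the `'mae'` loss and the epoch count are illustrative assumptions, not the course's prescribed settings.

```
X = concrete.drop('CompressiveStrength', axis=1)  # the 8 feature columns
y = concrete['CompressiveStrength']               # the target

model.compile(optimizer='adam', loss='mae')
history = model.fit(X, y, batch_size=128, epochs=5, verbose=1)
```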
| no_license | /Intro to Deep Learning/2 - Deep Neural Networks/exercise-deep-neural-networks.ipynb | mtamjidhossain/Kaggle-courses | 6 |
<jupyter_start><jupyter_text># Answering Questions for the Chinook Record Store
The Chinook record store has just signed a deal with a new record label, and you've been tasked with selecting the first three albums that will be added to the store, from a list of four. All four albums are by artists that don't have any tracks in the store right now - we have the artist names, and the genre of music they produce:
Artist Name - Genre
- Regal - Hip-Hop
- Red Tone - Punk
- Meteor and the Girls - Pop
- Slim Jim Bites - Blues
The record label specializes in artists from the USA, and they have given Chinook some money to advertise the new albums in the USA, so we're interested in finding out which genres sell the best in the USA.<jupyter_code>%%capture
%load_ext sql
%sql sqlite:///chinook.db<jupyter_output><empty_output><jupyter_text># Overview of the Data
Query the database to get a list of all tables and views in our database:<jupyter_code>%%sql
SELECT
name,
type
FROM sqlite_master
WHERE type IN ("table","view");<jupyter_output>Done.
<jupyter_text># Selecting Albums to Purchase
Write a query that returns each genre, with the number of tracks sold in the USA:
- in absolute numbers
- in percentages<jupyter_code>%%sql
SELECT il.* FROM invoice_line il
INNER JOIN invoice i ON i.invoice_id = il.invoice_id
INNER JOIN customer c on c.customer_id = i.customer_id
WHERE c.country = 'USA'
LIMIT 10;
%%sql
WITH usa_sold AS(SELECT il.* FROM invoice_line il
INNER JOIN invoice i ON i.invoice_id = il.invoice_id
INNER JOIN customer c on c.customer_id = i.customer_id
WHERE c.country = 'USA'
)
SELECT g.name genre,
count(us.invoice_line_id) tracks_sold
FROM usa_sold us
INNER JOIN track t ON t.track_id = us.track_id
INNER JOIN genre g ON g.genre_id = t.genre_id
GROUP BY 1
ORDER BY 2;
%%sql
WITH usa_sold AS(SELECT il.* FROM invoice_line il
INNER JOIN invoice i ON i.invoice_id = il.invoice_id
INNER JOIN customer c on c.customer_id = i.customer_id
WHERE c.country = 'USA'
)
SELECT g.name genre,
count(us.invoice_line_id) tracks_sold,
cast(count(us.invoice_line_id) AS FLOAT) / (
SELECT COUNT(*) from usa_sold
) percentage_sold
FROM usa_sold us
INNER JOIN track t ON t.track_id = us.track_id
INNER JOIN genre g ON g.genre_id = t.genre_id
GROUP BY 1
ORDER BY 2 DESC
LIMIT 10;
<jupyter_output>Done.
<jupyter_text>Based on the table above and the four options mentioned at the beginning, the three albums to purchase should be:
- Red Tone - Punk
- Slim Jim Bites - Blues
- Meteor and the Girls - Pop<jupyter_code>%%sql
WITH usa_sold AS(SELECT il.* FROM invoice_line il
INNER JOIN invoice i ON i.invoice_id = il.invoice_id
INNER JOIN customer c on c.customer_id = i.customer_id
WHERE c.country = 'USA'
)
SELECT DISTINCT t.composer artist_name,
g.name genre,
count(us.invoice_line_id) tracks_sold,
cast(count(us.invoice_line_id) AS FLOAT) / (
SELECT COUNT(*) from usa_sold
) percentage_sold
FROM usa_sold us
INNER JOIN track t ON t.track_id = us.track_id
INNER JOIN genre g ON g.genre_id = t.genre_id
WHERE g.name = 'Rock'
GROUP BY 1
ORDER BY 3 DESC
LIMIT 5;
<jupyter_output>Done.
<jupyter_text>Rock, Alternative & Punk, and Metal have the most tracks sold. Adding more artists along the lines of The Rolling Stones and Nirvana would be beneficial after adding the three albums from above.
# Analyzing Employee Sales Performance
Write a query that finds the total dollar amount of sales assigned to each sales support agent within the company. Add any extra attributes for that employee that you find are relevant to the analysis.<jupyter_code>%%sql
SELECT SUM(i.total) total_sales,
c.support_rep_id
FROM invoice i
INNER JOIN customer c ON c.customer_id = i.customer_id
GROUP BY 2<jupyter_output>Done.
<jupyter_text>There is data on three sales agents.<jupyter_code>%%sql
SELECT e.first_name ||' '|| e.last_name employee_name,
e.title
FROM employee e<jupyter_output>Done.
<jupyter_text>Confirming the three sales agents.<jupyter_code>%%sql
WITH total_sales_report AS
(
SELECT SUM(i.total) total,
c.support_rep_id
FROM invoice i
INNER JOIN customer c ON c.customer_id = i.customer_id
GROUP BY 2
)
SELECT e.first_name ||' '|| e.last_name employee_name,
e.hire_date,
SUM(tsr.total) total_sales
FROM total_sales_report tsr
INNER JOIN employee e ON e.employee_id = tsr.support_rep_id
GROUP BY 1;<jupyter_output>Done.
<jupyter_text>Steve started six months later, so that's why his numbers are a bit behind.
# Analyzing Sales by Country<jupyter_code>%%sql
WITH country_or_other AS
(
SELECT
CASE
WHEN (
SELECT count(*)
FROM customer
where country = c.country
) = 1 THEN "Other"
ELSE c.country
END AS country,
c.customer_id,
il.*
FROM invoice_line il
INNER JOIN invoice i ON i.invoice_id = il.invoice_id
INNER JOIN customer c ON c.customer_id = i.customer_id
)
SELECT
country,
customers,
total_sales,
average_order,
customer_lifetime_value
FROM
(
SELECT
country,
count(distinct customer_id) customers,
SUM(unit_price) total_sales,
SUM(unit_price) / count(distinct customer_id) customer_lifetime_value,
SUM(unit_price) / count(distinct invoice_id) average_order,
CASE
WHEN country = "Other" THEN 1
ELSE 0
END AS sort
FROM country_or_other
GROUP BY country
ORDER BY sort ASC, total_sales DESC
);<jupyter_output>Done.
| no_license | /chinook_store_sql.ipynb | EdsTyping/chinook_record_store_sql | 8 |
<jupyter_start><jupyter_text>## Surrogate Models & Helper Functions<jupyter_code># Note: the import cell is not included in this excerpt; the imports below are what the
# code in this notebook relies on. `bbobbenchmarks` providing F16 is an assumption.
from collections import namedtuple

import numpy as np
import pandas as pd
import pyDOE
import cma
import sklearn.svm
from scipy import stats
from sklearn.preprocessing import PolynomialFeatures, MinMaxScaler
from sklearn.linear_model import ElasticNet
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF
import bbobbenchmarks as bn  # assumed source of the F16 benchmark function

ValueRange = namedtuple('ValueRange', ['min', 'max'])
def determinerange(values):
"""Determine the range of values in each dimension"""
return ValueRange(np.min(values, axis=0), np.max(values, axis=0))
def linearscaletransform(values, *, range_in=None, range_out=ValueRange(0, 1), scale_only=False):
"""Perform a scale transformation of `values`: [range_in] --> [range_out]"""
if range_in is None:
range_in = determinerange(values)
elif not isinstance(range_in, ValueRange):
range_in = ValueRange(*range_in)
if not isinstance(range_out, ValueRange):
range_out = ValueRange(*range_out)
scale_out = range_out.max - range_out.min
scale_in = range_in.max - range_in.min
if scale_only:
scaled_values = (values / scale_in) * scale_out
else:
scaled_values = (values - range_in.min) / scale_in
scaled_values = (scaled_values * scale_out) + range_out.min
return scaled_values
''' F16 '''
def F16(X):
f = bn.F16()
X = np.array(X)
return f(X)
''' Latin HyperCube Sampling Design of Experiment '''
def DOE(n_obs, dim):
np.random.seed(0)
lhd = pyDOE.lhs(n=dim, samples=n_obs, criterion='m')
X = [lhd[:,idx] for idx in range(dim)]
return X
def create_basis_function(data):
true = np.array(data['Y'])
data = pd.DataFrame(np.atleast_2d(PolynomialFeatures(degree=2).fit_transform(data.iloc[:,:-1])))
data['Y'] = pd.Series(true)
return data
''' Create Basis Functions '''
def create_function_basis(x):
return np.atleast_2d(PolynomialFeatures(degree=2).fit_transform(x.reshape(1,-1)))
''' Elastic Net Regression '''
def elastic_net(train_data,test_data):
scaler = MinMaxScaler().fit(np.r_[train_data.iloc[:,:-1].values])
regr = ElasticNet(alpha= 0.12 ,random_state=0 , l1_ratio=0.81, fit_intercept =True, max_iter=3000,selection='random').fit(scaler.transform ( np.array(train_data.iloc[:,:-1])) , np.array(train_data.iloc[:,-1]))
pred = regr.predict(scaler.transform(test_data))
def predict(scaler, regr):
def __predict__(x):
x = create_function_basis(x)
return regr.predict(scaler.transform(x))
return __predict__
return regr,pred, predict(scaler, regr)
''' Kriging'''
def kriging(train_data,test_data):
kernel = RBF()
scaler = MinMaxScaler().fit(np.r_[train_data.iloc[:,:-1].values])
gpr = GaussianProcessRegressor(kernel=kernel,n_restarts_optimizer= 15,random_state=0,
normalize_y=True ).fit(scaler.transform(train_data.iloc[:,:-1]), train_data.iloc[:,-1])
pred = gpr.predict(scaler.transform(test_data))
def predict(scaler, gpr):
def __predict__(x):
x = np.atleast_2d(x)
return gpr.predict(scaler.transform(x))
return __predict__
return gpr,pred, predict(scaler,gpr)
''' Support Vector Regression'''
def _SVR(train_data,test_data):
scaler = MinMaxScaler().fit(np.r_[train_data.iloc[:,:-1].values])
gpr = sklearn.svm.SVR(kernel='rbf', gamma = 37.213462 , C = 1000.000000 ,max_iter=1500).fit( scaler.transform(train_data.iloc[:,:-1]), train_data.iloc[:,-1])
pred = gpr.predict(scaler.transform(test_data))
def predict(scaler, gpr):
def __predict__(x):
x = np.atleast_2d(x)
return gpr.predict(scaler.transform(x))
return __predict__
return gpr,pred, predict(scaler,gpr)<jupyter_output><empty_output><jupyter_text>## Load Training and Test Data Set initially Generated<jupyter_code>path = "train_16_4000Samples.csv"
train = pd.read_csv(path).iloc[:,1:]
test = pd.read_csv('test_16_800Samples.csv').iloc[:,1:]
true = np.array(test['Y'])<jupyter_output><empty_output><jupyter_text>## Surrogate Models<jupyter_code>model_kri , pred_kri , predict_kri = kriging(train,test.iloc[:,:-1])
model_svr , pred_svr , predict_svr = _SVR(train,test.iloc[:,:-1])
train = create_basis_function(train)
test = create_basis_function(test)
model_eln , pred_eln , predict_eln = elastic_net(train,test.iloc[:,:-1])<jupyter_output>/usr/local/lib/python3.6/dist-packages/sklearn/svm/_base.py:231: ConvergenceWarning: Solver terminated early (max_iter=1500). Consider pre-processing your data with StandardScaler or MinMaxScaler.
% self.max_iter, ConvergenceWarning)
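Before running the optimizer, the three surrogates can be sanity-checked against the held-out test targets stored in `true`; a minimal sketch (the imports here are only for this check):

```
import numpy as np
from sklearn.metrics import mean_squared_error

for name, pred in [('Kriging', pred_kri), ('SVR', pred_svr), ('Elastic Net', pred_eln)]:
    rmse = np.sqrt(mean_squared_error(true, pred))
    print('%-12s test RMSE: %.4f' % (name, rmse))
```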
<jupyter_text>## CMA-ES<jupyter_code>Columns = ['Kri' , 'SVR' , 'ELN' ]
Cols = []
for j in range(len(Columns)):
for i in range(1,201):
Cols.append(Columns[j]+'_X'+str(i))
const = [ [-5] * 200, [5] * 200 ]
opt = cma.CMAOptions()
opt.set("bounds", const)
opt.set ("seed" , 0)
opt.set ("maxfevals" , 200000)
n_obs , dim = 30, 200
G = DOE(n_obs, dim)
G = [ linearscaletransform(G[idx] , range_out=(-5,5)) for idx in range(dim) ]
G = [ G[idx].reshape(n_obs,1) for idx in range(len(G)) ]
X_Values = np.zeros([10,600])
for i in range(10, 20):
print ('Run : '+ str(i))
min_kri = cma.fmin(predict_kri , np.concatenate(G, 1)[i] , 2.5 , options=opt) [0]
min_svr = cma.fmin(predict_svr , np.concatenate(G, 1)[i] , 2.5 , options=opt) [0]
min_eln = cma.fmin(predict_eln , np.concatenate(G, 1)[i] , 2.5 , options=opt) [0]
X_Values [i-10,:] = list(min_kri)+list(min_svr)+list(min_eln)
X_Values = pd.DataFrame(X_Values)
X_Values.columns = Cols
X_Values.to_csv('sample_data\\F16_X_Values_first.csv')
Krig_Fun = np.zeros(10)
SVM_Fun = np.zeros(10)
ELN_Fun = np.zeros(10)
for i in range(X_Values.shape[0]):
Krig_Fun [i] = F16(X_Values.iloc[i,:200])
SVM_Fun [i] = F16(X_Values.iloc[i,200:400])
ELN_Fun [i] = F16(X_Values.iloc[i,400:600])
print ('Kriging')
print (stats.mode(Krig_Fun) , np.std(Krig_Fun))
print ('SVM')
print (stats.mode(SVM_Fun) , np.std(SVM_Fun))
print ('ELN')
print (stats.mode(ELN_Fun) , np.std(ELN_Fun))
<jupyter_output><empty_output>
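If a broader summary over the 10 restarts is useful, the mean and median can be reported alongside the mode and standard deviation printed above; a quick sketch:

```
for name, vals in [('Kriging', Krig_Fun), ('SVM', SVM_Fun), ('ELN', ELN_Fun)]:
    print('%-8s mean %.4f  median %.4f  std %.4f'
          % (name, np.mean(vals), np.median(vals), np.std(vals)))
```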
| no_license | /Original - Optimality/200D/second/F16_200_original.ipynb | SibghatUllah13/Deep-Latent_Variable_Models-for-dimensionality-reduction-in-surrogate-assisted-optimization | 4 |
<jupyter_start><jupyter_text>## Download rnn_merged.zip & rnn_embed.zip from https://drive.google.com/drive/folders/1yO_W-m0fF_PludrnScdgyTGsPFoDsA6_?usp=sharing and unzip them into the same folder as this file
## Also download train_jpg.zip & test_jpg.zip from the competition website<jupyter_code>import pandas as pd
import tensorflow as tf
from keras.preprocessing import text, sequence
import numpy as np
from keras.layers import Input, SpatialDropout1D,Dropout, GlobalAveragePooling1D, GlobalMaxPooling1D, \
CuDNNGRU, GRU, Bidirectional, LSTM, Dense, Embedding, concatenate, Embedding, \
Flatten, Activation, BatchNormalization, regularizers, Conv1D, Conv2D, MaxPooling2D
from keras.constraints import max_norm
from keras.initializers import Orthogonal
from keras.models import Model
from keras.optimizers import Adam
from keras.callbacks import ModelCheckpoint, LambdaCallback, Callback, LearningRateScheduler
import keras.backend as K
import numpy as np
from sklearn import metrics
from sklearn.model_selection import train_test_split
import os
import pickle
import gc; gc.enable()
import matplotlib
import matplotlib.pyplot as plt
%matplotlib inline
import string
import nltk
from nltk.corpus import stopwords
from nltk.stem.snowball import RussianStemmer
from scipy.stats import boxcox
import re
#from tqdm import tqdm<jupyter_output>E:\Anaconda3\envs\tensorflow\lib\site-packages\h5py\__init__.py:36: FutureWarning: Conversion of the second argument of issubdtype from `float` to `np.floating` is deprecated. In future, it will be treated as `np.float64 == np.dtype(float).type`.
from ._conv import register_converters as _register_converters
Using TensorFlow backend.
<jupyter_text>### Check GPU Availability<jupyter_code>sess = tf.Session(config=tf.ConfigProto(log_device_placement=True))
from tensorflow.python.client import device_lib
print(device_lib.list_local_devices())
K.tensorflow_backend._get_available_gpus()<jupyter_output>[name: "/device:CPU:0"
device_type: "CPU"
memory_limit: 268435456
locality {
}
incarnation: 339512226104842527
, name: "/device:GPU:0"
device_type: "GPU"
memory_limit: 3174131302
locality {
bus_id: 1
links {
}
}
incarnation: 17746058336949755705
physical_device_desc: "device: 0, name: GeForce GTX 1050 Ti, pci bus id: 0000:01:00.0, compute capability: 6.1"
]
<jupyter_text>### Preprocess Training and Testing Data<jupyter_code>seed = 411
rnn_train_epochs = 10
batch_size=128 # 32 or 64 is good (too huge for my PC), 128 is worse in the past experiments
cpu_count=4
features = pickle.load(open('rnn_merged.pkl', 'rb'))
features.keys()
train = features['train']
test = features['test']
renamed_cols = []
count = 0
for col in train.columns:
if 'cat_features_user_id_category_name' in col:
col = 'cat_features_user_id_category_name_'+str(count)
count += 1
renamed_cols.append(col)
train.columns = renamed_cols
test.columns = renamed_cols
train_len = train.shape[0]
train_y = features['y_train']
categorical = features['categorical']
numerical = [f for f in train.columns if f not in categorical]
features = numerical + categorical
train.columns.tolist()
# remove features: text, image, other embeddings\feature engineerings
remove_cols = [
'cat_features_user_id_category_name_0',
'cat_features_user_id_category_name_1',
'cat_features_user_id_category_name_2',
'cat_features_user_id_category_name_3',
'cat_features_user_id_category_name_4',
'cat_features_user_id_category_name_5',
'cat_features_user_id_category_name_6',
'cat_features_user_id_category_name_7',
'cat_features_user_id_category_name_8',
'cat_features_user_id_category_name_9',
'cat_features_user_id_category_name_10',
'cat_features_user_id_category_name_11',
'cat_features_user_id_category_name_12',
'cat_features_user_id_category_name_13',
'cat_features_user_id_category_name_14',
'cat_features_user_id_category_name_15',
'cat_features_user_id_category_name_16',
'cat_features_user_id_category_name_17',
'cat_features_user_id_category_name_18',
'cat_features_user_id_category_name_19',
'cat_features_user_id_category_name_20',
'cat_features_user_id_category_name_21',
'cat_features_user_id_category_name_22',
'cat_features_user_id_category_name_23',
'cat_features_user_id_category_name_24',
'cat_features_user_id_category_name_25',
'cat_features_user_id_category_name_26',
'cat_features_user_id_category_name_27',
'cat_features_user_id_category_name_28',
'cat_features_user_id_category_name_29',
'cat_features_user_id_category_name_30',
'cat_features_user_id_category_name_31',
'cat_features_user_id_category_name_32',
'cat_features_user_id_category_name_33',
'cat_features_user_id_category_name_34',
'cat_features_user_id_category_name_35',
'cat_features_user_id_category_name_36',
'cat_features_user_id_category_name_37',
'cat_features_user_id_category_name_38',
'cat_features_user_id_category_name_39',
'cat_features_user_id_category_name_40',
'cat_features_user_id_category_name_41',
'cat_features_user_id_category_name_42',
'cat_features_user_id_category_name_43',
'cat_features_user_id_category_name_44',
'cat_features_user_id_category_name_45',
'cat_features_user_id_category_name_46',
'title_tfidf_svd_1',
'title_tfidf_svd_2',
'title_tfidf_svd_3',
'title_tfidf_svd_4',
'title_tfidf_svd_5',
'description_tfidf_svd_1',
'description_tfidf_svd_2',
'description_tfidf_svd_3',
'description_tfidf_svd_4',
'description_tfidf_svd_5',
'region_mean_price',
'region_mean_image_top_1',
'region_mean_item_seq_number',
'region_mean_price_pred',
'region_mean_price_pred_all',
'region_mean_ridge_preds',
'city_mean_price',
'city_mean_image_top_1',
'city_mean_item_seq_number',
'city_mean_price_pred',
'city_mean_price_pred_all',
'city_mean_ridge_preds',
'parent_category_name_mean_price',
'parent_category_name_mean_image_top_1',
'parent_category_name_mean_item_seq_number',
'parent_category_name_mean_price_pred',
'parent_category_name_mean_price_pred_all',
'parent_category_name_mean_ridge_preds',
'category_name_mean_price',
'category_name_mean_image_top_1',
'category_name_mean_item_seq_number',
'category_name_mean_price_pred',
'category_name_mean_price_pred_all',
'category_name_mean_ridge_preds',
'user_type_mean_price',
'user_type_mean_image_top_1',
'user_type_mean_item_seq_number',
'user_type_mean_price_pred',
'user_type_mean_price_pred_all',
'user_type_mean_ridge_preds',
'param_1_mean_price',
'param_1_mean_image_top_1',
'param_1_mean_item_seq_number',
'param_1_mean_price_pred',
'param_1_mean_price_pred_all',
'param_1_mean_ridge_preds',
'param_2_mean_price',
'param_2_mean_image_top_1',
'param_2_mean_item_seq_number',
'param_2_mean_price_pred',
'param_2_mean_price_pred_all',
'param_2_mean_ridge_preds',
'param_3_mean_price',
'param_3_mean_image_top_1',
'param_3_mean_item_seq_number',
'param_3_mean_price_pred',
'param_3_mean_price_pred_all',
'param_3_mean_ridge_preds',
'user_id_nunique_parent_category_name',
'user_id_nunique_category_name',
'user_id_nunique_param_1',
'user_id_nunique_param_2',
'user_id_nunique_param_3',
'user_id_nunique_activation_date',
'user_id_activation_date_count_item_id',
'image_top_1_nunique_item_id',
'image_top_1_nunique_user_id',
'image_top_1_nunique_category_name',
'image_top_1_nunique_param_1',
'image_top_1_nunique_item_seq_number',
'image_top_1_mean_price_pred',
'image_top_1_std_price_pred',
'image_top_1_mean_item_seq_number',
'user_id_mean_ridge_preds',
'user_id_category_name_mean_ridge_preds',
'user_id_image_top_1_mean_ridge_preds',
'user_id_category_name_sum_ridge_preds',
'cityxcatxusertypeitem_num',
'cityxcatxusertypecity_fm_factor_0',
'cityxcatxusertypecity_fm_factor_1',
'cityxcatxusertypecategory_name_fm_factor_0',
'cityxcatxusertypecategory_name_fm_factor_1',
'cityxcatxusertypeuser_type_fm_factor_0',
'cityxcatxusertypeuser_type_fm_factor_1',
'cityxcatxusertypecity_fm_bias',
'cityxcatxusertypecategory_name_fm_bias',
'cityxcatxusertypeuser_type_fm_bias',
'imgxcityxcatitem_num',
'imgxcityxcatimage_top_1_fm_factor_0',
'imgxcityxcatimage_top_1_fm_factor_1',
'imgxcityxcatcity_fm_factor_0',
'imgxcityxcatcity_fm_factor_1',
'imgxcityxcatcategory_name_fm_factor_0',
'imgxcityxcatcategory_name_fm_factor_1',
'imgxcityxcatimage_top_1_fm_bias',
'imgxcityxcatcity_fm_bias',
'imgxcityxcatcategory_name_fm_bias',
'imgxisqnxusertypeitem_num',
'imgxisqnxusertypeimage_top_1_fm_factor_0',
'imgxisqnxusertypeimage_top_1_fm_factor_1',
'imgxisqnxusertypeitem_seq_number_fm_factor_0',
'imgxisqnxusertypeitem_seq_number_fm_factor_1',
'imgxisqnxusertypeuser_type_fm_factor_0',
'imgxisqnxusertypeimage_top_1_fm_bias',
'imgxisqnxusertypeitem_seq_number_fm_bias',
'b_intensity_mean',
'b_intensity_median',
'b_intensity_std',
'g_intensity_mean',
'g_intensity_median',
'g_intensity_std',
'gray_intensity_mean',
'gray_intensity_median',
'gray_intensity_std',
'r_intensity_mean',
'r_intensity_median',
'r_intensity_std',
'nasnet_nima_med',
'nasnet_nima_std',
'nasnet_nima_max',
'nasnet_nima_min',
'nasnet_nima_1_quartile',
'nasnet_nima_3_quartile',
'nasnet_nima_13_quartile_diff',
'nasnet_nima_max_min_diff',
'nasnet_nima_non_max_mean',
'nasnet_nima_max_non_max_mean_diff',
]
train.drop(remove_cols, axis=1, inplace=True)
test.drop(remove_cols, axis=1, inplace=True)
for col in remove_cols:
if col in categorical:
categorical.remove(col)
if col in numerical:
numerical.remove(col)
features = numerical + categorical
train.loc[:, 'image'] = pd.read_csv('train.csv', usecols=['activation_date', 'image'], parse_dates=['activation_date']) \
.sort_values('activation_date').reset_index(drop=True)['image'].fillna('no-image')
test.loc[:, 'image'] = pd.read_csv('test.csv', usecols=['image'])['image'].fillna('no-image')
max_features = 500000
maxlen = 150
embed_size = 300
title_max_features = 200000
title_maxlen = 80
title_embed_size = 100
embed_info = pickle.load(open('rnn_embed.pkl', 'rb'))
embed_info.keys()
desc_embed_info = embed_info['desc_embed_info']
title_embed_info = embed_info['title_embed_info']
print('setup max info for embedding in categorical variables')
max_info = dict((col, train[col].max()+1) for col in categorical)<jupyter_output>setup max info for embedding in categorical variables
<jupyter_text>### Build RNN Model<jupyter_code>def root_mean_squared_error(y_true, y_pred):
return K.sqrt(K.mean(K.square(y_true - y_pred)))
from keras.engine.topology import Layer
from keras import initializers, regularizers, constraints
class Attention(Layer):
def __init__(self, step_dim,
W_regularizer=None, b_regularizer=None,
W_constraint=None, b_constraint=None,
bias=True, **kwargs):
self.supports_masking = True
self.init = initializers.get('glorot_uniform')
self.W_regularizer = regularizers.get(W_regularizer)
self.b_regularizer = regularizers.get(b_regularizer)
self.W_constraint = constraints.get(W_constraint)
self.b_constraint = constraints.get(b_constraint)
self.bias = bias
self.step_dim = step_dim
self.features_dim = 0
super(Attention, self).__init__(**kwargs)
def build(self, input_shape):
print(input_shape)
assert len(input_shape) == 3
self.W = self.add_weight((input_shape[-1],),
initializer=self.init,
name='{}_W'.format(self.name),
regularizer=self.W_regularizer,
constraint=self.W_constraint)
self.features_dim = input_shape[-1]
if self.bias:
self.b = self.add_weight((input_shape[1],),
initializer='zero',
name='{}_b'.format(self.name),
regularizer=self.b_regularizer,
constraint=self.b_constraint)
else:
self.b = None
self.built = True
def compute_mask(self, input, input_mask=None):
return None
def call(self, x, mask=None):
features_dim = self.features_dim
step_dim = self.step_dim
eij = K.reshape(K.dot(K.reshape(x, (-1, features_dim)),
K.reshape(self.W, (features_dim, 1))), (-1, step_dim))
if self.bias:
eij += self.b
eij = K.tanh(eij)
a = K.exp(eij)
if mask is not None:
a *= K.cast(mask, K.floatx())
a /= K.cast(K.sum(a, axis=1, keepdims=True) + K.epsilon(), K.floatx())
a = K.expand_dims(a)
weighted_input = x * a
return K.sum(weighted_input, axis=1)
def compute_output_shape(self, input_shape):
return input_shape[0], self.features_dim
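# Note: the Attention layer above is defined but not wired into build_model() below; the
# "bigru-attention" experiment mentioned in the CV/LB notes at the end of this notebook
# would attach it roughly like this (illustrative sketch, not the exact setup used):
#   text_gru = Bidirectional(CuDNNGRU(128, return_sequences=True))(text_emb)
#   text_att = Attention(maxlen)(text_gru)   # -> tensor of shape (batch, 256)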
def clip_rmse(true, prediction):
return np.sqrt(metrics.mean_squared_error(true, np.clip(prediction, 0., 1.)))
class NBatchEvalLogger(Callback):
def __init__(self, display, val_X, val_y, save_path=None, save_start=1000):
self.step = 0
self.display = display
self.val_X = val_X
self.val_y = val_y
self.best_loss = None
self.save_path = save_path
self.save_start = save_start
self.record_count = 0
def on_batch_end(self, batch, logs={}):
self.step += 1
if self.step % self.display == 0 and self.step >= self.save_start:
#loss, metric = self.model.evaluate(self.val_X, self.val_y, batch_size=128, verbose=1)
prediction = self.model.predict(self.val_X, batch_size=128, verbose=0)
loss = clip_rmse(self.val_y, prediction)
if self.best_loss is None:
self.best_loss = loss
else:
if loss < self.best_loss:
self.best_loss = loss
if self.save_path is not None:
self.model.save(self.save_path, overwrite=True)
self.record_count += 1
print('\rstep: {} val loss={:.5f}, best loss={:.5f}'.format(self.step, loss, self.best_loss))
import keras
from copy import deepcopy as cp
import os
from zipfile import ZipFile
import cv2
import numpy as np
import pandas as pd
from dask import bag, threaded
from dask.diagnostics import ProgressBar
import matplotlib.pyplot as plt
from keras.preprocessing.image import load_img, img_to_array
from keras.applications.resnet50 import preprocess_input
import concurrent.futures
from multiprocessing.pool import ThreadPool
class DataGenerator(keras.utils.Sequence):
#'Generates data for Keras'
def __init__(self, list_IDs, X, y, img_arch, img_path, batch_size=32, shuffle=True, is_train=True):
#'Initialization'
self.batch_size = batch_size
self.X = X
self.y = y
self.list_IDs = list_IDs
self.shuffle = shuffle
self.img_path = img_path
self.is_train = is_train
self.on_epoch_end()
self.zipped = ZipFile(img_arch)
#print('file names:\n', self.zipped.namelist()[1:10], '\n...')
self.img_path = img_path
global cpu_count
self.pool = ThreadPool(cpu_count)
def __getstate__(self):
""" This is called before pickling. """
state = self.__dict__.copy()
del state['zipped']
return state
def __setstate__(self, state):
""" This is called while unpickling. """
self.__dict__.update(state)
def __len__(self):
#'Denotes the number of batches per epoch'
return int(np.ceil(len(self.list_IDs) / self.batch_size))
def __getitem__(self, index):
#'Generate one batch of data'
# Generate indexes of the batch
start = index*self.batch_size
end = min((index+1)*self.batch_size, len(self.indexes))
indexes = self.indexes[start: end]
# Generate data
return self.__data_generation(indexes)
def on_epoch_end(self):
#'Updates indexes after each epoch'
self.indexes = cp(list(self.list_IDs))
if self.shuffle == True:
np.random.shuffle(self.indexes)
def load_img_from_zipped(self, img_id, i, imgs_holder):
invalid_img_ids = ['4f029e2a00e892aa2cac27d98b52ef8b13d91471f613c8d3c38e3f29d4da0b0c',
'8513a91e55670c709069b5f85e12a59095b802877715903abef16b7a6f306e58',
'60d310a42e87cdf799afcd89dc1b11ae3fdc3d0233747ec7ef78d82c87002e83',
'b98b291bd04c3d92165ca515e00468fd9756af9a8f1df42505deed1dcfb5d7ae']
try:
if img_id in invalid_img_ids or img_id == 'no-image':
pass
else:
exfile = self.zipped.read(self.img_path+img_id+'.jpg')
arr = np.frombuffer(exfile, np.uint8)
imz = cv2.imdecode(arr, flags=cv2.IMREAD_UNCHANGED)
imz = cv2.resize(imz, (224,224), interpolation=cv2.INTER_AREA)
imgs_holder[i] = img_to_array(imz)
except:
print(img_id, ' is invalid')
pass
return None
def parallel_load_imgs(self, img_ids, wait=True):
imgs_holder = np.zeros((len(img_ids), 224, 224, 3))
'''
for i, im_id in enumerate(img_ids):
self.load_img_from_zipped(im_id, i, imgs_holder)
'''
self.res = [self.pool.apply_async(self.load_img_from_zipped, (im_id, i, imgs_holder)) for i, im_id in enumerate(img_ids)]
if wait:
for r in self.res:
r.get()
#print(imgs_holder)
imgs_holder = preprocess_input(imgs_holder) # adjust to mean of rgb to some value
return imgs_holder
def __data_generation(self, list_IDs_temp):
#'Generates data containing batch_size samples' # X : (n_samples, *dim, n_channels)
# Generate data
X = dict((col, self.X.loc[list_IDs_temp, col].values) for col in features)
X['desc'] = desc_embed_info['text'][list_IDs_temp,:]
X['title'] = title_embed_info['text'][list_IDs_temp,:]
X['imgs'] = self.parallel_load_imgs(self.X.loc[list_IDs_temp, 'image'].values)
if self.is_train:
y = cp(self.y[list_IDs_temp])
return X, y
else:
return X
# 'train_jpg.zip', 'data/competition_files/train_jpg/',
# debug use
'''
zipped = ZipFile('train_jpg.zip')
print(zipped.namelist()[1:10])
img_id = '2809fd6afd6d3cae4dd4ad93a7f905a0db32292f4df4b3f19fa5492e08cbfd90'
target_size=(224,224)
try:
exfile = zipped.read('data/competition_files/train_jpg/'+img_id+'.jpg')
arr = np.frombuffer(exfile, np.uint8)
imz = cv2.imdecode(arr, flags=cv2.IMREAD_UNCHANGED)
imz = cv2.resize(imz, target_size, interpolation=cv2.INTER_AREA)
except:
print(img_id, ' is invalid')
imz = None
imz
'''
def build_model(categorical_features, numerical_features):
# non-cat features
non_cat_inputs = []
for col in numerical_features:
f = Input(shape=[1], name=col)
non_cat_inputs.append(f)
# cat features
cat_inputs = []
cat_embeds = []
for col in categorical_features:
f = Input(shape=[1], name=col)
embed_dim = max_info[col].max()
if max_info[col] > 10:
reduced_dim = 10
else:
reduced_dim = 1
embed_f = Embedding(embed_dim, reduced_dim)(f)
flatten_f = Flatten()(embed_f)
cat_inputs.append(f)
cat_embeds.append(flatten_f)
# text features: architecture of text to try here!!!
# description
text_inp = Input(shape = (maxlen, ), name='desc')
text_emb = Embedding(desc_embed_info['nb_words'], embed_size, weights = [desc_embed_info['emb_matrix']],
input_length = maxlen, trainable = False)(text_inp)
text_emb = SpatialDropout1D(0.3)(text_emb)
text_gru = Bidirectional(CuDNNGRU(128, return_sequences = True))(text_emb)
text_gru = Conv1D(64, kernel_size = 3, padding = "valid", kernel_initializer = "glorot_uniform")(text_gru)
text_gru_avg = GlobalAveragePooling1D()(text_gru)
text_gru_max = GlobalMaxPooling1D()(text_gru)
text_gru = concatenate([text_gru_avg, text_gru_max])
text_gru = Dropout(0.1)(text_gru)
# title
title_inp = Input(shape = (title_maxlen, ), name='title')
title_emb = Embedding(title_embed_info['nb_words'], title_embed_size, weights = [title_embed_info['emb_matrix']],
input_length = title_maxlen, trainable = False)(title_inp)
title_emb = SpatialDropout1D(0.1)(title_emb)
title_gru = Bidirectional(CuDNNGRU(32, return_sequences = True))(title_emb)
title_gru = Conv1D(16, kernel_size = 3, padding = "valid", kernel_initializer = "glorot_uniform")(title_gru)
title_gru_avg = GlobalAveragePooling1D()(title_gru)
title_gru_max = GlobalMaxPooling1D()(title_gru)
title_gru = concatenate([title_gru_avg, title_gru_max])
title_gru = Dropout(0.1)(title_gru)
# add image architecture
# reference: https://keras.io/getting-started/functional-api-guide/#more-examples, Visual question answering model
'''
img_inp = Input(shape = (224, 224, 3 ), name='imgs')
img_ch = Conv2D(64, (3, 3), activation='relu', padding='same', W_constraint=max_norm(3))(img_inp)
img_ch = Conv2D(64, (3, 3), activation='relu')(img_ch)
img_ch = MaxPooling2D((2, 2))(img_ch)
#img_ch = Conv2D(128, (3, 3), activation='relu', padding='same', W_constraint=max_norm(3))(img_ch)
#img_ch = Conv2D(128, (3, 3), activation='relu')(img_ch)
#img_ch = MaxPooling2D((2, 2))(img_ch)
#img_ch = Conv2D(256, (3, 3), activation='relu', padding='same', W_constraint=max_norm(3))(img_ch)
#img_ch = Conv2D(256, (3, 3), activation='relu')(img_ch)
#img_ch = Conv2D(256, (3, 3), activation='relu')(img_ch)
#img_ch = MaxPooling2D((2, 2))(img_ch)
img_ch = Flatten()(img_ch)
img_ch = Dense(64, activation='relu')(img_ch)
'''
# merge each branch: non-cat, cat, text, img
concat_main = non_cat_inputs+cat_embeds+[text_gru, title_gru]
main = concatenate(concat_main)
main = BatchNormalization()(main)
main = Dropout(0.1)(main)
main = BatchNormalization()(Dense(256, activation='relu')(main))
main = Dropout(0.1)(main)
main = BatchNormalization()(Dense(64, activation='relu')(main))
out = Dense(1, activation = "sigmoid")(main)
concat_input = non_cat_inputs+cat_inputs+[text_inp, title_inp]
model = Model(concat_input, out)
model.regularizers = [regularizers.l2(0.0001)]
model.compile(optimizer = Adam(lr=0.001), loss = root_mean_squared_error,
metrics =[root_mean_squared_error])
model.summary()
return model<jupyter_output><empty_output><jupyter_text>### Training<jupyter_code>from sklearn.model_selection import KFold
import warnings; warnings.filterwarnings('ignore')
train_indices = np.arange(0, train_len)
test_indices = np.arange(0, test.shape[0])
from keras_tqdm import TQDMNotebookCallback
from ipywidgets import IntProgress
start_fold = 0 # <= 0: train from fold 1; > 0: resume training from fold = start_fold
resume_file_prefix = '0619_rnn' # whatever we like
if start_fold > 0:
import pickle
ret = pickle.load(open(resume_file_prefix+'_oof_val_pred', 'rb'))
ret_test = pickle.load(open(resume_file_prefix+'_oof_test_pred', 'rb'))
print(ret)
print(ret_test)
else:
ret = np.zeros((train.shape[0],))
ret_test = np.zeros((test.shape[0],))
fold = 0
for tr_ix, val_ix in KFold(5, shuffle=True, random_state=seed).split(train_indices):
fold += 1
if start_fold > 0 and fold < start_fold:
continue
else:
pass
model = build_model(categorical, numerical)
file_path = "rnn_weights/model_final_fold_{}.hdf5".format(fold)
# customized batch loader
training_generator = DataGenerator(tr_ix, train, train_y,
'train_jpg.zip', 'data/competition_files/train_jpg/',
batch_size=batch_size, shuffle=True)
validation_generator = DataGenerator(val_ix, train, train_y,
'train_jpg.zip', 'data/competition_files/train_jpg/',
batch_size=batch_size, shuffle=False)
lr_schd = LearningRateScheduler(lambda epoch: 0.001*(0.2**(epoch//6)), verbose=1)
check_point = ModelCheckpoint(file_path, monitor = "val_loss", mode = "min", save_best_only = True, verbose = 1)
history = model.fit_generator(generator=training_generator,
validation_data=validation_generator,
use_multiprocessing=False,
workers=1,
epochs=rnn_train_epochs,
verbose = 0,
callbacks = [lr_schd, check_point, TQDMNotebookCallback(leave_inner=True, leave_outer=True)])
# Predict val + test oofs
model.load_weights(file_path) # load weight with best validation score
del validation_generator
validation_generator = DataGenerator(val_ix, train, None,
'train_jpg.zip', 'data/competition_files/train_jpg/',
batch_size=batch_size, shuffle=False, is_train=False)
test_generator = DataGenerator(test_indices, test, None,
'test_jpg.zip', 'data/competition_files/test_jpg/',
batch_size=batch_size, shuffle=False, is_train=False)
ret[val_ix] = model.predict_generator(validation_generator, use_multiprocessing=False, workers=1).reshape((len(val_ix),))
ret_test += model.predict_generator(test_generator, use_multiprocessing=False, workers=1).reshape((ret_test.shape[0],))
del model, history, training_generator, validation_generator, test_generator; gc.collect()
ret_test /= 5
# uncomment these to dump files if OOM (out-of-mem) happens
import pickle
pickle.dump(ret, open(resume_file_prefix+'_oof_val_pred', 'wb'))
pickle.dump(ret_test, open(resume_file_prefix+'_oof_test_pred', 'wb'))
# public kernel: cv = .2220, lb = .2247
# bigru-conv1d: cv =.2185 , lb = .2235
# bigru-attention: cv =.2186 , lb = .2235
# 2gru: lb: .2239
# self-trained wordvec: cv .217232, lb: .2229
# +partial new features: cv .216326, lb: <jupyter_output><empty_output><jupyter_text>### Generate OOFs and Submissions<jupyter_code>prefix = 'selftrained_bigru_conv1d_merged'
pd.DataFrame(data=ret, columns=[prefix+'_rnn_pred']).to_csv(prefix+'_rnn_oof_val_pred.csv', index=False)
pd.DataFrame(data=ret_test, columns=[prefix+'_rnn_pred']).to_csv(prefix+'_rnn_oof_test_pred.csv', index=False)
subm = pd.read_csv('sample_submission.csv')
subm['deal_probability'] = np.clip(ret_test, 0, 1)
subm.to_csv(prefix+'_rnn_submission.csv', index=False)<jupyter_output><empty_output>
| no_license | /RNN Self-Trained WordVec + Image + Merge Features (with Fast Loading)-Copy1.ipynb | tnmichael309/kaggle-avito-demand-challenge | 6 |
<jupyter_start><jupyter_text># Consume deployed webservice via REST
Demonstrates the usage of a deployed model via plain REST.
REST is language-agnostic, so you should be able to query from any REST-capable programming language.## Configuration<jupyter_code>from environs import Env
env = Env()
env.read_env("foundation.env")
env.read_env("service-principals.env")
# image to test
IMAGE_TO_TEST = "mnist_fashion/04_consumption/random_test_images/random-test-image-1601.png"
# endpoint of the scoring webservice
SCORING_URI = "<...use your own..., eg. https://....westeurope.cloudapp.azure.com:443/api/v1/service/mnist-fashion-service/score>"
# auth method, either "Token", "Keys" or "None".
# also specify additional values depending on auth method
AUTH_METHOD = "Token"
if AUTH_METHOD == "Keys":
AUTH_KEY = "<add your key here>"
elif AUTH_METHOD == "Token":
REGION = "eastus"
SUBSCRIPTION_ID = env("SUBSCRIPTION_ID")
RESOURCE_GROUP = env("RESOURCE_GROUP")
WORKSPACE_NAME = env("WORKSPACE_NAME")
SERVICE_NAME = "mnist-fashion-service"
CONSUME_MODEL_SP_TENANT_ID = env("CONSUME_MODEL_SP_TENANT_ID")
CONSUME_MODEL_SP_CLIENT_ID = env("CONSUME_MODEL_SP_CLIENT_ID")
CONSUME_MODEL_SP_CLIENT_SECRET = env("CONSUME_MODEL_SP_CLIENT_SECRET")
elif AUTH_METHOD == "None":
pass<jupyter_output><empty_output><jupyter_text>## Load a random image and plot it<jupyter_code>import matplotlib.pyplot as plt
from PIL import Image
image = Image.open(IMAGE_TO_TEST)
plt.figure()
plt.imshow(image)
plt.colorbar()
plt.grid(False)
plt.show()<jupyter_output><empty_output><jupyter_text>## Invoke the webservice and show result<jupyter_code>import requests
import json
# --- get input data
input_data = open(IMAGE_TO_TEST, "rb").read()
# alternatively for JSON input
#input_data = json.dumps({"x": 4711})
# --- get headers
# Content-Type
# for binary data
headers = {"Content-Type": "application/octet-stream"}
# alternatively for JSON data
#headers = {"Content-Type": "application/json"}
# Authorization
if AUTH_METHOD == "Token":
# get an access token for the service principal to access Azure
azure_access_token = requests.post(
f"https://login.microsoftonline.com/{CONSUME_MODEL_SP_TENANT_ID}/oauth2/token",
headers={"Content-Type": "application/x-www-form-urlencoded"},
data="grant_type=client_credentials"
+ "&resource=https%3A%2F%2Fmanagement.azure.com%2F"
+ f"&client_id={CONSUME_MODEL_SP_CLIENT_ID}"
+ f"&client_secret={CONSUME_MODEL_SP_CLIENT_SECRET}",
).json()["access_token"]
# use that token to get another token for accessing the webservice
# note: the token is only valid for a certain period of time.
# after that time, a new token has to be used. the logic
# to do this, is not implemented here yet. you can check
# the current time against the refresh after time to know
# if a new token is required. refreshAfter and expiryOn
# are UNIX timestamps. use time.time() to get the current
# timestamp.
token_response = requests.post(
f"https://{REGION}.modelmanagement.azureml.net/modelmanagement/v1.0/subscriptions/{SUBSCRIPTION_ID}/resourceGroups/{RESOURCE_GROUP}/providers/Microsoft.MachineLearningServices/workspaces/{WORKSPACE_NAME}/services/{SERVICE_NAME}/token",
headers={"Authorization": f"Bearer {azure_access_token}"}
).json()
access_token = token_response["accessToken"]
access_token_refresh_after = int(token_response["refreshAfter"])
access_token_expiry_on = int(token_response["expiryOn"])
# finally, use that token to access the webservice
headers["Authorization"] = f"Bearer {access_token}"
if AUTH_METHOD == "Keys":
headers["Authorization"] = f"Bearer {AUTH_KEY}"
if AUTH_METHOD == "None":
# do nothing
pass
# --- make request and display response
response = requests.post(SCORING_URI, input_data, headers=headers, verify=True)
print(response.json())<jupyter_output>{'predicted_label': 'Bag', 'confidence': '1.0'}
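The comments above note that the webservice token expires and that the refresh logic is not implemented here. A minimal sketch of such a check, reusing the variables from this cell (and assuming `azure_access_token` itself is still valid; otherwise it has to be re-requested first):

```
import time

def refresh_webservice_token():
    # same token endpoint as above, wrapped so it can be called again when needed
    resp = requests.post(
        f"https://{REGION}.modelmanagement.azureml.net/modelmanagement/v1.0/subscriptions/{SUBSCRIPTION_ID}/resourceGroups/{RESOURCE_GROUP}/providers/Microsoft.MachineLearningServices/workspaces/{WORKSPACE_NAME}/services/{SERVICE_NAME}/token",
        headers={"Authorization": f"Bearer {azure_access_token}"},
    ).json()
    return resp["accessToken"], int(resp["refreshAfter"]), int(resp["expiryOn"])

# before each scoring call, refresh the webservice token if needed
if AUTH_METHOD == "Token" and time.time() >= access_token_refresh_after:
    access_token, access_token_refresh_after, access_token_expiry_on = refresh_webservice_token()
    headers["Authorization"] = f"Bearer {access_token}"
```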
| permissive | /mnist_fashion/04_consumption/consume-webservice.ipynb | anderl80/aml-template | 3 |
<jupyter_start><jupyter_text># Problem: Print all ancestors of a node in a binary tree.
Algorithm:
1. If the root or the target node is None, return False.
2. Append the root's value to the ancestor list.
3. If the root equals the target node, return True to the caller.
4. Recursively search the left and right subtrees for the node.
5. If found, return True; otherwise pop the value from the list and return False.
Time Complexity: O(n)
Space Complexity: O(n) (recursion stack plus ancestor list, each bounded by the tree height)<jupyter_code>def findAllAncestors(root, node, ancestors):
if root is None or node is None:
return False
ancestors.append(root.getData())
if root.getData() == node:
return True
if (findAllAncestors(root.getLeft(), node, ancestors)
or findAllAncestors(root.getRight(), node, ancestors)):
return True
ancestors.pop()
return False
class TreeNode:
def __init__(self, data, left=None, right=None):
self.data = data
self.left = left
self.right = right
def getData(self):
return self.data
def setData(self, data):
self.data = data
def getLeft(self):
return self.left
def getRight(self):
return self.right
def setLeft(self, left):
self.left = left
def setRight(self, right):
self.right = right
a = TreeNode(1)
b = TreeNode(2)
c = TreeNode(3)
a.setLeft(b)
a.setRight(c)
d = TreeNode(4)
e = TreeNode(5)
x = TreeNode(8)
y = TreeNode(9)
e.setLeft(x)
b.setLeft(d)
b.setRight(e)
f = TreeNode(6)
g = TreeNode(7)
f.setRight(y)
c.setLeft(f)
c.setRight(g)
ans = []
findAllAncestors(a, 9, ans)
print(ans)<jupyter_output>[1, 3, 6, 9]
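For completeness, a quick check of the not-found case on the same tree: every partial path is popped again, so the list ends up empty and the function returns False.

```
missing = []
print(findAllAncestors(a, 42, missing))  # False: 42 is not in the tree
print(missing)                           # []
```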
<jupyter_text># Alternative way of solving the problem<jupyter_code>def printAllAncestors(root, node):
if root is None or node is None:
return False
if ((root.getData() == node)
or printAllAncestors(root.getLeft(), node)
or printAllAncestors(root.getRight(), node)):
print(root.getData())
return True
return False
printAllAncestors(a,9)<jupyter_output>9
6
3
1
<jupyter_text># Solution without recursion.
Use an iterative postorder traversal: when the target node is at the top of the stack, the stack holds exactly the root-to-target path, i.e. the node's ancestors.
<jupyter_code>def findAncestorsIteratively(root, target):
if root is None or target is None:
return False
stack = []
node = root
prev_right = None
while node or stack:
if node:
stack.append(node)
node = node.getLeft()
else:
element = stack[-1]
if element.getData() == target:
break
if element.getRight() and prev_right!=element.getRight():
prev_right= node = element.getRight()
else:
stack.pop()
for i in stack:
print(i.getData())
findAncestorsIteratively(a,9)<jupyter_output>1
3
6
9
| no_license | /Trees/Binary Trees/Problems/.ipynb_checkpoints/AncestorsOfBinaryTree-checkpoint.ipynb | sumeet13/Algorithms-and-Data-Structures | 3 |
<jupyter_start><jupyter_text>Table of Contents
1 HISTOGRAM
2 QQPLOT
3 AGGREGATION PLOT (part 4)
## HISTOGRAM<jupyter_code>library(MASS)
# Create a histogram of counts with hist()
hist(Cars93$Horsepower, main = "hist() plot")
# Create a normalized histogram with truehist()
hist(Cars93$Horsepower, main = "truehist() plot", freq = FALSE)
# Add the density curve to the histogram
lines(density(Cars93$Horsepower))<jupyter_output><empty_output><jupyter_text>## QQPLOT<jupyter_code># Load the car package to make qqPlot() available
library(car)
# Create index16, pointing to 16-week chicks
index16 <- which(ChickWeight$Time == 16)
# Get the 16-week chick weights
weights <- ChickWeight[index16,"weight"]
# Show the normal QQ-plot of the chick weights
qqPlot(weights)
hist(weights, freq = FALSE)
# Show the normal QQ-plot of the Boston$tax data
qqPlot(Boston$tax)
hist(Boston$tax)<jupyter_output>Warning message:
"package 'car' was built under R version 3.5.2"<jupyter_text>## AGGREGATION PLOT (part4)<jupyter_code># Set up a two-by-two plot array
par(mfrow = c(2,2))
# Plot the raw duration data
plot(geyser$duration, main = "Raw data")
# Plot the normalized histogram of the duration data
truehist(geyser$duration, main = "Histogram")
# Plot the density of the duration data
plot(density(geyser$duration), main = "Density")
# Construct the normal QQ-plot of the duration data
qqPlot(geyser$duration, main = "QQ-plot")<jupyter_output><empty_output>
| no_license | /Data visualization - base R/.ipynb_checkpoints/2.1. One variable-checkpoint.ipynb | yoogun143/Datacamp_R | 3 |
<jupyter_start><jupyter_text>Original code author: François Chollet
github: https://github.com/fchollet/deep-learning-with-python-notebooks
Chinese annotations by: Huang Haiguang
github: https://github.com/fengdu78
All of the code has been tested and runs.
Environment: keras 2.2.1 (the original used 2.0.8; the results are identical), tensorflow 1.8, python 3.6,
Host: GPU: one 1080 Ti; RAM: 32 GB (note: most of the code does not need a GPU)
<jupyter_code>import keras
keras.__version__<jupyter_output>Using TensorFlow backend.
<jupyter_text># Using word embeddings
# 使用词嵌入
This notebook contains the second code sample found in Chapter 6, Section 1 of [Deep Learning with Python](https://www.manning.com/books/deep-learning-with-python?a_aid=keras&a_bid=76564dff). Note that the original text features far more content, in particular further explanations and figures: in this notebook, you will only find source code and related comments.
---
Another popular and powerful way to associate a vector with a word is the use of dense "word vectors", also called "word embeddings".
While the vectors obtained through one-hot encoding are binary, sparse (mostly made of zeros) and very high-dimensional (same dimensionality as the
number of words in the vocabulary), "word embeddings" are low-dimensional floating point vectors
(i.e. "dense" vectors, as opposed to sparse vectors).
Unlike word vectors obtained via one-hot encoding, word embeddings are learned from data.
It is common to see word embeddings that are 256-dimensional, 512-dimensional, or 1024-dimensional when dealing with very large vocabularies.
On the other hand, one-hot encoding words generally leads to vectors that are 20,000-dimensional or higher (capturing a vocabulary of 20,000
token in this case). So, word embeddings pack more information into far fewer dimensions.
There are two ways to obtain word embeddings:
* Learn word embeddings jointly with the main task you care about (e.g. document classification or sentiment prediction).
In this setup, you would start with random word vectors, then learn your word vectors in the same way that you learn the weights of a neural network.
* Load into your model word embeddings that were pre-computed using a different machine learning task than the one you are trying to solve.
These are called "pre-trained word embeddings".
Let's take a look at both.
## Learning word embeddings with the `Embedding` layer
The simplest way to associate a dense vector to a word would be to pick the vector at random. The problem with this approach is that the
resulting embedding space would have no structure: for instance, the words "accurate" and "exact" may end up with completely different
embeddings, even though they are interchangeable in most sentences. It would be very difficult for a deep neural network to make sense of
such a noisy, unstructured embedding space.
To get a bit more abstract: the geometric relationships between word vectors should reflect the semantic relationships between these words.
Word embeddings are meant to map human language into a geometric space. For instance, in a reasonable embedding space, we would expect
synonyms to be embedded into similar word vectors, and in general we would expect the geometric distance (e.g. L2 distance) between any two
word vectors to relate to the semantic distance of the associated words (words meaning very different things would be embedded to points
far away from each other, while related words would be closer). Even beyond mere distance, we may want specific __directions__ in the
embedding space to be meaningful.
[...]
In real-world word embedding spaces, common examples of meaningful geometric transformations are "gender vectors" and "plural vector". For
instance, by adding a "female vector" to the vector "king", one obtain the vector "queen". By adding a "plural vector", one obtain "kings".
Word embedding spaces typically feature thousands of such interpretable and potentially useful vectors.
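As a hedged illustration of such meaningful directions (the 3-dimensional vectors below are invented toy values, not real embeddings), analogy arithmetic can be checked with cosine similarity:

    import numpy as np

    vectors = {
        "king":   np.array([0.9, 0.8, 0.1]),
        "queen":  np.array([0.9, 0.8, 0.9]),
        "male":   np.array([0.0, 0.0, 0.1]),
        "female": np.array([0.0, 0.0, 0.9]),
    }

    def cosine(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    # king - male + female should land closest to queen in a well-structured space
    candidate = vectors["king"] - vectors["male"] + vectors["female"]
    print(max(vectors, key=lambda w: cosine(candidate, vectors[w])))  # queen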
Is there some "ideal" word embedding space that would perfectly map human language and could be used for any natural language processing
task? Possibly, but in any case, we have yet to compute anything of the sort. Also, there isn't such a thing as "human language", there are
many different languages and they are not isomorphic, as a language is the reflection of a specific culture and a specific context. But more
pragmatically, what makes a good word embedding space depends heavily on your task: the perfect word embedding space for an
English-language movie review sentiment analysis model may look very different from the perfect embedding space for an English-language
legal document classification model, because the importance of certain semantic relationships varies from task to task.
It is thus reasonable to __learn__ a new embedding space with every new task. Thankfully, backpropagation makes this really easy, and Keras makes it
even easier. It's just about learning the weights of a layer: the `Embedding` layer.
<jupyter_code>from keras.layers import Embedding
# The Embedding layer takes at least two arguments:
# the number of possible tokens, here 1000 (1 + maximum word index),
# and the dimensionality of the embeddings, here 64.
embedding_layer = Embedding(1000, 64)<jupyter_output><empty_output><jupyter_text>
The `Embedding` layer is best understood as a dictionary mapping integer indices (which stand for specific words) to dense vectors. It takes
as input integers, it looks up these integers into an internal dictionary, and it returns the associated vectors. It's effectively a dictionary lookup.
The `Embedding` layer takes as input a 2D tensor of integers, of shape `(samples, sequence_length)`, where each entry is a sequence of
integers. It can embed sequences of variable lengths, so for instance we could feed into our embedding layer above batches that could have
shapes `(32, 10)` (batch of 32 sequences of length 10) or `(64, 15)` (batch of 64 sequences of length 15). All sequences in a batch must
have the same length, though (since we need to pack them into a single tensor), so sequences that are shorter than others should be padded
with zeros, and sequences that are longer should be truncated.
This layer returns a 3D floating point tensor, of shape `(samples, sequence_length, embedding_dimensionality)`. Such a 3D tensor can then
be processed by an RNN layer or a 1D convolution layer (both will be introduced in the next sections).
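A minimal sketch of those shapes (assuming the same Keras setup as the rest of this notebook):

    import numpy as np
    from keras.models import Sequential
    from keras.layers import Embedding

    model = Sequential()
    model.add(Embedding(1000, 64, input_length=10))    # 1000 possible tokens, 64-dim vectors
    batch = np.random.randint(0, 1000, size=(32, 10))  # 32 integer sequences of length 10
    print(model.predict(batch).shape)                  # (32, 10, 64)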
When you instantiate an `Embedding` layer, its weights (its internal dictionary of token vectors) are initially random, just like with any
other layer. During training, these word vectors will be gradually adjusted via backpropagation, structuring the space into something that the
downstream model can exploit. Once fully trained, your embedding space will show a lot of structure -- a kind of structure specialized for
the specific problem you were training your model for.
Let's apply this idea to the IMDB movie review sentiment prediction task that you are already familiar with. Let's quickly prepare
the data. We will restrict the movie reviews to the top 10,000 most common words (like we did the first time we worked with this dataset),
and cut the reviews after only 20 words. Our network will simply learn 8-dimensional embeddings for each of the 10,000 words, turn the
input integer sequences (2D integer tensor) into embedded sequences (3D float tensor), flatten the tensor to 2D, and train a single `Dense`
layer on top for classification.
<jupyter_code>from keras.datasets import imdb
from keras import preprocessing
# Number of words to consider as features
max_features = 10000
# Cut texts after this number of words
# (among top max_features most common words)
maxlen = 20
# Load the data as lists of integers.
(x_train, y_train), (x_test, y_test) = imdb.load_data(num_words=max_features)
# This turns our lists of integers
# into a 2D integer tensor of shape `(samples, maxlen)`
x_train = preprocessing.sequence.pad_sequences(x_train, maxlen=maxlen)
x_test = preprocessing.sequence.pad_sequences(x_test, maxlen=maxlen)
from keras.models import Sequential
from keras.layers import Flatten, Dense
model = Sequential()
# We specify the maximum input length to our Embedding layer
# so we can later flatten the embedded inputs
model.add(Embedding(10000, 8, input_length=maxlen))
# After the Embedding layer,
# our activations have shape `(samples, maxlen, 8)`.
# We flatten the 3D tensor of embeddings
# into a 2D tensor of shape `(samples, maxlen * 8)`
model.add(Flatten())
# We add the classifier on top
model.add(Dense(1, activation='sigmoid'))
model.compile(optimizer='rmsprop', loss='binary_crossentropy', metrics=['acc'])
model.summary()
history = model.fit(x_train, y_train,
epochs=10,
batch_size=32,
validation_split=0.2)<jupyter_output>_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
embedding_2 (Embedding) (None, 20, 8) 80000
_________________________________________________________________
flatten_1 (Flatten) (None, 160) 0
_________________________________________________________________
dense_1 (Dense) (None, 1) 161
=================================================================
Total params: 80,161
Trainable params: 80,161
Non-trainable params: 0
_________________________________________________________________
Train on 20000 samples, validate on 5000 samples
Epoch 1/10
20000/20000 [==============================] - 5s 237us/step - loss: 0.6759 - acc: 0.6050 - val_loss: 0.6398 - val_acc: 0.6814
Epoch 2/10
20000/20000 [==============================] - 3s 126us/step - loss: 0.5[...]<jupyter_text>We get to a validation accuracy of ~76%, which is pretty good considering that we only look at the first 20 words in every review. But
note that merely flattening the embedded sequences and training a single `Dense` layer on top leads to a model that treats each word in the
input sequence separately, without considering inter-word relationships and sentence structure (e.g. it would likely treat both _"this movie
is shit"_ and _"this movie is the shit"_ as being negative "reviews"). It would be much better to add recurrent layers or 1D convolutional
layers on top of the embedded sequences to learn features that take into account each sequence as a whole. That's what we will focus on in
the next few sections.
## Using pre-trained word embeddings
Sometimes, you have so little training data available that you could never use your data alone to learn an appropriate task-specific embedding
of your vocabulary. What to do then?
Instead of learning word embeddings jointly with the problem you want to solve, you can load embedding vectors from a pre-computed
embedding space known to be highly structured and to exhibit useful properties -- that captures generic aspects of language structure. The
rationale behind using pre-trained word embeddings in natural language processing is very much the same as for using pre-trained convnets
in image classification: we don't have enough data available to learn truly powerful features on our own, but we expect the features that
we need to be fairly generic, i.e. common visual features or semantic features. In this case it makes sense to reuse features learned on a
different problem.
Such word embeddings are generally computed using word occurrence statistics (observations about what words co-occur in sentences or
documents), using a variety of techniques, some involving neural networks, others not. The idea of a dense, low-dimensional embedding space
for words, computed in an unsupervised way, was initially explored by Bengio et al. in the early 2000s, but it only started really taking
off in research and industry applications after the release of one of the most famous and successful word embedding scheme: the Word2Vec
algorithm, developed by Mikolov at Google in 2013. Word2Vec dimensions capture specific semantic properties, e.g. gender.
There are various pre-computed databases of word embeddings that you can download and start using in a Keras `Embedding` layer. Word2Vec is one
of them. Another popular one is called "GloVe", developed by Stanford researchers in 2014. It stands for "Global Vectors for Word
Representation", and it is an embedding technique based on factorizing a matrix of word co-occurrence statistics. Its developers have made
available pre-computed embeddings for millions of English tokens, obtained from Wikipedia data or from Common Crawl data.
Let's take a look at how you can get started using GloVe embeddings in a Keras model. The same method will of course be valid for Word2Vec
embeddings or any other word embedding database that you can download. We will also use this example to refresh the text tokenization
techniques we introduced a few paragraphs ago: we will start from raw text, and work our way up.
## Putting it all together: from raw text to word embeddings
We will be using a model similar to the one we just went over -- embedding sentences in sequences of vectors, flattening them and training a
`Dense` layer on top. But we will do it using pre-trained word embeddings, and instead of using the pre-tokenized IMDB data packaged in
Keras, we will start from scratch, by downloading the original text data.
### Download the IMDB data as raw text
First, head to `http://ai.stanford.edu/~amaas/data/sentiment/` and download the raw IMDB dataset (if the URL isn't working anymore, just
Google "IMDB dataset"). Uncompress it.
Now let's collect the individual training reviews into a list of strings, one string per review, and let's also collect the review labels
(positive / negative) into a `labels` list:
Note: this step has already been done; there is no need to download the data again, it is already in the 'data/aclImdb' directory.
<jupyter_code>import os
imdb_dir = 'data/aclImdb'
train_dir = os.path.join(imdb_dir, 'train')
labels = []
texts = []
for label_type in ['neg', 'pos']:
dir_name = os.path.join(train_dir, label_type)
for fname in os.listdir(dir_name):
if fname[-4:] == '.txt':
f = open(os.path.join(dir_name, fname),encoding='utf8')
texts.append(f.read())
f.close()
if label_type == 'neg':
labels.append(0)
else:
labels.append(1)<jupyter_output><empty_output><jupyter_text>### Tokenize the data
Let's vectorize the texts we collected, and prepare a training and validation split.
We will merely be using the concepts we introduced earlier in this section.
Because pre-trained word embeddings are meant to be particularly useful on problems where little training data is available (otherwise,
task-specific embeddings are likely to outperform them), we will add the following twist: we restrict the training data to its first 200
samples. So we will be learning to classify movie reviews after looking at just 200 examples...
<jupyter_code>from keras.preprocessing.text import Tokenizer
from keras.preprocessing.sequence import pad_sequences
import numpy as np
maxlen = 100 # We will cut reviews after 100 words
training_samples = 200 # We will be training on 200 samples
validation_samples = 10000 # We will be validating on 10000 samples
max_words = 10000 # We will only consider the top 10,000 words in the dataset
tokenizer = Tokenizer(num_words=max_words)
tokenizer.fit_on_texts(texts)
sequences = tokenizer.texts_to_sequences(texts)
word_index = tokenizer.word_index
print('Found %s unique tokens.' % len(word_index))
data = pad_sequences(sequences, maxlen=maxlen)
labels = np.asarray(labels)
print('Shape of data tensor:', data.shape)
print('Shape of label tensor:', labels.shape)
# Split the data into a training set and a validation set
# But first, shuffle the data, since we started from data
# where sample are ordered (all negative first, then all positive).
indices = np.arange(data.shape[0])
np.random.shuffle(indices)
data = data[indices]
labels = labels[indices]
x_train = data[:training_samples]
y_train = labels[:training_samples]
x_val = data[training_samples: training_samples + validation_samples]
y_val = labels[training_samples: training_samples + validation_samples]<jupyter_output>Found 88582 unique tokens.
Shape of data tensor: (25000, 100)
Shape of label tensor: (25000,)
<jupyter_text>### Download the GloVe word embeddings
Head to `https://nlp.stanford.edu/projects/glove/` (where you can learn more about the GloVe algorithm), and download the pre-computed
embeddings from 2014 English Wikipedia. It's an 822 MB zip file named `glove.6B.zip`, containing 100-dimensional embedding vectors for
400,000 words (or non-word tokens). Un-zip it.
Note: the embeddings have already been downloaded; the file is located at 'data/glove.6B.100d.txt'.
### Pre-process the embeddings
Let's parse the un-zipped file (it's a `txt` file) to build an index mapping words (as strings) to their vector representation (as number
vectors).
<jupyter_code>glove_dir = 'data/'
embeddings_index = {}
f = open(os.path.join(glove_dir, 'glove.6B.100d.txt'),encoding='utf8')
for line in f:
values = line.split()
word = values[0]
coefs = np.asarray(values[1:], dtype='float32')
embeddings_index[word] = coefs
f.close()
print('Found %s word vectors.' % len(embeddings_index))<jupyter_output>Found 400000 word vectors.
<jupyter_text>
Now let's build an embedding matrix that we will be able to load into an `Embedding` layer. It must be a matrix of shape `(max_words,
embedding_dim)`, where each entry `i` contains the `embedding_dim`-dimensional vector for the word of index `i` in our reference word index
(built during tokenization). Note that the index `0` is not supposed to stand for any word or token -- it's a placeholder.
<jupyter_code>embedding_dim = 100
embedding_matrix = np.zeros((max_words, embedding_dim))
for word, i in word_index.items():
embedding_vector = embeddings_index.get(word)
if i < max_words:
if embedding_vector is not None:
# Words not found in embedding index will be all-zeros.
embedding_matrix[i] = embedding_vector<jupyter_output><empty_output><jupyter_text>### Define a model
We will be using the same model architecture as before:
<jupyter_code>from keras.models import Sequential
from keras.layers import Embedding, Flatten, Dense
model = Sequential()
model.add(Embedding(max_words, embedding_dim, input_length=maxlen))
model.add(Flatten())
model.add(Dense(32, activation='relu'))
model.add(Dense(1, activation='sigmoid'))
model.summary()<jupyter_output>_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
embedding_3 (Embedding) (None, 100, 100) 1000000
_________________________________________________________________
flatten_2 (Flatten) (None, 10000) 0
_________________________________________________________________
dense_2 (Dense) (None, 32) 320032
_________________________________________________________________
dense_3 (Dense) (None, 1) 33
=================================================================
Total params: 1,320,065
Trainable params: 1,320,065
Non-trainable params: 0
_________________________________________________________________
<jupyter_text>### Load the GloVe embeddings in the model
The `Embedding` layer has a single weight matrix: a 2D float matrix where each entry `i` is the word vector meant to be associated with
index `i`. Simple enough. Let's just load the GloVe matrix we prepared into our `Embedding` layer, the first layer in our model:
<jupyter_code>model.layers[0].set_weights([embedding_matrix])
model.layers[0].trainable = False<jupyter_output><empty_output><jupyter_text>
Additionally, we freeze the embedding layer (we set its `trainable` attribute to `False`), following the same rationale as what you are
already familiar with in the context of pre-trained convnet features: when parts of a model are pre-trained (like our `Embedding` layer),
and parts are randomly initialized (like our classifier), the pre-trained parts should not be updated during training to avoid forgetting
what they already know. The large gradient update triggered by the randomly initialized layers would be very disruptive to the already
learned features.
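As a side note, the same effect can also be obtained at construction time; a minimal sketch, assuming `embedding_matrix`, `max_words`, `embedding_dim` and `maxlen` are defined as above (this is an alternative, not how this notebook proceeds):

    from keras.models import Sequential
    from keras.layers import Embedding, Flatten, Dense

    model = Sequential()
    # Initialize the Embedding layer with the GloVe matrix and freeze it in one step,
    # instead of calling set_weights() and setting .trainable afterwards.
    model.add(Embedding(max_words, embedding_dim, input_length=maxlen,
                        weights=[embedding_matrix], trainable=False))
    model.add(Flatten())
    model.add(Dense(32, activation='relu'))
    model.add(Dense(1, activation='sigmoid'))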
### Train and evaluate
Let's compile our model and train it:
<jupyter_code>model.compile(optimizer='rmsprop',
loss='binary_crossentropy',
metrics=['acc'])
history = model.fit(x_train, y_train,
epochs=10,
batch_size=32,
validation_data=(x_val, y_val))
model.save_weights('pre_trained_glove_model.h5')<jupyter_output>Train on 200 samples, validate on 10000 samples
Epoch 1/10
200/200 [==============================] - 1s 4ms/step - loss: 1.6337 - acc: 0.5250 - val_loss: 0.7130 - val_acc: 0.5100
Epoch 2/10
200/200 [==============================] - 1s 3ms/step - loss: 0.7565 - acc: 0.5800 - val_loss: 0.6910 - val_acc: 0.5418
Epoch 3/10
200/200 [==============================] - 1s 3ms/step - loss: 0.5956 - acc: 0.6950 - val_loss: 1.1205 - val_acc: 0.4936
Epoch 4/10
200/200 [==============================] - 1s 3ms/step - loss: 0.5335 - acc: 0.7350 - val_loss: 0.7134 - val_acc: 0.5362
Epoch 5/10
200/200 [==============================] - 1s 3ms/step - loss: 0.4713 - acc: 0.8100 - val_loss: 0.7177 - val_acc: 0.5589
Epoch 6/10
200/200 [==============================] - 0s 2ms/step - loss: 0.1448 - acc: 0.9800 - val_loss: 1.3373 - val_acc: 0.4952
Epoch 7/10
200/200 [==============================] - 1s 3ms/step - loss: 0.2545 - acc: 0.8800 - val_loss: 1.3110 - val_acc: 0.4960
Epoch 8/10
200/200 [========[...]<jupyter_text>Let's plot its performance over time:
<jupyter_code>import matplotlib.pyplot as plt
acc = history.history['acc']
val_acc = history.history['val_acc']
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs = range(1, len(acc) + 1)
plt.plot(epochs, acc, 'bo', label='Training acc')
plt.plot(epochs, val_acc, 'b', label='Validation acc')
plt.title('Training and validation accuracy')
plt.legend()
plt.figure()
plt.plot(epochs, loss, 'bo', label='Training loss')
plt.plot(epochs, val_loss, 'b', label='Validation loss')
plt.title('Training and validation loss')
plt.legend()
plt.show()<jupyter_output><empty_output><jupyter_text>
The model quickly starts overfitting, unsurprisingly given the small number of training samples. Validation accuracy has high variance for
the same reason, but seems to reach the high 60s.
Note that your mileage may vary: since we have so few training samples, performance is heavily dependent on which exact 200 samples we
picked, and we picked them at random. If it worked really poorly for you, try picking a different random set of 200 samples, just for the
sake of the exercise (in real life you don't get to pick your training data).
We can also try to train the same model without loading the pre-trained word embeddings and without freezing the embedding layer. In that
case, we would be learning a task-specific embedding of our input tokens, which is generally more powerful than pre-trained word embeddings
when lots of data is available. However, in our case, we have only 200 training samples. Let's try it:
<jupyter_code>from keras.models import Sequential
from keras.layers import Embedding, Flatten, Dense
model = Sequential()
model.add(Embedding(max_words, embedding_dim, input_length=maxlen))
model.add(Flatten())
model.add(Dense(32, activation='relu'))
model.add(Dense(1, activation='sigmoid'))
model.summary()
model.compile(optimizer='rmsprop',
loss='binary_crossentropy',
metrics=['acc'])
history = model.fit(x_train, y_train,
epochs=10,
batch_size=32,
validation_data=(x_val, y_val))
acc = history.history['acc']
val_acc = history.history['val_acc']
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs = range(1, len(acc) + 1)
plt.plot(epochs, acc, 'bo', label='Training acc')
plt.plot(epochs, val_acc, 'b', label='Validation acc')
plt.title('Training and validation accuracy')
plt.legend()
plt.figure()
plt.plot(epochs, loss, 'bo', label='Training loss')
plt.plot(epochs, val_loss, 'b', label='Validation loss')
plt.title('Training and validation loss')
plt.legend()
plt.show()<jupyter_output><empty_output><jupyter_text>
Validation accuracy stalls in the low 50s. So in our case, pre-trained word embeddings does outperform jointly learned embeddings. If you
increase the number of training samples, this will quickly stop being the case -- try it as an exercise.
Finally, let's evaluate the model on the test data. First, we will need to tokenize the test data:
<jupyter_code>test_dir = os.path.join(imdb_dir, 'test')
labels = []
texts = []
for label_type in ['neg', 'pos']:
dir_name = os.path.join(test_dir, label_type)
for fname in sorted(os.listdir(dir_name)):
if fname[-4:] == '.txt':
f = open(os.path.join(dir_name, fname),encoding='utf8')
texts.append(f.read())
f.close()
if label_type == 'neg':
labels.append(0)
else:
labels.append(1)
sequences = tokenizer.texts_to_sequences(texts)
x_test = pad_sequences(sequences, maxlen=maxlen)
y_test = np.asarray(labels)<jupyter_output><empty_output><jupyter_text>And let's load and evaluate the first model:
<jupyter_code>model.load_weights('pre_trained_glove_model.h5')
model.evaluate(x_test, y_test)<jupyter_output>25000/25000 [==============================] - 1s 59us/step
<jupyter_start><jupyter_text><jupyter_code>import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline
import os
import zipfile
!pip install -U -q PyDrive
from pydrive.auth import GoogleAuth
from pydrive.drive import GoogleDrive
from google.colab import auth
from oauth2client.client import GoogleCredentials
import shutil
## GOOGLE DRIVE AUTHENTICATION
auth.authenticate_user()
gauth = GoogleAuth()
gauth.credentials = GoogleCredentials.get_application_default()
drive = GoogleDrive(gauth)
## SEARCHING FOR THE ZIP FILE AND THEN UNZIPPING IT IN COLAB AND DELETING UNNECESSARY FILES
fid = drive.ListFile({'q':"title='train.csv'"}).GetList()[0]['id']
f = drive.CreateFile({'id': fid})
f.GetContentFile('train.csv')
fid = drive.ListFile({'q':"title='test.csv'"}).GetList()[0]['id']
f = drive.CreateFile({'id': fid})
f.GetContentFile('test.csv')
df = pd.read_csv('train.csv')
df.head()
test = pd.read_csv('test.csv')
test.head()
print(df.shape)
df.describe()
print(test.shape)
test.describe()
df.isna().sum().sort_values(ascending=False)
test.isna().sum().sort_values(ascending=False)<jupyter_output><empty_output><jupyter_text>### Cleaning and analysing the dataset
Let's remove a few columns that do not look important, as well as a few columns that have a lot of NaNs.<jupyter_code>df['belongs_to_collection'][0]
df['homepage'][:5]
## tagline features seems important but there are too many NaN values(1/6th of dataset).
df['tagline'][0]
df['Keywords'][0]
df['production_companies'][0]
df['production_countries'][0]
df['spoken_languages'][0]
df['crew'][0]
df['cast'][0]
df['overview'][0]
dropping_cols = ['belongs_to_collection','homepage','tagline']
df.drop(dropping_cols,axis=1,inplace=True)
print(df.shape)
df.head()
dropping_cols = ['belongs_to_collection','homepage','tagline']
test.drop(dropping_cols,axis=1,inplace=True)
print(test.shape)
test.head()
df.isna().sum().sort_values(ascending=False)
test.isna().sum().sort_values(ascending=False)
df.dropna(inplace=True)
df.shape
test.dropna(inplace=True)
test.shape
df.isna().sum().sort_values(ascending=False)
test.isna().sum().sort_values(ascending=False)<jupyter_output><empty_output><jupyter_text>## Let's start to pre-process the dataset to use it for prediction<jupyter_code>sns.countplot(x='status',data=df)
sns.boxplot(data=df,y='runtime')
df.info()
sns.jointplot(x='revenue',y='popularity',data=df)
sns.jointplot(x='revenue',y='budget',data=df)
sns.jointplot(x='revenue',y='runtime',data=df)
plt.figure(figsize=(12,12))
sns.distplot(df['revenue'],rug=True)
plt.figure(figsize=(13,20))
sns.countplot(y='original_language',data=df)<jupyter_output>/usr/local/lib/python3.6/dist-packages/seaborn/categorical.py:1428: FutureWarning: remove_na is deprecated and is a private function. Do not use.
stat_data = remove_na(group_data)
<jupyter_text>The main language of the movies is English.<jupyter_code>df['status'].value_counts()
print(df['revenue'][df['status']=='Rumored'])
## applying log to revenue to normally distribute.
df['logrevenue'] = np.log1p(df['revenue'])
sns.distplot(df['logrevenue'])
df['release_date'] = pd.to_datetime(df['release_date'], infer_datetime_format=True)
df['release_day'] = df['release_date'].apply(lambda t: t.day)
df['release_weekday'] = df['release_date'].apply(lambda t: t.weekday())
df['release_month'] = df['release_date'].apply(lambda t: t.month)
# Year was being interpreted as future dates in some cases so I had to adjust some values
df['release_year'] = df['release_date'].apply(lambda t: t.year if t.year < 2018 else t.year -100)
df[['release_date','release_day','release_weekday','release_month','release_year']].head()
test['release_date'] = pd.to_datetime(test['release_date'], infer_datetime_format=True)
test['release_day'] = test['release_date'].apply(lambda t: t.day)
test['release_weekday'] = test['release_date'].apply(lambda t: t.weekday())
test['release_month'] = test['release_date'].apply(lambda t: t.month)
# Year was being interpreted as future dates in some cases so I had to adjust some values
test['release_year'] = test['release_date'].apply(lambda t: t.year if t.year < 2018 else t.year -100)
test[['release_date','release_day','release_weekday','release_month','release_year']].head()
print(len(df[df['runtime']==0]))
print(len(test[test['runtime']==0]))
from collections import defaultdict
def map_runtime(df):
    # Replace missing or zero runtimes with the mean runtime of the movie's release year
    df['runtime'] = df['runtime'].fillna(0)
    run = df[(df['runtime'].notnull()) & (df['runtime'] != 0)]
    year_mean = run.groupby(['release_year'])['runtime'].agg('mean')
    d = dict(year_mean)
    missing = df['runtime'] == 0
    df.loc[missing, 'runtime'] = df.loc[missing, 'release_year'].map(d)
    return df
test = map_runtime(test)
sns.countplot(x='release_month',data=df)<jupyter_output>/usr/local/lib/python3.6/dist-packages/seaborn/categorical.py:1428: FutureWarning: remove_na is deprecated and is a private function. Do not use.
stat_data = remove_na(group_data)
<jupyter_text>Many movies are released in September and October.<jupyter_code>plt.figure(figsize=(12,10))
sns.countplot(x='release_day',data=df)<jupyter_output>/usr/local/lib/python3.6/dist-packages/seaborn/categorical.py:1428: FutureWarning: remove_na is deprecated and is a private function. Do not use.
stat_data = remove_na(group_data)
<jupyter_text>Most of the movies are released on either the 1st or the 15th of the month.<jupyter_code>plt.figure(figsize=(25,15))
sns.countplot(x='release_year',data=df)
plt.xticks(rotation=90)<jupyter_output>/usr/local/lib/python3.6/dist-packages/seaborn/categorical.py:1428: FutureWarning: remove_na is deprecated and is a private function. Do not use.
stat_data = remove_na(group_data)
<jupyter_text>The largest number of movies were released between 2013 and 2017.<jupyter_code>sns.countplot(x='release_weekday',data=df)
stat_data = remove_na(group_data)
<jupyter_text>Most of the movies are released on Friday, Thursday, and Wednesday.<jupyter_code>df['genres'].describe()
len(df[df['budget']==0])
import scipy.stats as stats
X = df[df['budget'] != 0]
for i in X.select_dtypes(include='number', exclude='datetime'):
print(i, stats.pearsonr(X.budget, X[i]))
# release_year and popularity correlate most strongly with budget
def map_budget(df):
    # Replace zero budgets with the mean budget of the movie's release year
    X = df[df['budget'] != 0]
    year_mean = pd.Series(X.groupby(['release_year'])['budget'].agg('mean'))
    d = dict(year_mean)
    missing = df['budget'] == 0
    df.loc[missing, 'budget'] = df.loc[missing, 'release_year'].map(d)
    # In a few cases, there are only 1 or 2 movies provided from a given year and these stay NaN,
    # so forward-fill after sorting by release year
    df.budget = df.sort_values(by='release_year').budget.fillna(method='ffill')
    return df
df = map_budget(df)
df.budget.describe()
print(len(df[df['budget']==0]))
len(test[test['budget']==0])
test = map_budget(test)
len(test[test['budget']==0])
df.head()
test.head()
df['poster_path'].fillna(0, inplace=True)
df.loc[df['poster_path'] != 0, 'poster_path'] = 1
test['poster_path'].fillna(0, inplace=True)
test.loc[test['poster_path'] != 0, 'poster_path'] = 1
genres= df.genres.str.get_dummies(sep=',')
df =pd.concat([df,genres],axis=1)
df.drop(['genres'],axis=1,inplace=True)
df.head()
df.shape
genres= test.genres.str.get_dummies(sep=',')
test =pd.concat([test,genres],axis=1)
test.drop(['genres'],axis=1,inplace=True)
test.shape
dropping_cols = ['id','imdb_id','original_title','overview','production_companies','status','title','Keywords','cast','crew','production_countries','release_date','spoken_languages']
df.drop(dropping_cols,axis=1,inplace=True)
print(df.shape)
df.head()
dropping_cols = ['id','imdb_id','original_title','overview','production_companies','status','title','Keywords','cast','crew','production_countries','release_date','spoken_languages']
test.drop(dropping_cols,axis=1,inplace=True)
print(test.shape)
test.head()
just_dummies = pd.get_dummies(df['original_language'])
df = pd.concat([df, just_dummies], axis=1)
df.drop(['original_language'], inplace=True, axis=1)
print(df.shape)
df.head()
just_dummies = pd.get_dummies(test['original_language'])
test = pd.concat([test, just_dummies], axis=1)
test.drop(['original_language'], inplace=True, axis=1)
print(test.shape)
test.head()
df.reset_index(inplace=True)
test.reset_index(inplace=True)
print(df.shape)
df.head(5)
print(test.shape)
test.head(5)<jupyter_output>(3814, 121)
<jupyter_text>## Building model for training and predictions <jupyter_code>from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report
X = df.drop(['revenue'],axis=1)
y = df['revenue']
X_train,X_test, y_train, y_test = train_test_split(X,y,test_size=0.20,random_state=0)
print("Shape of X_train is : ",X_train.shape[0])
print("Shape of y_train is : ",y_train.shape[0])
print("Shape of X_test is : ",X_test.shape[0])
print("Shape of y_test is : ",y_test.shape[0])
X_train.head()
y_train.head()
X_test.head()
y_test.head()
from sklearn.linear_model import LinearRegression
from xgboost import XGBRegressor
from lightgbm import LGBMRegressor
lr_ = LinearRegression()
lr_.fit(X_train,y_train)
pred = lr_.predict(X_test)
sns.distplot((y_test-pred),bins=50)
plt.show()
def rmsle(y_true, y_pred):
return 'rmsle', np.sqrt(np.mean(np.power(np.log1p(y_pred) - np.log1p(y_true), 2))), False
from sklearn import metrics
print('MAE:', metrics.mean_absolute_error(y_test, pred))
print('MSE:', metrics.mean_squared_error(y_test, pred))
print('RMSE:', np.sqrt(metrics.mean_squared_error(y_test, pred)))
print('RMSLE:', rmsle(y_test, pred))
lr = LGBMRegressor(boosting_type='dart',num_leaves=20,max_depth=-1,min_data_in_leaf=20, learning_rate=0.2,n_estimators=500,subsample_for_bin=200000,
class_weight=None,min_split_gain=0.0,min_child_weight=0.001,subsample=0.1,subsample_freq=0,colsample_bytree=0.75,reg_alpha=0.0,reg_lambda=0.0,
random_state=101,n_jobs=-1)
lr.fit(X_train, y_train,eval_set=[(X_test, y_test)],eval_metric=rmsle,verbose=False)
pred = lr.predict(X_test, num_iteration=lr.best_iteration_)
print('MAE:', metrics.mean_absolute_error(y_test, pred))
print('MSE:', metrics.mean_squared_error(y_test, pred))
print('RMSE:', np.sqrt(metrics.mean_squared_error(y_test, pred)))
print('RMSLE:', rmsle(y_test, pred))
fig, ax = plt.subplots()
ax.scatter(y_test, pred)
ax.plot([y_test.min(), y_test.max()], [y_test.min(), y_test.max()], lw=1)
ax.set_xlabel('Measured')
ax.set_ylabel('Predicted')
plt.show()
lr.best_score_
# features = list(X.columns)
importance = lr.feature_importances_
# feature_indexes_by_importance = importance.argsort()
# for index in feature_indexes_by_importance:
# print('{}-{:.2f}%'.format(features[index], (importance[index] *100.0)))
importance
lr.n_estimators<jupyter_output><empty_output>
<jupyter_start><jupyter_text># Data Scaling
## StandardScaler (Standardization)
y = (x – mean) / standard_deviation
Where the mean is calculated as:
mean = sum(x) / count(x)
And the standard_deviation is calculated as:
standard_deviation = sqrt( sum( (x – mean)^2 ) / count(x))
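As a quick hand check of the formula on the first column of the toy data used in the next cell ([0, 0, 1, 1]): mean = 0.5 and standard_deviation = 0.5, so every value maps to ±1.

    import numpy as np
    x = np.array([0., 0., 1., 1.])
    print((x - x.mean()) / x.std())   # [-1. -1.  1.  1.]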
<jupyter_code>from sklearn.preprocessing import StandardScaler
data = [[0, 0], [0, 0], [1, 1], [1, 1]]
scaler = StandardScaler()
scaler.fit(data)
print(scaler.mean_)
newdata = scaler.transform(data)
print(newdata)
# or
newdata = scaler.fit_transform(data)
print(newdata)<jupyter_output>[[-1. -1.]
[-1. -1.]
[ 1. 1.]
[ 1. 1.]]
<jupyter_text>Using the scalers on train and test data (an illustrative example, with no data attached). Fit on the training data only, then reuse the fitted scaler to transform the test data:

    sc_X = StandardScaler()
    X_train = sc_X.fit_transform(X_train)
    X_test = sc_X.transform(X_test)
    sc_y = StandardScaler()
    y_train = sc_y.fit_transform(y_train)
    y_test = sc_y.transform(y_test)
## MinMaxScaler (Normalization)
y = (x – min) / (max – min)
Where the minimum and maximum values pertain to the value x being normalized
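As a quick hand check on the first column of the toy data used in the next cell ([-1, -0.5, 0, 1]): min = -1 and max = 1, so the scaled values are [0, 0.25, 0.5, 1].

    import numpy as np
    x = np.array([-1., -0.5, 0., 1.])
    print((x - x.min()) / (x.max() - x.min()))   # [0.   0.25 0.5  1.  ]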
<jupyter_code>from sklearn.preprocessing import MinMaxScaler
data = [[-1, 2], [-0.5, 6], [0, 10], [1, 18]]
scaler = MinMaxScaler()
scaler.fit(data)
print(scaler.data_range_)
print(scaler.data_min_)
print(scaler.data_max_)
newdata = scaler.transform(data)
print(newdata)<jupyter_output>[ 2. 16.]
[-1. 2.]
[ 1. 18.]
[[0. 0. ]
[0.25 0.25]
[0.5 0.5 ]
[1. 1. ]]
<jupyter_text>## Normalizer
It operates on each row of a 2D array independently, rather than on columns.<jupyter_code>from sklearn.preprocessing import Normalizer
X = [[4, 1, 2, 2], [1, 3, 9, 3], [5, 7, 5, 1]]
# norm='l1' scales each row so that the sum of its absolute values equals 1
#transformer = Normalizer(norm='l1' )
# norm='l2' scales each row to unit Euclidean norm (the square root of the sum of squares equals 1)
#transformer = Normalizer(norm='l2' )
# norm='max' divides each row by its maximum absolute value
transformer = Normalizer(norm='max' )
transformer.fit(X)
transformer.transform(X)<jupyter_output><empty_output><jupyter_text>## MaxAbsScaler
Similar to Normalizer, but it works per column rather than per row: each column is divided by its maximum
absolute value, and the other entries are rescaled accordingly.<jupyter_code>from sklearn.preprocessing import MaxAbsScaler
X = [[ 1., 10., 2.],
[ 2., 0., 0.],
[ 5., 1., -1.]]
transformer = MaxAbsScaler().fit(X)
transformer
transformer.transform(X)<jupyter_output><empty_output><jupyter_text>## FunctionTransformer
It is used to apply a custom scaling using a function that you write yourself.
FunctionTransformer(func=None, inverse_func=None, validate= None,
accept_sparse=False,pass_y='deprecated', check_inverse=True,
kw_args=None,inv_kw_args=None)<jupyter_code>import numpy as np
from sklearn.preprocessing import FunctionTransformer
X = [[4, 1, 2, 2], [1, 3, 9, 3], [5, 7, 5, 1]]
def function1(z):
return np.sqrt(z)
FT = FunctionTransformer(func = function1)
FT.fit(X)
newdata = FT.transform(X)
newdata<jupyter_output><empty_output><jupyter_text>## Binarizer
It converts every value to 0 or 1, depending on the threshold value.<jupyter_code>from sklearn.preprocessing import Binarizer
X = [[ 1., -1., -2.],[ 2., 0., -1.], [ 0., 1., -1.]]
transformer = Binarizer(threshold=1.5 )
transformer.fit(X)
transformer
transformer.transform(X)<jupyter_output><empty_output><jupyter_text>## PolynomialFeatures
It creates new features as polynomial combinations of the existing features. For example, with degree 2 and only two original features a and b, it produces, in order: 1, a, b, a^2, ab, b^2.<jupyter_code>import numpy as np
from sklearn.preprocessing import PolynomialFeatures
X = np.arange(6).reshape(3, 2)
print(X)
# Specify the degree, and whether or not to include a bias column (the constant 1)
poly = PolynomialFeatures(degree=2 , include_bias = True)
poly.fit_transform(X)
# With interaction_only=True, only interaction terms such as a*b are kept, and powers of single features are dropped
poly = PolynomialFeatures(interaction_only=True)
poly.fit_transform(X)
<jupyter_output><empty_output>
<jupyter_start><jupyter_text># Lab 4: Functions and Visualizations
Welcome to Lab 4! This week, we'll learn about functions, table methods such as `apply`, and how to generate visualizations!
Recommended Reading:
* [Applying a Function to a Column](https://www.inferentialthinking.com/chapters/08/1/applying-a-function-to-a-column.html)
* [Visualizations](https://www.inferentialthinking.com/chapters/07/visualization.html)
First, set up the notebook by running the cell below.<jupyter_code>import numpy as np
from datascience import *
# These lines set up graphing capabilities.
import matplotlib
%matplotlib inline
import matplotlib.pyplot as plt
plt.style.use('fivethirtyeight')
import warnings
warnings.simplefilter('ignore', FutureWarning)
from ipywidgets import interact, interactive, fixed, interact_manual
import ipywidgets as widgets
# When you log-in please hit return (not shift + return) after typing in your email
from client.api.notebook import Notebook
ok = Notebook('lab04.ok')
_ = ok.submit()<jupyter_output>=====================================================================
Assignment: Functions and Visualizations
OK, version v1.18.1
=====================================================================
<jupyter_text>## 1. Defining functions
Let's write a very simple function that converts a proportion to a percentage by multiplying it by 100. For example, the value of `to_percentage(.5)` should be the number 50 (no percent sign).
A function definition has a few parts.
##### `def`
It always starts with `def` (short for **def**ine):
def
##### Name
Next comes the name of the function. Like other names we've defined, it can't start with a number or contain spaces. Let's call our function `to_percentage`:
def to_percentage
##### Signature
Next comes something called the *signature* of the function. This tells Python how many arguments your function should have, and what names you'll use to refer to those arguments in the function's code. A function can have any number of arguments (including 0!).
`to_percentage` should take one argument, and we'll call that argument `proportion` since it should be a proportion.
def to_percentage(proportion)
If we want our function to take more than one argument, we add a comma between each argument name. Note that if we had zero arguments, we'd still place the parentheses () after that name.
We put a **colon** after the signature to tell Python that the next indented lines are the body of the function. If you're getting a syntax error after defining a function, check to make sure you remembered the colon!
def to_percentage(proportion):
##### Documentation
Functions can do complicated things, so you should write an explanation of what your function does. For small functions, this is less important, but it's a good habit to learn from the start. Conventionally, Python functions are documented by writing an **indented** triple-quoted string:
def to_percentage(proportion):
"""Converts a proportion to a percentage."""
##### Body
Now we start writing code that runs when the function is called. This is called the *body* of the function and every line **must be indented with a tab**. Any lines that are *not* indented and left-aligned with the def statement are considered outside the function.
Some notes about the body of the function:
- We can write code that we would write anywhere else.
- We use the arguments defined in the function signature. We can do this because we assume that when we call the function, values are already assigned to those arguments.
- We generally avoid referencing variables defined *outside* the function. If you would like to reference variables outside of the function, pass them through as arguments!
Now, let's give a name to the number we multiply a proportion by to get a percentage:
def to_percentage(proportion):
"""Converts a proportion to a percentage."""
factor = 100
##### `return`
The special instruction `return` is part of the function's body and tells Python to make the value of the function call equal to whatever comes right after `return`. We want the value of `to_percentage(.5)` to be the proportion .5 times the factor 100, so we write:
def to_percentage(proportion):
"""Converts a proportion to a percentage."""
factor = 100
return proportion * factor
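Once defined this way, a call to the function evaluates to the proportion times 100, for example:

    to_percentage(.25)   # evaluates to 25.0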
`return` only makes sense in the context of a function, and **can never be used outside of a function**. `return` is always the last line of the function because Python stops executing the body of a function once it hits a `return` statement. If a function does not have a return statement, it will not return anything; if you expect a value back from the function, make sure to include a return statement.
*Note:* `return` inside a function tells Python what value the function evaluates to. However, there are other functions, like `print`, that have no `return` value. For example, `print` simply prints a certain value out to the console.
`return` and `print` are **very** different.
**Question 1.1.** Define `to_percentage` in the cell below. Call your function to convert the proportion .2 to a percentage. Name that percentage `twenty_percent`.
<!--
BEGIN QUESTION
name: q11
--><jupyter_code>def ...
'''...'''
... = ...
return ...
twenty_percent = ...
twenty_percent
ok.grade("q11");<jupyter_output><empty_output><jupyter_text>Like you’ve done with built-in functions in previous labs (max, abs, etc.), you can pass in named values as arguments to your function.
**Question 1.2.** Use `to_percentage` again to convert the proportion named `a_proportion` (defined below) to a percentage called `a_percentage`.
*Note:* You don't need to define `to_percentage` again! Like other named values, functions stick around after you define them.
<!--
BEGIN QUESTION
name: q12
--><jupyter_code>a_proportion = 2**(.5) / 2
a_percentage = ...
a_percentage
ok.grade("q12");<jupyter_output><empty_output><jupyter_text>Here's something important about functions: the names assigned *within* a function body are only accessible within the function body. Once the function has returned, those names are gone. So even if you created a variable called `factor` and defined `factor = 100` inside of the body of the `to_percentage` function and then called `to_percentage`, `factor` would not have a value assigned to it outside of the body of `to_percentage`:<jupyter_code># You should see an error when you run this. (If you don't, you might
# have defined factor somewhere above.)
factor<jupyter_output><empty_output><jupyter_text>As we've seen with built-in functions, functions can also take strings (or arrays, or tables) as arguments, and they can return those things, too.
**Question 1.3.** Define a function called `disemvowel`. It should take a single string as its argument. (You can call that argument whatever you want.) It should return a copy of that string, but with all the characters that are vowels removed. (In English, the vowels are the characters "a", "e", "i", "o", and "u".) You can use as many lines inside of the function to do this as you’d like.
*Hint:* To remove all the "a"s from a string, you can use `a_string.replace("a", "")`. The `.replace` method for strings returns a new string, so you can call `replace` multiple times, one after the other.
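For instance, chained calls remove one character at a time (a generic illustration with a made-up string, not the full solution to this question):

    "banana".replace("a", "").replace("n", "")   # evaluates to "b"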
<!--
BEGIN QUESTION
name: q13
--><jupyter_code>def disemvowel(a_string):
...
...
# An example call to your function. (It's often helpful to run
# an example call from time to time while you're writing a function,
# to see how it currently works.)
disemvowel("Can you read this without vowels?")
ok.grade("q13");<jupyter_output><empty_output><jupyter_text>##### Calls on calls on calls
Just as you write a series of lines to build up a complex computation, it's useful to define a series of small functions that build on each other. Since you can write any code inside a function's body, you can call other functions you've written.
If a function is like a recipe, defining a function in terms of other functions is like having a recipe for cake telling you to follow another recipe to make the frosting, and another to make the jam filling. This makes the cake recipe shorter and clearer, and it avoids having a bunch of duplicated frosting recipes. It's a foundation of productive programming.
For example, suppose you want to count the number of characters *that aren't vowels* in a piece of text. One way to do that is to remove all the vowels and count the size of the remaining string.
**Question 1.4.** Write a function called `num_non_vowels`. It should take a string as its argument and return a number. That number should be the number of characters in the argument string that aren't vowels. You should use the `disemvowel` function you wrote above inside of the `num_non_vowels` function.
*Hint:* The function `len` takes a string as its argument and returns the number of characters in it.
<!--
BEGIN QUESTION
name: q14
--><jupyter_code>def num_non_vowels(a_string):
"""The number of characters in a string, minus the vowels."""
...
# Try calling your function yourself to make sure the output is what
# you expect.
ok.grade("q14");<jupyter_output><empty_output><jupyter_text>Functions can also encapsulate code that *displays output* instead of computing a value. For example, if you call `print` inside a function, and then call that function, something will get printed.
The `movies_by_year` dataset in the textbook has information about movie sales from 1980 to 2015. Suppose you'd like to display the year with the 5th-highest total gross movie sales, printed in a human-readable way. You might do this:<jupyter_code>movies_by_year = Table.read_table("movies_by_year.csv")
rank = 5
fifth_from_top_movie_year = movies_by_year.sort("Total Gross", descending=True).column("Year").item(rank-1)
print("Year number", rank, "for total gross movie sales was:", fifth_from_top_movie_year)<jupyter_output><empty_output><jupyter_text>After writing this, you realize you also wanted to print out the 2nd and 3rd-highest years. Instead of copying your code, you decide to put it in a function. Since the rank varies, you make that an argument to your function.
**Question 1.5.** Write a function called `print_kth_top_movie_year`. It should take a single argument, the rank of the year (like 2, 3, or 5 in the above examples). It should print out a message like the one above.
*Note:* Your function shouldn't have a `return` statement.
<!--
BEGIN QUESTION
name: q15
--><jupyter_code>def print_kth_top_movie_year(k):
...
...
print(...)
# Example calls to your function:
print_kth_top_movie_year(2)
print_kth_top_movie_year(3)
ok.grade("q15");
# interact also allows you to pass in an array for a function argument. It will
# then present a dropdown menu of options.
_ = interact(print_kth_top_movie_year, k=np.arange(1, 10))<jupyter_output><empty_output><jupyter_text>### `print` is not the same as `return`
The `print_kth_top_movie_year(k)` function prints the total gross movie sales for the year that was provided! However, since we did not return any value in this function, we can not use it after we call it. Let's look at an example of another function that prints a value but does not return it.<jupyter_code>def print_number_five():
print(5)
print_number_five()<jupyter_output><empty_output><jupyter_text>However, if we try to use the output of `print_number_five()`, we see that the value `5` is printed but we get a TypeError when we try to add the number 2 to it!<jupyter_code>print_number_five_output = print_number_five()
print_number_five_output + 2<jupyter_output><empty_output><jupyter_text>It may seem that `print_number_five()` is returning a value, 5. In reality, it just displays the number 5 to you without giving you the actual value! If your function prints out a value without returning it and you try to use that value, you will run into errors, so be careful!
Explain to your neighbor how you might add a line of code to the `print_number_five` function (after `print(5)`) so that the code `print_number_five_output + 5` would result in the value `10`, rather than an error.
## 2. Functions and CEO Incomes
In this question, we'll look at the 2015 compensation of CEOs at the 100 largest companies in California. The data was compiled from a [Los Angeles Times analysis](http://spreadsheets.latimes.com/california-ceo-compensation/), and ultimately came from [filings](https://www.sec.gov/answers/proxyhtf.htm) mandated by the SEC from all publicly-traded companies. Two companies have two CEOs, so there are 102 CEOs in the dataset.
We've copied the raw data from the LA Times page into a file called `raw_compensation.csv`. (The page notes that all dollar amounts are in **millions of dollars**.)<jupyter_code>raw_compensation = Table.read_table('raw_compensation.csv')
raw_compensation<jupyter_output><empty_output><jupyter_text>We want to compute the average of the CEOs' pay. Try running the cell below.<jupyter_code>np.average(raw_compensation.column("Total Pay"))<jupyter_output><empty_output><jupyter_text>You should see a TypeError. Let's examine why this error occurred by looking at the values in the `Total Pay` column.
**Question 2.1.** Use the `type` function and set `total_pay_type` to the type of the first value in the "Total Pay" column.
<!--
BEGIN QUESTION
name: q21
--><jupyter_code>total_pay_type = ...
total_pay_type
ok.grade("q21");<jupyter_output><empty_output><jupyter_text>**Question 2.2.** You should have found that the values in the `Total Pay` column are strings. It doesn't make sense to take the average of string values, so we need to convert them to numbers if we want to do this. Extract the first value in `Total Pay`. It's Mark Hurd's pay in 2015, in *millions* of dollars. Call it `mark_hurd_pay_string`.
<!--
BEGIN QUESTION
name: q22
--><jupyter_code>mark_hurd_pay_string = ...
mark_hurd_pay_string
ok.grade("q22");<jupyter_output><empty_output><jupyter_text>**Question 2.3.** Convert `mark_hurd_pay_string` to a number of *dollars*.
Some hints, as this question requires multiple steps:
- The string method `strip` will be useful for removing the dollar sign; it removes a specified character from the start or end of a string. For example, the value of `"100%".strip("%")` is the string `"100"`.
- You'll also need the function `float`, which converts a string that looks like a number to an actual number.
- Finally, remember that the answer should be in dollars, not millions of dollars.
<!--
BEGIN QUESTION
name: q23
--><jupyter_code>mark_hurd_pay = ...
mark_hurd_pay
ok.grade("q23");<jupyter_output><empty_output><jupyter_text>To compute the average pay, we need to do this for every CEO. But that looks like it would involve copying this code 102 times.
This is where functions come in. First, we'll define a new function, giving a name to the expression that converts "total pay" strings to numeric values. Later in this lab, we'll see the payoff: we can call that function on every pay string in the dataset at once.
The next section of this lab explains how to define a function. For now, just fill in the ellipses in the cell below.
**Question 2.4.** Copy the expression you used to compute `mark_hurd_pay`, and use it as the return expression of the function below. But make sure you replace the specific `mark_hurd_pay_string` with the generic `pay_string` name specified in the first line in the `def` statement.
*Hint*: When dealing with functions, you should generally not be referencing any variable outside of the function. Usually, you want to be working with the arguments that are passed into it, such as `pay_string` for this function. If you're using `mark_hurd_pay_string` within your function, you're referencing an outside variable!
<!--
BEGIN QUESTION
name: q24
--><jupyter_code>def convert_pay_string_to_number(pay_string):
"""Converts a pay string like '$100' (in millions) to a number of dollars."""
...
ok.grade("q24");<jupyter_output><empty_output><jupyter_text>Running that cell doesn't convert any particular pay string. Instead, it creates a function called `convert_pay_string_to_number` that can convert *any* string with the right format to a number representing millions of dollars.
We can call our function just like we call the built-in functions we've seen. It takes one argument -- a string -- and it returns a float.<jupyter_code>convert_pay_string_to_number('$42')
convert_pay_string_to_number(mark_hurd_pay_string)
# We can also compute Safra Catz's pay in the same way:
convert_pay_string_to_number(raw_compensation.where("Name", are.containing("Safra")).column("Total Pay").item(0))<jupyter_output><empty_output><jupyter_text>So, what have we gained by defining the `convert_pay_string_to_number` function?
Well, without it, we'd have to copy the code `10**6 * float(some_pay_string.strip("$"))` each time we wanted to convert a pay string. Now we just call a function whose name says exactly what it's doing.## 3. `apply`ing functions
Defining a function is a lot like giving a name to a value with `=`. In fact, a function is a value just like the number 1 or the text "data"!
For example, we can make a new name for the built-in function `max` if we want:<jupyter_code>our_name_for_max = max
our_name_for_max(2, 6)<jupyter_output><empty_output><jupyter_text>The old name for `max` is still around:<jupyter_code>max(2, 6)<jupyter_output><empty_output><jupyter_text>Try just writing `max` or `our_name_for_max` (or the name of any other function) in a cell, and run that cell. Python will print out a (very brief) description of the function.<jupyter_code>max<jupyter_output><empty_output><jupyter_text>Now try writing `?max` or `?our_name_for_max` (or the name of any other function) in a cell, and run that cell. An information box with a longer description of the function should show up at the bottom of your screen.
*Note: You can also press Shift+Tab after clicking on a name to see similar information!*<jupyter_code>?our_name_for_max<jupyter_output><empty_output><jupyter_text>Let's look at what happens when we set `max` to a non-function value. You'll notice that a TypeError will occur when you try calling `max`. Things like integers and strings are not callable. Look out for any functions that might have been renamed when you encounter this type of error.<jupyter_code>max = 6
max(2, 6)
# This cell resets max to the built-in function. Just run this cell, don't change its contents
import builtins
max = builtins.max<jupyter_output><empty_output><jupyter_text>Why is this useful? Since functions are just values, it's possible to pass them as arguments to other functions. Here's a simple but not-so-practical example: we can make an array of functions.<jupyter_code>make_array(max, np.average, are.equal_to)<jupyter_output><empty_output><jupyter_text>**Question 3.1.** Make an array containing any 3 other functions you've seen. Call it `some_functions`.
<!--
BEGIN QUESTION
name: q31
--><jupyter_code>some_functions = ...
some_functions
ok.grade("q31");<jupyter_output><empty_output><jupyter_text>Working with functions as values can lead to some funny-looking code. For example, see if you can figure out why the following code works. Check your explanation with a neighbor.<jupyter_code>make_array(max, np.average, are.equal_to).item(0)(4, -2, 7)<jupyter_output><empty_output><jupyter_text>A more useful example of passing functions to other functions as arguments is the table method `apply`.
`apply` calls a function many times, once on *each* element in a column of a table. It produces an *array* of the results. Here we use `apply` to convert every CEO's pay to a number, using the function you defined:<jupyter_code>raw_compensation.apply(convert_pay_string_to_number, "Total Pay")<jupyter_output><empty_output><jupyter_text>Here's an illustration of what that did:
Note that we didn't write `raw_compensation.apply(convert_pay_string_to_number(), "Total Pay")` or `raw_compensation.apply(convert_pay_string_to_number("Total Pay"))`. We just passed the name of the function, with no parentheses, to `apply`, because all we want to do is let `apply` know the name of the function we'd like to use and the name of the column we'd like to use it on. `apply` will then call the function `convert_pay_string_to_number` on each value in the column for us!
**Question 3.2.** Using `apply`, make a table that's a copy of `raw_compensation` with one additional column called `Total Pay ($)`. That column should contain the result of applying `convert_pay_string_to_number` to the `Total Pay` column (as we did above). Call the new table `compensation`.
<!--
BEGIN QUESTION
name: q32
--><jupyter_code>compensation = raw_compensation.with_column(
"Total Pay ($)",
...
)
compensation
ok.grade("q32");<jupyter_output><empty_output><jupyter_text>Now that we have all the pays as numbers, we can learn more about them through computation.
**Question 3.3.** Compute the average total pay of the CEOs in the dataset.
<!--
BEGIN QUESTION
name: q33
--><jupyter_code>average_total_pay = ...
average_total_pay
ok.grade("q33");<jupyter_output><empty_output><jupyter_text>**Question 3.4.** Companies pay executives in a variety of ways: in cash, by granting stock or other equity in the company, or with ancillary benefits (like private jets). Compute the proportion of each CEO's pay that was cash. (Your answer should be an array of numbers, one for each CEO in the dataset.)
*Note:* When you answer this question, you'll encounter a red box appearing below your code cell that says something like `RuntimeWarning: invalid value encountered in true_divide`. Don't worry too much about the message. Warnings are raised by Python when it encounters an unusual condition in your code, but the condition is not severe enough to warrant throwing an error.
The warning below is Python's cryptic way of telling you that you're dividing a number by zero. If you extract the values in `Total Pay ($)` as an array, you'll see that the last element is 0.
<!--
BEGIN QUESTION
name: q34
--><jupyter_code>cash_proportion = ...
cash_proportion
ok.grade("q34");<jupyter_output><empty_output><jupyter_text>Check out the `% Change` column in `compensation`. It shows the percentage increase in the CEO's pay from the previous year. For CEOs with no previous year on record, it instead says "(No previous year)". The values in this column are *strings*, not numbers, so like the `Total Pay` column, it's not usable without a bit of extra work.
Given your current pay and the percentage increase from the previous year, you can compute your previous year's pay. For example, if your pay is $\$120$ this year, and that's an increase of 50% from the previous year, then your previous year's pay was $\frac{\$120}{1 + \frac{50}{100}}$, or \$80.
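As a quick sanity check of that formula in code (with hypothetical variable names):

```python
current_pay = 120        # this year's pay
percent_increase = 50    # % change from the previous year
previous_pay = current_pay / (1 + percent_increase / 100)
previous_pay             # 80.0
```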
**Question 3.5.** Create a new table called `with_previous_compensation`. It should be a copy of `compensation`, but with the "(No previous year)" CEOs filtered out, and with an extra column called `2014 Total Pay ($)`. That column should have each CEO's pay in 2014.
*Hint 1:* You can print out your results after each step to make sure you're on the right track.
*Hint 2:* We've provided a structure that you can use to get to the answer. However, if it's confusing, feel free to delete the current structure and approach the problem your own way!
<!--
BEGIN QUESTION
name: q35
--><jupyter_code># Definition to turn percent to number
def percent_string_to_num(percent_string):
"""Converts a percentage string to a number."""
return ...
# Compensation table where there is a previous year
having_previous_year = ...
# Get the percent changes as numbers instead of strings
# We're still working off the table having_previous_year
percent_changes = ...
# Calculate the previous year's pay
# We're still working off the table having_previous_year
previous_pay = ...
# Put the previous pay column into the having_previous_year table
with_previous_compensation = ...
with_previous_compensation
ok.grade("q35");<jupyter_output><empty_output><jupyter_text>**Question 3.6.** What was the average pay of these CEOs in 2014?
<!--
BEGIN QUESTION
name: q36
--><jupyter_code>average_pay_2014 = ...
average_pay_2014
ok.grade("q36");<jupyter_output><empty_output><jupyter_text>**Why is `apply` useful?**
For operations like arithmetic, or the functions in the NumPy library, you don't need to use `apply`, because they automatically work on each element of an array. But there are many things that don't. The string manipulation we did in today's lab is one example. Since you can write any code you want in a function, `apply` gives you total control over how you operate on data.## 4. Histograms
Earlier, we computed the average pay among the CEOs in our 102-CEO dataset. The average doesn't tell us everything about the amounts CEOs are paid, though. Maybe just a few CEOs make the bulk of the money, even among these 102.
We can use a *histogram* method to display the *distribution* of a set of numbers. The table method `hist` takes a single argument, the name of a column of numbers. It produces a histogram of the numbers in that column.
**Question 4.1.** Make a histogram of the total pay of the CEOs in `compensation`. Check with your neighbor or professor to make sure you have the right plot.<jupyter_code>...<jupyter_output><empty_output><jupyter_text>**Question 4.2.** How many CEOs made more than $30 million in total pay? Find the value using code, then check that the value you found is consistent with what you see in the histogram.
*Hint:* Use the table method `where` and the property `num_rows`.
<!--
BEGIN QUESTION
name: q42
--><jupyter_code>num_ceos_more_than_30_million_2 = ...
num_ceos_more_than_30_million_2
ok.grade("q42");<jupyter_output><empty_output><jupyter_text>Great job! You're finished with lab 4! Be sure to...
* **run all the tests** (the next cell has a shortcut for that),
* **Save and Checkpoint** from the File menu,
* **run the last cell to submit your work**,<jupyter_code># For your convenience, you can run this cell to run all the tests at once!
import os
_ = [ok.grade(q[:-3]) for q in os.listdir("tests") if q.startswith('q')]
_ = ok.submit()<jupyter_output><empty_output>
|
no_license
|
/lab04/lab04.ipynb
|
Peter-Jantsch/m121-sp21-lab
| 34 |
<jupyter_start><jupyter_text># List<jupyter_code># make a list with integer, float, string types and store it to var
var = [10 ,"Apple" ,"a+b", 5.5, "Ball" ]
# access individual items of list using square brackets [] in variable name
print(var[0])
print(var[1])
print(var[2])
# use negative indexing
print(var[-1])
print(var[-2])
# use len() to find length
brands = ["Coke", "Apple", "Google", "Microsoft", "Toyota"]
num_brands = len(brands)
print(num_brands)
# Slice a list in python using positive and negative indexing
fruits = ["Apple", "Banana", "Mango", "Grapes", "Orange"]
print(fruits[1:4])
print(fruits[ : 3])
print(fruits[-4:])
print(fruits[-3:-1])
# Modify elements of a list in python using index of the elements
fruits = ["Apple", "Banana", "Mango", "Grapes", "Orange"]
fruits[1] = "Pineapple"
fruits[-1] = "Guava"
print(fruits)
# Combine two lists using the + operator (assign the result back, since + returns a new list)
concat = [1, 2, 3]
concat = concat + [4, 5, 6]
print(concat)
# Repeat elements in a list using * operator
repeat = ["a"]
repeat = repeat * 5
print(repeat)
# Add an element in a list using append()
add = ['a', 'b', 'c']
add.append('d')
print(add)
# Remove elements of a list using del, remove() and pop()
colors = ["violet", "indigo", "blue", "green", "yellow", "orange", "red"]
del colors[4]
colors.remove("blue")
colors.pop(3)
print(colors)
# Declare a two-dimensional list in python using lists and accessing it using list index
multd = [[1,2], [3,4], [5,6], [7,8]]
print(multd[0])
print(multd[3])
print(multd[2][1])
print(multd[3][0])<jupyter_output><empty_output>
|
no_license
|
/python_list.ipynb
|
zhubaiyuan/exercise-book
| 1 |
<jupyter_start><jupyter_text># Implement a Queue - Using Two Stacks
Given the Stack class below, implement a Queue class using **two** stacks! Note, this is a "classic" interview problem. Use a Python list data structure as your Stack.<jupyter_code># Uses lists instead of your own Stack class.
stack1 = []
stack2 = []<jupyter_output><empty_output><jupyter_text>## Solution
Fill out your solution below:<jupyter_code>class Queue2Stacks(object):
def __init__(self):
# Two Stacks
self.instack = []
self.outstack = []
def enqueue(self,element):
self.instack.append(element)
def dequeue(self):
if not self.outstack:
while self.instack:
self.outstack.append(self.instack.pop())
return self.outstack.pop()
class MyQueue:
def __init__(self):
"""
Initialize your data structure here.
"""
self.instack = []
self.outstack = []
def __repr__(self):
return('instack {0}, outstack {1}'.format(str(self.instack), \
str(self.outstack)))
def push(self, x):
"""
Push element x to the back of queue.
:type x: int
:rtype: void
"""
self.instack.append(x)
def pop(self):
"""
Removes the element from in front of queue and returns that element.
:rtype: int
"""
self.peek()
return self.outstack.pop()
def peek(self):
"""
Get the front element.
:rtype: int
"""
if self.outstack == []:
while self.instack != []:
self.outstack.append(self.instack.pop())
last_peek = self.outstack[-1]
return last_peek
def empty(self):
"""
Returns whether the queue is empty.
:rtype: bool
"""
return not self.instack and not self.outstack
Testqueue = MyQueue()
Testqueue.push(1)
print(Testqueue)
Testqueue.push(2)
print(Testqueue)
Testqueue.peek()
print(Testqueue)
Testqueue.pop()
print(Testqueue)
Testqueue.peek()
print(Testqueue)
Testqueue.pop()
print(Testqueue)
Testqueue.empty()<jupyter_output>instack [1], outstack []
instack [1, 2], outstack []
instack [], outstack [2, 1]
instack [], outstack [2]
instack [], outstack [2]
instack [], outstack []
<jupyter_text># Test Your Solution
You should be able to tell with your current knowledge of Stacks and Queues if this is working as it should. For example, the following should print as such:<jupyter_code>"""
RUN THIS CELL TO CHECK THAT YOUR SOLUTION OUTPUT MAKES SENSE AND BEHAVES AS A QUEUE
"""
q = Queue2Stacks()
for i in range(5):
q.enqueue(i)
for i in range(5):
print(q.dequeue())<jupyter_output>0
1
2
3
4
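<jupyter_text>A brief note on efficiency: although a single `dequeue` (or `pop`) may move every element from `instack` to `outstack`, each element is pushed at most twice and popped at most twice over its lifetime, so the cost per operation is O(1) amortized.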
|
no_license
|
/Python_algo/Chapter13_QueuesDeques/Stacks, Queues, and Deques Interview Problems/Stacks, Queues, Deques Interview Questions/.ipynb_checkpoints/Implement a Queue -Using Two Stacks -checkpoint.ipynb
|
qy2205/MyUdemy
| 3 |
<jupyter_start><jupyter_text>#### To improve model performance, we need to split all wine types into three categories: good, fine, and bad<jupyter_code>reviews = []
for i in data['quality']:
if i >= 1 and i <= 3:
reviews.append('1')
elif i >= 4 and i <= 7:
reviews.append('2')
elif i >= 8 and i <= 10:
reviews.append('3')
data['Reviews'] = reviews
data.head()<jupyter_output><empty_output><jupyter_text>### Splitting Data
x = independent variables (the features)
y = dependent variable (the Reviews column is the only dependent variable)
**Notice:
you can select the last column with index -1**<jupyter_code>x = data.iloc[:, 0:-2].values
y = data.iloc[:, -1].values<jupyter_output><empty_output><jupyter_text>### Creating the Model
Here we create a pipeline with a StandardScaler step (1), a KernelPCA step (2), and a logistic regression model step (3); we will change it later anyway. ### A - Dimensionality Reduction
You can read more about it here: [Principal component analysis](https://en.wikipedia.org/wiki/Principal_component_analysis). Simply put, dimensionality reduction lets you reduce the number of dimensions (your column count) while keeping as much of the information in your data as possible.
We apply PCA and Kernel PCA, and plot the cumulative explained variance to choose the best number of components.<jupyter_code>sc_x=StandardScaler()
x = sc_x.fit_transform(x)
pca=PCA()
x_pca = pca.fit_transform(x)
plt.figure(figsize=(10,10))
plt.plot(np.cumsum(pca.explained_variance_ratio_), 'ro-')
plt.grid()
pca_new = PCA(n_components=8)
x_new = pca_new.fit_transform(x)
# Splitting the dataset into the Training set and Test set
X_train, X_test, y_train, y_test = train_test_split(x_new, y, test_size = 0.2, random_state = 1)
def classifier(model):
model.fit(X_train,y_train)
y_pred=model.predict(X_test)
score=accuracy_score(y_pred,y_test)
return score*100
classifier(KNeighborsClassifier(n_neighbors=100)),classifier(RandomForestClassifier(n_estimators=100)),classifier(LogisticRegression()),classifier(GaussianNB())
<jupyter_output><empty_output>
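<jupyter_text>The cells above rely on names (`StandardScaler`, `PCA`, `train_test_split`, `accuracy_score`, the four classifiers, `np`, `plt`, and the `data` frame) that are imported or created earlier in the full notebook and are not shown in this excerpt. A minimal set of imports that would make these cells runnable, assuming the standard scikit-learn locations:

```python
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import GaussianNB

# `data` is the wine-quality DataFrame loaded earlier in the original notebook,
# e.g. data = pd.read_csv("winequality.csv")   # hypothetical file name
```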
|
no_license
|
/notebooks/loaiabdalslam/classification-pca-kernel-99.ipynb
|
Sayem-Mohammad-Imtiaz/kaggle-notebooks
| 3 |
<jupyter_start><jupyter_text># Generate content<jupyter_code>wav_out = "/Users/tmiano/Documents/Projects/cvr/me_drilling-sound_171128.wav"
img_out = "/Users/tmiano/Documents/Projects/cvr/me_drilling-sound_171128.jpg"
sr=44100
duration = 4 # seconds
mic_recording = sd.rec(duration * sr, samplerate=sr, channels=2,dtype='float64')
print("Recording Audio")
sd.wait()
print("Audio recording complete , Play Audio")
sd.play(mic_recording, sr)
sd.wait()
print("Play Audio Complete")
wav.write(wav_out,sr,mic_recording)
print("Audio written.")
generate_spectrogram(mic_recording[:,0],sr,img_out)<jupyter_output><empty_output><jupyter_text># Review recordings<jupyter_code>wav_out = "/Users/tmiano/Documents/Projects/cvr/me_barking_171128.wav"
sr, myrecording = wav.read(wav_out)<jupyter_output><empty_output>
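<jupyter_text>These cells assume imports made earlier in the original notebook, plus a project-specific helper `generate_spectrogram` defined elsewhere. Judging from the calls used (`sd.rec`, `sd.play`, `sd.wait`, `wav.write`, `wav.read`), the imports were most likely:

```python
import sounddevice as sd          # assumed: microphone recording and playback
import scipy.io.wavfile as wav    # assumed: WAV file read/write
```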
|
no_license
|
/final_project/notebooks/sandbox/mic_audio_spectrogram.ipynb
|
thommiano/cvr_6501-009
| 2 |
<jupyter_start><jupyter_text># WeatherPy
----
### Analysis
* As expected, the weather becomes significantly warmer as one approaches the equator (0 Deg. Latitude). More interestingly, however, is the fact that the southern hemisphere tends to be warmer this time of year than the northern hemisphere. This may be due to the tilt of the earth.
* There is no strong relationship between latitude and cloudiness. However, it is interesting to see that a strong band of cities sits at 0, 80, and 100% cloudiness.
* There is no strong relationship between latitude and wind speed. However, in northern hemispheres there is a flurry of cities with over 20 mph of wind.
---
#### Note
* Instructions have been included for each segment. You do not have to follow them exactly, but they are included to help you think through the steps.<jupyter_code># Dependencies and Setup
import matplotlib.pyplot as plt
import pandas as pd
import numpy as np
import requests
import time
from random import uniform
from citipy import citipy
from datetime import datetime
import seaborn as sns
import urllib
from urllib.error import HTTPError
from datetime import datetime
# Import API key
from api_keys import api_key
# Incorporated citipy to determine city based on latitude and longitude
from citipy import citipy
# Output File (CSV)
output_data_file = "output_data/cities.csv"
# Range of latitudes and longitudes
lat_range = (-90, 90)
lng_range = (-180, 180)<jupyter_output><empty_output><jupyter_text>## Generate Cities List<jupyter_code># List for holding lat_lngs and cities
lat_lngs = []
cities = []
lat = []
lng = []
cloudiness = []
dt = []
temp = []
humidity = []
wind = []
country = []
# Create a set of random lat and lng combinations
lats = np.random.uniform(low=-90.000, high=90.000, size=1500)
lngs = np.random.uniform(low=-180.000, high=180.000, size=1500)
lat_lngs = zip(lats, lngs)
# Identify nearest city for each lat, lng combination
for lat_lng in lat_lngs:
city = citipy.nearest_city(lat_lng[0], lat_lng[1]).city_name
# If the city is unique, then add it to a our cities list
if city not in cities:
cities.append(city)
# Print the city count to confirm sufficient count
len(cities)<jupyter_output><empty_output><jupyter_text>### Perform API Calls
* Perform a weather check on each city using a series of successive API calls.
* Include a print log of each city as it's being processed (with the city number and city name).
<jupyter_code>api_key
url = "http://api.openweathermap.org/data/2.5/weather?"
units = "imperial"
# query URL
query_url = url + "appid=" + api_key + "&units=" + units + "&q="
print("Beginning Data Retrival")
print("------------------------------")
i = 1
j = 1
weather_df = {"City":[],"Cloudiness":[],"Country":[],"Date":[],"Humidity":[],"Lat":[],"Lng":[],"Max Temp":[],"Wind Speed": []}
for city in cities:
response = requests.get(query_url + city)
response_json = response.json()
if response.status_code == 200:
weather_df["City"].append(city)
weather_df["Cloudiness"].append(response_json['clouds']['all'])
weather_df["Country"].append(response_json['sys']['country'])
weather_df["Date"].append(response_json['dt'])
weather_df["Humidity"].append(response_json['main']['humidity'])
weather_df["Lat"].append(response_json['coord']['lat'])
weather_df["Lng"].append(response_json['coord']['lon'])
weather_df["Max Temp"].append(response_json['main']['temp_max'])
weather_df["Wind Speed"].append(response_json['wind']['speed'])
if j <= 50:
print(f"Processing Record {j} of Set {i} | {city}")
j = j + 1
else:
j = 0
i = i + 1
print(f"Processing Record {j} of Set {i} | {city}")
j = j + 1
print("City not found. Skipping...")
print("-------------------------")
print("Data Retrieval Complete")
print("-------------------------")
<jupyter_output>Beginning Data Retrieval
------------------------------
Processing Record 1 of Set 1 | rocha
City not found. Skipping...
Processing Record 2 of Set 1 | ushuaia
City not found. Skipping...
Processing Record 3 of Set 1 | cape town
City not found. Skipping...
Processing Record 4 of Set 1 | lebu
City not found. Skipping...
Processing Record 5 of Set 1 | hilo
City not found. Skipping...
Processing Record 6 of Set 1 | punta arenas
City not found. Skipping...
Processing Record 7 of Set 1 | ocos
City not found. Skipping...
Processing Record 8 of Set 1 | bluff
City not found. Skipping...
Processing Record 9 of Set 1 | touros
City not found. Skipping...
Processing Record 10 of Set 1 | hami
City not found. Skipping...
Processing Record 11 of Set 1 | tuatapere
City not found. Skipping...
Processing Record 12 of Set 1 | hasaki
City not found. Skipping...
Processing Record 13 of Set 1 | talnakh
City not found. Skipping...
Processing Record 14 of Set 1 | thompson
City not found. Skipping...
Processing[...]<jupyter_text>### Convert Raw Data to DataFrame
* Export the city data into a .csv.
* Display the DataFrame<jupyter_code>weather_df = pd.DataFrame(weather_df)
weather_df.head()
weather_df.count()
# convert data to csv
weather_df.to_csv('weather_df.csv', encoding='utf-8', index=False)
weather_df.shape<jupyter_output><empty_output><jupyter_text>### Plotting the Data
* Use proper labeling of the plots using plot titles (including date of analysis) and axes labels.
* Save the plotted figures as .pngs.#### Latitude vs. Temperature Plot<jupyter_code>plt.scatter(weather_df["Lat"],weather_df["Max Temp"], marker="o", alpha=0.8, edgecolor="black", color='blue')
plt.title("City Latitude vs. Max Temperature (08/22/18)")
plt.xlabel("Latitude")
plt.ylabel("Max Temperature (F)")
plt.ylim(0,104)
plt.grid(True)
plt.savefig("LatVsTemp.png")
plt.show()<jupyter_output><empty_output><jupyter_text>#### Latitude vs. Humidity Plot<jupyter_code>plt.scatter(weather_df["Lat"],weather_df["Humidity"], marker="o", alpha=0.8, edgecolor="black", color='blue')
plt.title("City Latitude vs. Humidity (08/22/18)")
plt.xlabel("Latitude")
plt.ylabel("Humidity (%)")
plt.ylim(0,104)
plt.grid(True)
plt.savefig("LatVsHumidity.png")
plt.show()<jupyter_output><empty_output><jupyter_text>#### Latitude vs. Cloudiness Plot<jupyter_code>plt.scatter(weather_df["Lat"],weather_df["Cloudiness"], marker="o", alpha=0.8, edgecolor="black", color='blue')
plt.title("City Latitude vs. Cloudiness (08/22/18)")
plt.xlabel("Latitude")
plt.ylabel("Cloudiness (%)")
plt.ylim(0,104)
plt.grid(True)
plt.savefig("LatVsCloudiness.png")
plt.show()<jupyter_output><empty_output><jupyter_text>#### Latitude vs. Wind Speed Plot<jupyter_code>plt.scatter(weather_df["Lat"],weather_df["Wind Speed"], marker="o", alpha=0.8, edgecolor="black", color='blue')
plt.title("City Latitude vs. Wind Speed (08/22/18)")
plt.xlabel("Latitude")
plt.ylabel("Wind Speed (mph)")
plt.ylim(0,38)
plt.grid(True)
plt.savefig("LatVsWindSpeed.png")
plt.show()<jupyter_output><empty_output>
|
no_license
|
/starter_code/WeatherPy.ipynb
|
nebajuinpou/Weather_challenge_Python
| 8 |
<jupyter_start><jupyter_text>## Dimensionality Reduction### PCA<jupyter_code>from sklearn.decomposition import PCA
import numpy as np
X = np.random.rand(100, 3)
y = 4 + 3 * X.dot(X.T) + np.random.randn(100, 1)
pca = PCA(n_components = 2)
X2D = pca.fit_transform(X)
print(pca.explained_variance_ratio_)
pca = PCA()
pca.fit(X)
cumsum = np.cumsum(pca.explained_variance_ratio_)
d = np.argmax(cumsum >= 0.95) + 1
print(d)
pca = PCA(n_components=0.95)
X_reduced = pca.fit_transform(X)
%matplotlib inline
import matplotlib
import matplotlib.pyplot as plt
plt.plot(cumsum)
plt.show()
from sklearn.datasets import fetch_mldata
mnist = fetch_mldata('MNIST original')
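# Note: fetch_mldata and the mldata.org service it relied on are no longer available
# in recent scikit-learn releases. If the line above fails, a roughly equivalent way
# to load MNIST is via OpenML:
#     from sklearn.datasets import fetch_openml
#     mnist = fetch_openml('mnist_784', version=1, as_frame=False)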
X, y = mnist["data"], mnist["target"]
pca = PCA()
pca.fit(X)
cumsum = np.cumsum(pca.explained_variance_ratio_)
d = np.argmax(cumsum >= 0.95) + 1
plt.plot(cumsum)
plt.show()
pca = PCA(n_components = 154)
X_mnist_reduced = pca.fit_transform(X)
X_mnist_recovered = pca.inverse_transform(X_mnist_reduced)
# Incremental PCA.
from sklearn.decomposition import IncrementalPCA
n_batches = 100
inc_pca = IncrementalPCA(n_components=154)
for X_batch in np.array_split(X, n_batches):
inc_pca.partial_fit(X_batch)
X_mnist_reduced = inc_pca.transform(X)
# Randomized PCA.
rnd_pca = PCA(n_components=154, svd_solver="randomized")
X_reduced = rnd_pca.fit_transform(X)<jupyter_output><empty_output><jupyter_text>## Kernel PCA<jupyter_code>from sklearn.decomposition import KernelPCA
X = np.random.rand(100, 3)
y = 4 + 3 * X.dot(X.T) + np.random.randn(100, 1)
rbf_pca = KernelPCA(n_components = 2, kernel="rbf", gamma=0.04)
X_reduced = rbf_pca.fit_transform(X)
X_mnist, y_mnist = mnist["data"], mnist["target"]
shuffled_indices = np.random.permutation(len(X_mnist))[:40]
X = np.random.rand(100, 2)
y = np.random.rand(100, 1) > 0.5
y.shape
from sklearn.model_selection import GridSearchCV
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline
clf = Pipeline([
("kpca", KernelPCA(n_components=2)),
("log_reg", LogisticRegression())
])
param_grid = [{
"kpca__gamma": np.linspace(0.03, 0.05, 10),
"kpca__kernel": ["rbf", "sigmoid"]
}]
grid_search = GridSearchCV(clf, param_grid, cv=3)
grid_search.fit(X, y)
print(grid_search.best_params_)<jupyter_output>{'kpca__gamma': 0.03, 'kpca__kernel': 'rbf'}
<jupyter_text>## LLE<jupyter_code>from sklearn.manifold import LocallyLinearEmbedding
from sklearn import datasets
X, y = datasets.make_swiss_roll()
lle = LocallyLinearEmbedding(n_components=2, n_neighbors=10)
X_reduced = lle.fit_transform(X)<jupyter_output><empty_output>
|
no_license
|
/notebooks/dimensionality-reduction.ipynb
|
Modest-as/machine-learning-exercises
| 3 |
<jupyter_start><jupyter_text># Exercise 02 - OLAP Cubes - Grouping SetsAll the databases table in this demo are based on public database samples and transformations
- `Sakila` is a sample database created by `MySql` [Link](https://dev.mysql.com/doc/sakila/en/sakila-structure.html)
- The postgresql version of it is called `Pagila` [Link](https://github.com/devrimgunduz/pagila)
- The facts and dimension tables design is based on O'Reilly's public dimensional modelling tutorial schema [Link](http://archive.oreilly.com/oreillyschool/courses/dba3/index.html)
Start by connecting to the database by running the cells below. If you are coming back to this exercise, then uncomment and run the first cell to recreate the database. If you recently completed the slicing and dicing exercise, then skip to the second cell.<jupyter_code> !PGPASSWORD=student createdb -h 127.0.0.1 -U student pagila_star
!PGPASSWORD=student psql -q -h 127.0.0.1 -U student -d pagila_star -f Data/pagila-star.sql<jupyter_output> set_config
------------
(1 row)
setval
--------
200
(1 row)
setval
--------
605
(1 row)
setval
--------
16
(1 row)
setval
--------
600
(1 row)
setval
--------
109
(1 row)
setval
--------
599
(1 row)
setval
--------
1
(1 row)
setval
--------
1
(1 row)
setval
--------
1
(1 row)
setval
--------
1
(1 row)
setval
--------
16049
(1 row)
setval
--------
1000
(1 row)
setval
--------
4581
(1 row)
setval
--------
6
(1 row)
setval
--------
32098
(1 row)
setval
--------
16049
(1 row)
setval
--------
2
(1 row)
setval
--------
2
(1 row)
<jupyter_text>### Connect to the local database where Pagila is loaded<jupyter_code>import sql
%reload_ext sql
DB_ENDPOINT = "127.0.0.1"
DB = 'pagila_star'
DB_USER = 'student'
DB_PASSWORD = 'student'
DB_PORT = '5432'
# postgresql://username:password@host:port/database
conn_string = "postgresql://{}:{}@{}:{}/{}" \
.format(DB_USER, DB_PASSWORD, DB_ENDPOINT, DB_PORT, DB)
print(conn_string)
%sql $conn_string<jupyter_output>postgresql://student:[email protected]:5432/pagila_star
<jupyter_text>### Star Schema# Grouping Sets
- It happens often that for 3 dimensions, you want to aggregate a fact:
- by nothing (total)
- then by the 1st dimension
- then by the 2nd
- then by the 3rd
- then by the 1st and 2nd
- then by the 2nd and 3rd
- then by the 1st and 3rd
- then by the 1st and 2nd and 3rd
- Since this is very common, and in all cases, we are iterating through all the fact table anyhow, there is a more clever way to do that using the SQL grouping statement "GROUPING SETS" ## Total Revenue
TODO: Write a query that calculates total revenue (sales_amount)<jupyter_code>%%sql
SELECT SUM(fs.sales_amount) as revenue
FROM factSales fs<jupyter_output> * postgresql://student:***@127.0.0.1:5432/pagila_star
1 rows affected.
<jupyter_text>## Revenue by Country
TODO: Write a query that calculates total revenue (sales_amount) by country<jupyter_code>%%sql
SELECT ds.country, SUM(fs.sales_amount) as revenue
FROM factSales fs
JOIN dimstore ds ON fs.store_key = ds.store_key
GROUP BY ds.country<jupyter_output> * postgresql://student:***@127.0.0.1:5432/pagila_star
2 rows affected.
<jupyter_text>## Revenue by Month
TODO: Write a query that calculates total revenue (sales_amount) by month<jupyter_code>%%sql
SELECT dd.month, SUM(fs.sales_amount) as revenue
FROM factSales fs
JOIN dimdate dd ON fs.date_key = dd.date_key
GROUP BY dd.month
ORDER BY dd.month asc, revenue DESC<jupyter_output> * postgresql://student:***@127.0.0.1:5432/pagila_star
5 rows affected.
<jupyter_text>## Revenue by Month & Country
TODO: Write a query that calculates total revenue (sales_amount) by month and country. Sort the data by month, country, and revenue in descending order. The first few rows of your output should match the table below.<jupyter_code>%%sql
SELECT dd.month, ds.country, SUM(fs.sales_amount) as revenue
FROM factSales fs
JOIN dimdate dd ON fs.date_key = dd.date_key
JOIN dimstore ds ON fs.store_key = ds.store_key
GROUP BY dd.month, ds.country
ORDER BY dd.month asc, ds.country asc, revenue DESC<jupyter_output> * postgresql://student:***@127.0.0.1:5432/pagila_star
10 rows affected.
<jupyter_text>
| month | country   | revenue  |
|-------|-----------|----------|
| 1     | Australia | 2364.19  |
| 1     | Canada    | 2460.24  |
| 2     | Australia | 4895.10  |
| 2     | Canada    | 4736.78  |
| 3     | Australia | 12060.33 |
## Revenue Total, by Month, by Country, by Month & Country All in one shot
TODO: Write a query that calculates total revenue at the various grouping levels done above (total, by month, by country, by month & country) all at once using the grouping sets function. Your output should match the table below.<jupyter_code>%%time
%%sql
SELECT dd.month, ds.country, SUM(fs.sales_amount) as revenue
FROM factSales fs
JOIN dimdate dd ON fs.date_key = dd.date_key
JOIN dimstore ds ON fs.store_key = ds.store_key
GROUP BY grouping sets ((), dd.month, ds.country, (dd.month, ds.country))<jupyter_output> * postgresql://student:***@127.0.0.1:5432/pagila_star
18 rows affected.
CPU times: user 4.19 ms, sys: 686 µs, total: 4.88 ms
Wall time: 34.5 ms
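<jupyter_text>When reading the combined result, a NULL in `month` or `country` marks the subtotal rows for the grouping set that aggregated that dimension away, and the row with NULL in both columns is the grand total; PostgreSQL's `GROUPING()` function can be added to the SELECT list if you want to label those rows explicitly.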
|
no_license
|
/course_DE/Udacity-Data-Engineering-master/Cloud Data Warehouse/L1 E2 - Grouping Sets.ipynb
|
yennanliu/analysis
| 7 |
<jupyter_start><jupyter_text>EOS 491/526 Assignment #1
Daniel Scanks
V00788200
Question #1<jupyter_code>import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
n1= np.random.uniform(-1,1,200000) # 1 uniformly distributed random variable on [-1,1] with mean and std dev calculated
mean = np.mean(n1)
std = np.std(n1)
print('mean, standard deviation')
print (mean,std)
gaussian = np.random.normal(mean,std,200000)
plt.hist(gaussian, bins=50, histtype='step', normed=True, color='r', label='Gaussian')
plt.hist(n1, bins=50, histtype='stepfilled', normed=True, color='b', alpha=0.5, label='Uniform Distribution') #histogram
plt.title("Gaussian/Uniform Histogram")
plt.xlabel("Value")
plt.ylabel("Probability")
plt.legend()
plt.show()
n1=np.random.uniform(-1,1,200000) # 2 uniformly distributed variables added w/ same calculations and plot
n2=np.random.uniform(-1,1,200000)
n= n1+n2
mean = np.mean(n)
std = np.std(n)
print('mean, standard deviation')
print (mean,std)
gaussian = np.random.normal(mean,std,200000)
plt.hist(gaussian, bins=50, histtype='step', normed=True, color='r', label='Gaussian')
plt.hist(n, bins=50, histtype='stepfilled', normed=True, color='b', alpha=0.5, label='Uniform Distribution')
plt.title("Gaussian/Uniform Histogram")
plt.xlabel("Value")
plt.ylabel("Probability")
plt.legend()
plt.show()
n1=np.random.uniform(-1,1,200000) # 5 uniformly distributed variables added w/ same calculations and plot
n2=np.random.uniform(-1,1,200000)
n3=np.random.uniform(-1,1,200000)
n4=np.random.uniform(-1,1,200000)
n5=np.random.uniform(-1,1,200000)
n= n1+n2+n3+n4+n5
mean = np.mean(n)
std = np.std(n)
print('mean, standard deviation')
print (mean,std)
gaussian = np.random.normal(mean,std,200000)
plt.hist(gaussian, bins=50, histtype='step', normed=True, color='r', label='Gaussian')
plt.hist(n, bins=50, histtype='stepfilled', normed=True, color='b', alpha=0.5, label='Uniform Distribution')
plt.title("Gaussian/Uniform Histogram")
plt.xlabel("Value")
plt.ylabel("Probability")
plt.legend()
plt.show()
<jupyter_output>mean, standard deviation
(0.0008137093395222234, 0.57696430269689469)
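<jupyter_text>A note for anyone re-running this notebook with a recent Matplotlib: the `normed` keyword used in the `plt.hist` calls above has been removed in newer releases, and `density=True` is the drop-in replacement, e.g.

```python
plt.hist(gaussian, bins=50, histtype='step', density=True, color='r', label='Gaussian')
```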
|
no_license
|
/.ipynb_checkpoints/EOS491Assignment#1-checkpoint.ipynb
|
scanks/EOS491Assignments
| 1 |
<jupyter_start><jupyter_text>
## [mlcourse.ai](https://mlcourse.ai) – Open Machine Learning Course
Authors: [Vitaliy Radchenko](https://www.linkedin.com/in/vitaliyradchenk0/), [Yury Kashnitsky](https://yorko.github.io), and Mikalai Parshutsich. Translated and edited by [Christina Butsko](https://www.linkedin.com/in/christinabutsko/), Artem Gruzdev, [Egor Polusmak](https://www.linkedin.com/in/egor-polusmak/), [Anastasia Manokhina](https://www.linkedin.com/in/anastasiamanokhina/), [Anna Shirshova](http://linkedin.com/in/anna-shirshova-b908458b), and [Yuanyuan Pao](https://www.linkedin.com/in/yuanyuanpao/). This material is subject to the terms and conditions of the [Creative Commons CC BY-NC-SA 4.0](https://creativecommons.org/licenses/by-nc-sa/4.0/) license. Free use is permitted for any non-commercial purpose.
*This is a static version of a Jupyter notebook. You can also check out the latest version in the [course repository](https://github.com/Yorko/mlcourse.ai), the corresponding interactive web-based [Kaggle Notebook](https://www.kaggle.com/kashnitsky/topic-5-ensembles-part-3-feature-importance) or a [video lecture](https://www.youtube.com/watch?v=neXJL-AqI_c).*# Topic 5. Ensembles and random forest
## Part 3. Feature importance## Article outline
1. [Intuition](#1.-Intuition)
2. [Illustrating permutation importance](#2.-Illustrating-permutation-importance)
3. [Sklearn Random Forest Feature Importance](#3.-Sklearn-Random-Forest-Feature-Importance)
4. [Practical example](#4.-Practical-example)
5. [Demo assignment](#5.-Demo-assignment)
6. [Useful resources](#6.-Useful-resources)
It's quite often that you want to make out the exact reasons of the algorithm outputting a particular answer. Or at the very least to find out which input features contributed most to the result. With Random Forest, you can obtain such information quite easily.## 1. Intuition
From the picture below, it is intuitively clear that, in our credit scoring problem, *Age* is much more important than *Income*. This can be formally explained using the concept of *information gain*.
In the case of many decision trees or a random forest, the closer the mean position of a feature over all the trees to the root, the more significant it is for a given classification or regression problem. Gains in the splitting criterion, such as the *Gini impurity*, obtained at each optimal split in every tree is a measure of importance that is directly associated with the splitting feature. The value of this score is distinct for each feature and accumulates over all the trees.
Let's go a little deeper into the details.
There exist many methods to assess feature importances. Leo Breiman in his works suggested evaluating the importance of a variable either by measuring the decrease in accuracy of the forest when the variable is randomly permuted, or by the decrease in impurity of the nodes where the given variable is used for splitting. The former method is often called **permutation importance**. The latter method is used in `sklearn`.### Permutation importance
Inspired by [this](https://www.researchgate.net/publication/5231126_Conditional_Variable_Importance_for_Random_Forests) article.
The average reduction in accuracy caused by a variable is determined during the calculation of the out-of-bag error. The greater the reduction in accuracy due to an exclusion or permutation of the variable, the higher its *importance score*. For this reason, variables with a greater average reduction in accuracy are generally more significant for classification.
The rationale for calculating permutation importance is the following: by randomly permuting the predictor variable $X_j$, its original association with the response $Y$ is broken. When the permuted variable $X_j$, together with all the other non-permuted variables, is used to predict the response for the out-of-bag observations, the prediction *accuracy* decreases substantially if the original $X_j$ was associated with the response. Thus, the difference in prediction accuracy before and after permuting is used as a measure of variable importance. More formally: denote $\overline{\mathfrak{B}}^{(t)}$ as the out-of-bag sample for tree $t$, for $t\in\{1, ..., N\}$, where $N$ is the number of trees in the ensemble. Then the permutation importance of variable $X_j$ in tree $t$ is
$${PI}^{(t)}\left(X_j\right)=\frac{\sum_{i\in\overline{\mathfrak{B}}^{(t)}}I\left(y_i=\hat{y}_i^{(t)}\right)}{\left|\overline{\mathfrak{B}}^{(t)}\right|}-\frac{\sum_{i\in\overline{\mathfrak{B}}^{(t)}}I\left(y_i=\hat{y}_{i,\pi_j}^{(t)}\right)}{\left|\overline{\mathfrak{B}}^{(t)}\right|}$$
where $\hat{y}_i^{(t)}=f^{(t)}(\mathbf{x}_i)$ is the predicted class for observation $i$ before and $\hat{y}_{i, \pi_j}^{(t)}=f^{(t)}(\mathbf{x}_{i,\pi_j})$ is the predicted class for observation $i$ after permuting $X_j$, $\mathbf{x}_{i,\pi_j}=\left(x_{i,1}, ..., x_{i,j-1},x_{\pi_j(i),j},x_{i,j+1},...,x_{i,p}\right)$
Note that by definition ${PI}^{(t)}=0$, if variable $X_j$ isn't in tree $t$.
Now, we can give the feature importance calculation for ensembles:
* not normalized:
$${PI}\left(X_j\right)=\frac{\sum_{t=1}^N {PI}^{(t)}(X_j)}{N}$$
* normalized by the standard deviation of the differences:
$$z_j=\frac{{PI}\left(X_j\right)}{\frac{\hat{\sigma}}{\sqrt{N}}}$$## 2. Illustrating permutation importance
Let's assume that we have a toy dataset with 10 instances. Target variable can be either **'N'** or **'P'**.
$$\begin{array}{c|c|c|c|c|c|c|c|c|c}
\text{Instances}, i & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 & 10 & \\
\hline
y_i & N & P & P & N & N & P & N & N & N & P \\
\end{array}$$
We build an ensemble of 5 trees $t$, for $t\in\{1, ..., 5\}$. For each tree we get out-of-bag sample (denoted $\overline{\mathfrak{B}}^{(t)}$ above). For example for the first tree out-of-bag sample consists of instances # 2, 4, 5, and 6.
$$\begin{array}{c|c|c|c|c|c|c|c|c|c}
\text{Tree 1} & \text{Bootstrap-sample 1} & 10 & 9 & 7 & 8 & 1 & 3 & 9 & 10 & 10 & 7\\
\hline
\text{Tree 2} & \text{Bootstrap-sample 2} & 4 & 8 & 5 & 8 & 3 & 9 & 2 & 6 & 1 & 6\\
\hline
\text{Tree 3} & \text{Bootstrap-sample 3} & 6 & 2 & 6 & 10 & 2 & 10 & 3 & 6 & 5 & 1\\
\hline
\text{Tree 4} & \text{Bootstrap-sample 4} & 6 & 7 & 8 & 10 & 6 & 10 & 9 & 10 & 8 & 2\\
\hline
\text{Tree 5} & \text{Bootstrap-sample 5} & 5 & 8 & 1 & 8 & 5 & 7 & 10 & 1 & 10 & 9\\
\end{array}$$
Thus, out-of-bag samples for each tree $t$ are
$$\begin{array}{c|cccc}
\text{Tree}, t & \overline{\mathfrak{B}}^{(t)} \\
\hline
\text{Tree 1} & 2 & 4 & 5 & 6\\
\hline
\text{Tree 2} & 7 & 10\\
\hline
\text{Tree 3} & 4 & 7 & 8 & 9\\
\hline
\text{Tree 4} & 1 & 3 & 4 & 5\\
\hline
\text{Tree 5} & 2 & 3 & 4 & 6\\
\hline
\end{array}$$Suppose that we have four features $X_j$, $j\in\{1, 2, 3, 4\}$ and we'd like to compute _permutation importance_ for $X_2$. First, for each out-of-bag sample we compute _accuracy_ of the model before and after permutation of the values of $X_2$.
For instance, before permutation for $\overline{\mathfrak{B}}^{(1)}$ we have
$$\begin{array}{c|cccc|cc|c}
& X_1 & \color{red}{X_2} & X_3 & X_4 & y_i & \hat{y}_i & I\left(y_i=\hat{y}_i\right)\\
\hline
\textbf{2} & 1 & \color{red}2 & 11 & 101 & \textbf{P} & \textbf{P} & 1\\
\hline
\textbf{4} & 2 & \color{red}3 & 12 & 102 & \textbf{N} & \textbf{P} & 0\\
\hline
\textbf{5} & 3 & \color{red}5 & 13 & 103 & \textbf{N} & \textbf{N} & 1\\
\hline
\textbf{6} & 4 & \color{red}7 & 14 & 104 & \textbf{P} & \textbf{P} & 1\\
\end{array}$$
Thus, the accuracy before permutation is $3/4=0.75$.
After permutation for $\overline{\mathfrak{B}}^{(1)}$ we have
$$\begin{array}{c|cccc|cc|c}
& X_1 & \color{red}{X_2} & X_3 & X_4 & y_i & \hat{y}_i & I\left(y_i=\hat{y}_i\right)\\
\hline
\textbf{2} & 1 & \color{red}5 & 11 & 101 & \textbf{P} & \textbf{P} & 0\\
\hline
\textbf{4} & 2 & \color{red}7 & 12 & 102 & \textbf{N} & \textbf{P} & 0\\
\hline
\textbf{5} & 3 & \color{red}2 & 13 & 103 & \textbf{N} & \textbf{N} & 1\\
\hline
\textbf{6} & 4 & \color{red}3 & 14 & 104 & \textbf{P} & \textbf{P} & 1\\
\end{array}$$
The accuracy after permutation is $2/4=0.50$.
Then the difference between accuracies is computed.
The above mentioned steps are to be done for each out-of-bag sample $\overline{\mathfrak{B}}^{(t)}$. To get not normalized _permutation importance_ we sum all computed differences and divide by the number of trees. Normalization is done by dividing _not normalized permutation importance_ by standard error.## 3. Sklearn Random Forest Feature Importance
Inspired by [this](https://medium.com/@srnghn/the-mathematics-of-decision-trees-random-forest-and-feature-importance-in-scikit-learn-and-spark-f2861df67e3) article.
Sklearn library uses another approach to determine feature importance. The rationale for that method is that the more gain in information the node (with splitting feature $X_j$) provides, the higher its importance.
The average reduction in the Gini impurity – or MSE for regression – represents the contribution of each feature to the homogeneity of nodes and leaves in the resulting Random Forest model. Each time a selected feature is used for splitting, the Gini impurity of the child nodes is calculated and compared with that of the original node.
Gini impurity is a score of homogeneity with the range from 0 (homogeneous) to 1 (heterogeneous). The changes in the value of the splitting criterion are accumulated for each feature and normalized at the end of the calculation. A higher reduction in the Gini impurity signals that splitting results by this feature results in nodes with higher purity.
The algorithm of obtaining feature importance may be represented with the following sequence of steps:
1. For each tree $t$ in ensemble $t\in\{1,...,N\}$:
1.1. for each node $i$ calculate the reduction in impurity (or MSE, or entropy) as ${RI}_i^{(t)}=w_i^{(t)}\cdot I_i^{(t)} - w_{LEFT_i}^{(t)}\cdot I_{LEFT_i}^{(t)}-w_{RIGHT_i}^{(t)}\cdot I_{RIGHT_i}^{(t)}$, where:
- $w_i^{(t)}$, $w_{LEFT_i}^{(t)}$, and $w_{RIGHT_i}^{(t)}$ are respectively weighted number of samples reaching node $i$ in tree $t$, and its left $LEFT_i$ and right $RIGHT_i$ children
- $I_i^{(t)}$, $I_{LEFT_i}^{(t)}$, $I_{RIGHT_i}^{(t)}$ are impurity of the nodes. For leaves ${RI}_i^{(t)}$ is equal to 0.
1.2. for each feature $j$ calculate its importance in that particular tree as
$${FI}_j^{(t)}=\frac{\sum_{i:\text{node }i\text{ splits on feature } j}{RI}_i^{(t)}}{\sum_{i\in\text{all nodes}}{RI}_i^{(t)}}$$
That means that in the numerator we sum the reduction in impurity only over the nodes that split on feature $j$.
2. Calculate the average feature importances over all trees in ensemble:
$${FI}_j=\frac{\sum_{t=1}^N {FI}_j^{(t)}}{N}$$Those are pretty confusing formulas so let's demonstrate each step with the Iris Dataset.<jupyter_code>import numpy as np
import pandas as pd
from sklearn.datasets import load_iris
import matplotlib.pyplot as plt
import seaborn as sns
iris = load_iris()
data = iris['data']
target = iris['target']
data = pd.DataFrame(data, columns=iris['feature_names'])
data.head()<jupyter_output><empty_output><jupyter_text>Since our aim is just to demonstrate the sequence of steps in calculating feature importances we'll transform the `target` variable as for classifying Iris Virginica One-To-All.<jupyter_code>target = pd.Series(target).map({0: 0, 1: 0, 2: 1})<jupyter_output><empty_output><jupyter_text>Creating Random Forest. For reproducibility, we set `random_state=17`. For the sake of simplicity we set the number of trees to 3 and limit the depth of trees in ensemble to be not greater than 3.<jupyter_code>from sklearn.ensemble import RandomForestClassifier
rfc = RandomForestClassifier(n_estimators=3, max_depth=3, random_state=17)
rfc.fit(data, target);<jupyter_output><empty_output><jupyter_text>After fitting, the list of all the trees is stored in the `estimators_` property.<jupyter_code>tree_list = rfc.estimators_<jupyter_output><empty_output><jupyter_text>Visualizing trees<jupyter_code>from sklearn import tree
plt.figure(figsize=(16,12))
tree.plot_tree(tree_list[0], filled=True, feature_names=iris['feature_names'],
class_names=['Y', 'N'], node_ids=True);
plt.figure(figsize=(16,12))
tree.plot_tree(tree_list[1], filled=True, feature_names=iris['feature_names'],
class_names=['Y', 'N'], node_ids=True);
plt.figure(figsize=(6,4))
tree.plot_tree(tree_list[2], filled=True, feature_names=iris['feature_names'],
class_names=['Y', 'N'], node_ids=True);<jupyter_output><empty_output><jupyter_text>Let's start from the first tree and `Sepal length (cm)` feature. This feature is located in two nodes: the root (#0) and the rightmost node (#4). The reduction in impurity for these nodes are:
$${RI}_{{SL}_1}^{(1)}=\frac{150}{150}\cdot 0.482578 - \frac{63}{150}\cdot 0.061476 - \frac{87}{150}\cdot 0.436517 = 0.203578$$
$${RI}_{{SL}_2}^{(1)}=\frac{56}{150}\cdot 0.035077 - \frac{7}{150}\cdot 0.244898 - \frac{49}{150}\cdot 0 = 0.001667$$
Note: The impurity for each node was recalculated to gain more accuracy than given in the picture.
By doing the same calculations we get the following reduction in impurity for `Petal width (cm)`, and `Petal width (cm)` features:
$${RI}_{PL}^{(1)}=0.035785$$
$${RI}_{{PW}_1}^{(1)}=0.025820$$
$${RI}_{{PW}_2}^{(1)}=0.193633$$
Summarizing all numbers in table
$$\begin{array}{c|cc}
\text{Feature}, j & \text{Total }RI_j^{(1)} & {FI}_j^{(1)}\\
\hline
SL & 0.205244 & 0.445716\\
SW & 0.000000 & 0.000000\\
PL & 0.035785 & 0.077712\\
PW & 0.219453 & 0.476572\\
\hline
\sum & 0.460483
\end{array}$$
After performing the same calculations for the second and third tree we average the results for features:
$$\begin{array}{c|ccc|c}
\text{Feature}, j & {FI}_j^{(1)}& {FI}_j^{(2)}& {FI}_j^{(3)} & {FI}_j\\
\hline
SL & 0.445716 & 0.000000 & 0.000000 & 0.148572\\
SW & 0.000000 & 0.039738 & 0.000000 & 0.013246\\
PL & 0.077712 & 0.844925 & 0.162016 & 0.361551\\
PW & 0.476572 & 0.115337 & 0.837984 & 0.476631\\
\end{array}$$Let's compare our result with those stored in the `feature_importances_` attribute.<jupyter_code>print(iris['feature_names'])
print(rfc.feature_importances_)<jupyter_output>['sepal length (cm)', 'sepal width (cm)', 'petal length (cm)', 'petal width (cm)']
[0.14857187 0.01324612 0.36155096 0.47663104]
<jupyter_text>Voila!## 4. Practical example
Let's consider the results of a survey given to visitors of hostels listed on Booking.com and TripAdvisor.com. Our features here are the average ratings for different categories including service quality, room condition, value for money, etc. Our target variable is the hostel's overall rating on the website.<jupyter_code>from sklearn.ensemble import RandomForestRegressor
hostel_data = pd.read_csv("../../data/hostel_factors.csv")
features = {"f1":u"Staff",
"f2":u"Hostel booking",
"f3":u"Check-in and check-out",
"f4":u"Room condition",
"f5":u"Shared kitchen condition",
"f6":u"Shared space condition",
"f7":u"Extra services",
"f8":u"General conditions & conveniences",
"f9":u"Value for money",
"f10":u"Customer Co-creation"}
forest = RandomForestRegressor(n_estimators=1000, max_features=10,
random_state=0)
forest.fit(hostel_data.drop(['hostel', 'rating'], axis=1),
hostel_data['rating'])
importances = forest.feature_importances_
indices = np.argsort(importances)[::-1]
# Plot the feature importancies of the forest
num_to_plot = 10
feature_indices = [ind+1 for ind in indices[:num_to_plot]]
# Print the feature ranking
print("Feature ranking:")
for f in range(num_to_plot):
print("%d. %s %f " % (f + 1,
features["f"+str(feature_indices[f])],
importances[indices[f]]))
plt.figure(figsize=(15,5))
plt.title(u"Feature Importance")
bars = plt.bar(range(num_to_plot),
importances[indices[:num_to_plot]],
color=([str(i/float(num_to_plot+1))
for i in range(num_to_plot)]),
align="center")
ticks = plt.xticks(range(num_to_plot),
feature_indices)
plt.xlim([-1, num_to_plot])
plt.legend(bars, [u''.join(features["f"+str(i)])
for i in feature_indices]);<jupyter_output>Feature ranking:
1. Staff 0.182757
2. Value for money 0.148373
3. Shared space condition 0.128296
4. Extra services 0.116604
5. Customer Co-creation 0.106668
6. General conditions & conveniences 0.088589
7. Shared kitchen condition 0.074273
8. Check-in and check-out 0.061521
9. Hostel booking 0.053615
10. Room condition 0.039305
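<jupyter_text>Permutation importance, introduced in Sections 1 and 2, does not have to be computed by hand: scikit-learn (0.22+) ships it in `sklearn.inspection`. A minimal sketch applied to the hostel model above, reusing the `forest`, `hostel_data`, and `features` objects already defined:

```python
from sklearn.inspection import permutation_importance

X = hostel_data.drop(['hostel', 'rating'], axis=1)
y = hostel_data['rating']

# Shuffle each feature several times and measure the drop in the model's score
perm = permutation_importance(forest, X, y, n_repeats=10, random_state=0)

for idx in perm.importances_mean.argsort()[::-1]:
    name = features.get(X.columns[idx], X.columns[idx])
    print(f"{name}: {perm.importances_mean[idx]:.3f} +/- {perm.importances_std[idx]:.3f}")
```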
|
no_license
|
/jupyter_notebooks/Lecture_05_Ensembles/topic-5-ensembles-part-3-feature-importance.ipynb
|
RisnyK/mlcourse_dubai
| 7 |
<jupyter_start><jupyter_text># Churn Prediction For Bank CustomerWe have a dataset in which there are details of a bank's customers and the target variable is a binary variable reflecting the fact whether the customer left the bank (closed his account) or he continues to be a customer.## Dataset
- **RowNumber:** corresponds to the record (row) number and has no effect on the output.
- **CustomerId:** contains random values and has no effect on customer leaving the bank.
- **Surname:** the surname of a customer has no impact on their decision to leave the bank.
- **CreditScore:** can have an effect on customer churn, since a customer with a higher credit score is less likely to leave the bank.
- **Geography:** a customer’s location can affect their decision to leave the bank.
- **Gender:** it’s interesting to explore whether gender plays a role in a customer leaving the bank.
- **Age:** this is certainly relevant, since older customers are less likely to leave their bank than younger ones.
- **Tenure:** refers to the number of years that the customer has been a client of the bank. Normally, older clients are more loyal and less likely to leave a bank.
- **Balance:** also a very good indicator of customer churn, as people with a higher balance in their accounts are less likely to leave the bank compared to those with lower balances.
- **NumOfProducts:** refers to the number of products that a customer has purchased through the bank.
- **HasCrCard:** denotes whether or not a customer has a credit card. This column is also relevant, since people with a credit card are less likely to leave the bank.
- **IsActiveMember:** active customers are less likely to leave the bank.
- **EstimatedSalary:** as with balance, people with lower salaries are more likely to leave the bank compared to those with higher salaries.
- **Exited:** whether or not the customer left the bank. (0=No, 1=Yes)## Import Libraries<jupyter_code>import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
%matplotlib inline
# %matplotlib notebook
plt.rcParams["figure.figsize"] = (9,6)
# plt.rcParams['figure.dpi'] = 100
sns.set_style("whitegrid")
import warnings
warnings.filterwarnings("ignore")
warnings.warn("this will not show")
pd.set_option('display.float_format', lambda x: '%.3f' % x)<jupyter_output><empty_output><jupyter_text>## Ingest Data<jupyter_code>df = pd.read_csv("Churn_Modelling.csv")
df.shape<jupyter_output><empty_output><jupyter_text>The dataset has **10,000** rows and **14** attributes<jupyter_code>df.head()
df.info()<jupyter_output><class 'pandas.core.frame.DataFrame'>
RangeIndex: 10000 entries, 0 to 9999
Data columns (total 14 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 RowNumber 10000 non-null int64
1 CustomerId 10000 non-null int64
2 Surname 10000 non-null object
3 CreditScore 10000 non-null int64
4 Geography 10000 non-null object
5 Gender 10000 non-null object
6 Age 10000 non-null int64
7 Tenure 10000 non-null int64
8 Balance 10000 non-null float64
9 NumOfProducts 10000 non-null int64
10 HasCrCard 10000 non-null int64
11 IsActiveMember 10000 non-null int64
12 EstimatedSalary 10000 non-null float64
13 Exited 10000 non-null int64
dtypes: float64(2), int64(9), object(3)
memory usage: 1.1+ MB
<jupyter_text>## Exploratory Data Analysis and Visualization 1. Implement basic steps to see what your data looks like
2. Check for missing values
3. Drop the features that are not suitable for modelling
4. Implement basic visualization steps such as histogram, countplot, heatmap
5. Convert categorical variables to dummy variables👇👇👇We dropped the following columns because they have no effect on a customer leaving the bank.👇👇👇<jupyter_code>df = df.drop(["RowNumber","CustomerId","Surname"], axis=1)
print(df.isnull().sum().any())
print(df.duplicated().sum())<jupyter_output>False
0
<jupyter_text>👆👆👆We have no missing values and no duplicated rows 👆👆👆<jupyter_code>df.describe().T<jupyter_output><empty_output><jupyter_text>## Exited Column<jupyter_code>df["Exited"].value_counts()<jupyter_output><empty_output><jupyter_text>- Our dataset is imbalanced.
- There are more non-exited customers than exited customers.
- Our goal should be to identify exited customers accurately<jupyter_code>sns.countplot(df["Exited"]);
col = ["Geography","Gender","Tenure","NumOfProducts","HasCrCard","IsActiveMember"]
for i in col:
print(i.upper())
print(df[i].value_counts())
print("---"*10)
fig, ax = plt.subplots(3, 2, figsize=(16, 12))
sns.countplot(x='Geography', hue = 'Exited',data = df, ax=ax[0][0])
sns.countplot(x='Gender', hue = 'Exited',data = df, ax=ax[0][1])
sns.countplot(x='HasCrCard', hue = 'Exited',data = df, ax=ax[1][0])
sns.countplot(x='IsActiveMember', hue = 'Exited',data = df, ax=ax[1][1]);
sns.countplot(x='NumOfProducts', hue = 'Exited',data = df, ax=ax[2][0])
sns.countplot(x='Tenure', hue = 'Exited',data = df, ax=ax[2][1]);<jupyter_output><empty_output><jupyter_text> 👆👆👆
- Most of the customers are from France. However, the number of customers churned is less in France and more in Germany compared to the population.
- There are more male customers than female customers, but more female customers churned
- The majority of the customers that churned are those with credit cards
- The majority of the customers that churned are inactive members<jupyter_code>fig, ax = plt.subplots(3, 2, figsize=(16, 10))
sns.boxplot(y='CreditScore',x = 'Exited', hue = 'Exited',data = df, ax=ax[0][0])
sns.boxplot(y='Age',x = 'Exited', hue = 'Exited',data = df , ax=ax[0][1])
sns.boxplot(y='Tenure',x = 'Exited', hue = 'Exited',data = df, ax=ax[1][0])
sns.boxplot(y='Balance',x = 'Exited', hue = 'Exited',data = df, ax=ax[1][1])
sns.boxplot(y='NumOfProducts',x = 'Exited', hue = 'Exited',data = df, ax=ax[2][0])
sns.boxplot(y='EstimatedSalary',x = 'Exited', hue = 'Exited',data = df, ax=ax[2][1]);<jupyter_output><empty_output><jupyter_text> 👆👆👆
- There is no significant difference in the credit score distribution between retained and churned customers.
- Older customers churn at a higher rate than younger ones.
- Customers with higher balances are more likely to leave the bank.
- Number of products and salary amount do not have a significant effect on leaving the bank<jupyter_code>fig, ax = plt.subplots(3, 2, figsize=(16, 12))
sns.countplot(x='Gender', hue = 'Gender',data = df, ax=ax[0][0])
sns.countplot(x='Geography', hue = 'Gender',data = df, ax=ax[0][1])
sns.countplot(x='HasCrCard', hue = 'Gender',data = df, ax=ax[1][0])
sns.countplot(x='IsActiveMember', hue = 'Gender',data = df, ax=ax[1][1]);
sns.countplot(x='NumOfProducts', hue = 'Gender',data = df, ax=ax[2][0])
sns.countplot(x='Tenure', hue = 'Gender',data = df, ax=ax[2][1]);<jupyter_output><empty_output><jupyter_text>👆👆👆In all categories shown in the chart, the number of male customers is more than the number of female customers.👆👆👆## Age Column<jupyter_code>plt.figure(figsize = (16, 6))
sns.countplot(x="Age", hue="Exited", data=df);<jupyter_output><empty_output><jupyter_text>👆👆👆The age distribution of active customers and exited customers is almost similar.👆👆👆<jupyter_code>plt.figure(figsize = (12, 8))
sns.histplot(x="CreditScore", data=df,kde = True, hue = "Exited");
plt.figure(figsize = (12, 6))
sns.histplot(x="Balance", data=df, hue = "Exited", kde = True);
plt.figure(figsize=(10, 10))
sns.heatmap(df.corr(), annot=True);
plt.figure(figsize = (10,6))
df.corr()['Exited'].sort_values().drop("Exited").plot(kind = "barh");<jupyter_output><empty_output><jupyter_text>## Preprocessing of Data
- Train | Test Split, Scaling<jupyter_code>df.info()
df = pd.get_dummies(df, drop_first=True)
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import MinMaxScaler
X = df.drop('Exited', axis=1)
y = df['Exited']
X_train, X_test, y_train, y_test = train_test_split(X, y, stratify = y, test_size = 0.20, random_state = 42)
scaler = MinMaxScaler()
X_train= scaler.fit_transform(X_train)
X_test = scaler.transform(X_test)
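# Note: the scaler is fit on the training split only, so no information from the test set
# leaks into the preprocessing step.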
X_train.shape
X_test.shape<jupyter_output><empty_output><jupyter_text>## Modelling & Model Performance### Import related libraries<jupyter_code>from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Activation
from tensorflow.keras.metrics import Accuracy, Recall,Precision
from sklearn.metrics import classification_report, confusion_matrix
from sklearn.metrics import plot_roc_curve, roc_auc_score, roc_curve
from sklearn.model_selection import cross_val_score, cross_validate
from tensorflow.keras.optimizers import Adam, SGD,RMSprop
from sklearn.model_selection import GridSearchCV
from tensorflow.keras.callbacks import EarlyStopping<jupyter_output><empty_output><jupyter_text>### Creating Model### without class_weight<jupyter_code>model = Sequential()
model.add(Dense(44, activation = "relu"))
model.add(Dense(22, activation = "relu"))
model.add(Dense(11, activation = "relu"))
model.add(Dense(1, activation = "sigmoid"))
opt = Adam(lr = 0.005)
model.compile(optimizer = opt, loss = "binary_crossentropy", metrics = ["Recall"])
early_stop = EarlyStopping(monitor = "recall", mode = "auto", verbose = 1, patience = 15, restore_best_weights=True)
model.fit(x = X_train, y = y_train, validation_split = 0.1, batch_size = 32, epochs = 1000, verbose = 1,
callbacks = [early_stop])<jupyter_output>Epoch 1/1000
225/225 [==============================] - 1s 2ms/step - loss: 0.4854 - recall: 0.0650 - val_loss: 0.4291 - val_recall: 0.2500
Epoch 2/1000
225/225 [==============================] - 0s 2ms/step - loss: 0.4201 - recall: 0.2625 - val_loss: 0.3857 - val_recall: 0.2039
Epoch 3/1000
225/225 [==============================] - 0s 1ms/step - loss: 0.3897 - recall: 0.3329 - val_loss: 0.3513 - val_recall: 0.3355
Epoch 4/1000
225/225 [==============================] - 0s 1ms/step - loss: 0.3691 - recall: 0.3755 - val_loss: 0.3493 - val_recall: 0.3092
Epoch 5/1000
225/225 [==============================] - 0s 1ms/step - loss: 0.3644 - recall: 0.3829 - val_loss: 0.3422 - val_recall: 0.4145
Epoch 6/1000
225/225 [==============================] - 0s 2ms/step - loss: 0.3610 - recall: 0.3972 - val_loss: 0.3354 - val_recall: 0.3882
Epoch 7/1000
225/225 [==============================] - 0s 2ms/step - loss: 0.3571 - recall: 0.3890 - val_loss: 0.3311 - val_recall: 0.3684
Epoch 8/1000
225/225[...]<jupyter_text>#### Evaluate<jupyter_code>loss_df = pd.DataFrame(model.history.history)
loss_df.plot();
y_pred = (model.predict(X_test) > 0.5).astype("int32")
#y_pred = model.predict_classes(X_test)
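# predict_classes() has been removed in recent tf.keras releases; thresholding the sigmoid
# output at 0.5 (as above) is the equivalent for this binary model.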
print(confusion_matrix(y_test, y_pred))
print(classification_report(y_test, y_pred))<jupyter_output>[[1519 74]
[ 314 93]]
precision recall f1-score support
0 0.83 0.95 0.89 1593
1 0.56 0.23 0.32 407
accuracy 0.81 2000
macro avg 0.69 0.59 0.61 2000
weighted avg 0.77 0.81 0.77 2000
<jupyter_text>👆👆👆Our results are good, but we need to increase the recall for class 1 (exited customers).👆👆👆
👇👇👇For this we will use the class weight parameter in the ANN👇👇👇### with class_weight
Investigate how the "class_weight" hyper-parameter is used in a Neural Network.<jupyter_code>from sklearn.utils import class_weight
class_weights = dict(zip(np.unique(y_train), class_weight.compute_class_weight('balanced', np.unique(y_train),
y_train)))
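# Sanity check (assumes the roughly 80/20 class split seen in the value_counts above): with
# the 'balanced' heuristic each weight is n_samples / (n_classes * n_class_samples), so the
# minority class (Exited = 1) gets a weight of about 2.5 and the majority class about 0.6.
print(class_weights)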
model = Sequential()
model.add(Dense(44, activation = "relu"))
model.add(Dense(22, activation = "relu"))
model.add(Dense(11, activation = "relu"))
model.add(Dense(1, activation = "sigmoid"))
opt = Adam(lr = 0.005)
model.compile(optimizer = opt, loss = "binary_crossentropy", metrics = ["Recall"])
early_stop = EarlyStopping(monitor = "recall", mode = "auto", verbose = 1, patience = 15)
model.fit(x = X_train, y = y_train, validation_split = 0.1, batch_size = 32, epochs = 1000, verbose = 1, class_weight = class_weights, callbacks = [early_stop])<jupyter_output>Epoch 1/1000
225/225 [==============================] - 1s 2ms/step - loss: 0.6193 - recall: 0.6590 - val_loss: 0.5079 - val_recall: 0.5724
Epoch 2/1000
225/225 [==============================] - 0s 2ms/step - loss: 0.5450 - recall: 0.7300 - val_loss: 0.4622 - val_recall: 0.7632
Epoch 3/1000
225/225 [==============================] - 0s 2ms/step - loss: 0.5046 - recall: 0.7300 - val_loss: 0.4746 - val_recall: 0.7566
Epoch 4/1000
225/225 [==============================] - 0s 2ms/step - loss: 0.4980 - recall: 0.7429 - val_loss: 0.4798 - val_recall: 0.7368
Epoch 5/1000
225/225 [==============================] - 0s 2ms/step - loss: 0.4887 - recall: 0.7375 - val_loss: 0.4234 - val_recall: 0.7368
Epoch 6/1000
225/225 [==============================] - 0s 2ms/step - loss: 0.4806 - recall: 0.7463 - val_loss: 0.5407 - val_recall: 0.8487
Epoch 7/1000
225/225 [==============================] - 0s 2ms/step - loss: 0.4785 - recall: 0.7368 - val_loss: 0.4143 - val_recall: 0.7237
Epoch 8/1000
225/225[...]<jupyter_text>#### Evaluate<jupyter_code>loss_df = pd.DataFrame(model.history.history)
loss_df.plot();
y_pred = (model.predict(X_test) > 0.5).astype("int32")
#y_pred = model.predict_classes(X_test)
print(confusion_matrix(y_test, y_pred))
print(classification_report(y_test, y_pred))<jupyter_output>[[1278 315]
[ 109 298]]
precision recall f1-score support
0 0.92 0.80 0.86 1593
1 0.49 0.73 0.58 407
accuracy 0.79 2000
macro avg 0.70 0.77 0.72 2000
weighted avg 0.83 0.79 0.80 2000
<jupyter_text>👆👆👆The accuracy score dropped after applying class weights, but the recall score increased considerably.👆👆👆## GridSearchCV<jupyter_code>from tensorflow.keras.wrappers.scikit_learn import KerasClassifier
def build_classifier(optimizer):
classifier = Sequential()
classifier.add(Dense(units = 44, activation = 'relu'))
classifier.add(Dense(units = 22, activation = 'relu'))
classifier.add(Dense(units = 11, activation = 'relu'))
classifier.add(Dense(units = 1, activation = 'sigmoid'))
classifier.compile(optimizer = optimizer, loss = 'binary_crossentropy', metrics = ['Recall'])
return classifier
early_stop = EarlyStopping(monitor = "recall", mode = "auto", verbose = 1, patience = 15)
classifier = KerasClassifier(build_fn = build_classifier, epochs = 200)
parameters = {'batch_size': [32, 64],
'optimizer': ['adam', 'rmsprop', "SGD", "adagrad", "adadelta"]}
grid_model = GridSearchCV(estimator = classifier,
param_grid = parameters,
scoring = 'recall',
cv = 10,
n_jobs = -1,
verbose = 1,)
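# Cost note: 2 batch sizes x 5 optimizers x 10 CV folds = 100 fits of up to 200 epochs each
# (plus a final refit on the full training set), so this search can take a long time.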
grid_model.fit(X_train, y_train, callbacks = [early_stop], class_weight = class_weights)
grid_model.best_score_
grid_model.best_params_
y_pred = (grid_model.predict(X_test) > 0.5).astype("int32")
#y_pred = model.predict_classes(X_test)
print(confusion_matrix(y_test, y_pred))
print(classification_report(y_test, y_pred))<jupyter_output>[[1381 212]
[ 147 260]]
precision recall f1-score support
0 0.90 0.87 0.88 1593
1 0.55 0.64 0.59 407
accuracy 0.82 2000
macro avg 0.73 0.75 0.74 2000
weighted avg 0.83 0.82 0.83 2000
<jupyter_text>#### Evaluate#### for keras models<jupyter_code>y_pred_proba = model.predict(X_test)
fpr, tpr, thresholds = roc_curve(y_test, y_pred_proba)
plt.plot([0,1],[0,1],'k--')
plt.plot(fpr,tpr, label='ANN')
plt.xlabel('fpr')
plt.ylabel('tpr')
plt.title('ROC curve')
plt.show()<jupyter_output><empty_output><jupyter_text>#### for gridsearchcv model<jupyter_code>y_pred_proba = grid_model.predict_proba(X_test)[:,1]
fpr, tpr, thresholds = roc_curve(y_test, y_pred_proba)
plt.plot([0,1],[0,1],'k--')
plt.plot(fpr,tpr, label='ANN')
plt.xlabel('fpr')
plt.ylabel('tpr')
plt.title('ROC curve')
plt.show()
roc_auc_score(y_test, y_pred_proba)<jupyter_output><empty_output><jupyter_text>## Final Model and Model DeploymentWe use the best parameters that we found with Grid Search in the final model.<jupyter_code>import pickle
pickle.dump(scaler, open("scaler_bank", 'wb'))
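# The fitted scaler is persisted alongside the model so that new customer records can be
# preprocessed identically at inference time (used again in the Prediction section below).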
class_weights = dict(zip(np.unique(y_train), class_weight.compute_class_weight('balanced', np.unique(y_train),
y_train)))
model = Sequential()
model.add(Dense(44, activation = "relu"))
model.add(Dense(22, activation = "relu"))
model.add(Dense(11, activation = "relu"))
model.add(Dense(1, activation = "sigmoid"))
opt = RMSprop(lr = 0.005)
model.compile(optimizer = opt, loss = "binary_crossentropy", metrics = ["Recall"])
early_stop = EarlyStopping(monitor = "recall", mode = "auto", verbose = 1, patience = 15)
model.fit(x = X_train, y = y_train, validation_data = (X_test, y_test), batch_size = 32, epochs = 1000, verbose = 1, class_weight = class_weights, callbacks = [early_stop])
loss_df = pd.DataFrame(model.history.history)
loss_df.plot()
y_pred = (model.predict(X_test) > 0.5).astype("int32")
#y_pred = model.predict_classes(X_test)
print(confusion_matrix(y_test, y_pred))
print(classification_report(y_test, y_pred))<jupyter_output>[[1212 381]
[ 91 316]]
precision recall f1-score support
0 0.93 0.76 0.84 1593
1 0.45 0.78 0.57 407
accuracy 0.76 2000
macro avg 0.69 0.77 0.70 2000
weighted avg 0.83 0.76 0.78 2000
<jupyter_text>👆👆👆At this stage, Adam and RMSprop were tested and the learning rate was varied; the configuration with the highest score was kept.👆👆👆<jupyter_code>model.save('model_bank.h5')<jupyter_output><empty_output><jupyter_text>### Prediction<jupyter_code>from tensorflow.keras.models import load_model
model_bank = load_model('model_bank.h5')
scaler_bank = pickle.load(open("scaler_bank", "rb"))
customer = df.drop('Exited', axis = 1).iloc[5:10, :]
customer
customer = scaler_bank.transform(customer)
customer
(model_bank.predict(customer) > 0.5).astype("int32")
df["Exited"].iloc[5:10]<jupyter_output><empty_output><jupyter_text>Our model made an accurate prediction of 80% on the given sample values👏👏👏## Comparison with ML### Logistic Regression<jupyter_code>from sklearn.linear_model import LogisticRegression
log_model=LogisticRegression(class_weight="balanced")
log_model.fit(X_train, y_train)
y_pred = log_model.predict(X_test)
print(confusion_matrix(y_test, y_pred))
print(classification_report(y_test, y_pred))<jupyter_output>[[1137 456]
[ 122 285]]
precision recall f1-score support
0 0.90 0.71 0.80 1593
1 0.38 0.70 0.50 407
accuracy 0.71 2000
macro avg 0.64 0.71 0.65 2000
weighted avg 0.80 0.71 0.74 2000
<jupyter_text>### Random Forest<jupyter_code>from sklearn.ensemble import RandomForestClassifier
rf_model = RandomForestClassifier(class_weight="balanced")
rf_model.fit(X_train, y_train)
y_pred = rf_model.predict(X_test)
print(confusion_matrix(y_test, y_pred))
print(classification_report(y_test, y_pred))<jupyter_output>[[1549 44]
[ 226 181]]
precision recall f1-score support
0 0.87 0.97 0.92 1593
1 0.80 0.44 0.57 407
accuracy 0.86 2000
macro avg 0.84 0.71 0.75 2000
weighted avg 0.86 0.86 0.85 2000
| no_license | /Classification_Churn_Modelling.ipynb | bunyamin-polat/Churn-Prediction-with-DL-For-Bank-Customer | 25 |
<jupyter_start><jupyter_text>## Note on spells
If a child runs away for longer than a week, their spell is considered ended, and a new spell starts if they return.<jupyter_code># age of child at start of spell
ages = spells.STARTAGE.value_counts()
_ = plt.bar(ages.index, ages)
# primary placement type of spell
_ = sns.countplot(spells.TYPE)<jupyter_output><empty_output><jupyter_text>- AL - Alternative Placement (equiv to PAL)
- FC - Foster Care (equiv to PFC)
- GH Group Home (equiv to PGH)
- IL Independent Living (equiv to PIL)
- KC Relative/Kinship Care (including fictive kin) – licensed & unlicensed (equiv to PKC)
- RC Residential Care (excluding RTF) (equiv to PRC)
- RT Residential Treatment Facilities (equiv to PRT)
- SF Shelter Foster Care (equiv to PSF)
- SG Shelter Group Home (equiv to PSG)
- SK Shelter Kinship Care (equiv to PSK)
- UK Unknown (equiv to PUK)
- MX - Mixed type (When >50% is not equal to particular type)<jupyter_code># number of placements per spell
nplacements = spells.NPLACES.value_counts()
_ = plt.bar(nplacements.index, nplacements)
# CDF of spell durations in days
spell_len = lv.get_cdf(spells.DURAT)
_ = plt.semilogx(spell_len.index, spell_len)
# spell durations vs number of placements
_ = sns.jointplot(spells.DURAT, spells.NPLACES, marker='.', alpha = 0.3)
# spell exit types
_ = sns.countplot(spells.EXIT)<jupyter_output><empty_output><jupyter_text>## EXIT codes in spell_clean.csv
### Good:
- XRF: reunified with family
- XLC: legal custodianship
- XRL: to relative (obsolete code)
- XCA: adopted
### Bad:
- XOT: other
- XOP: other (obsolete code)
- XRM: age out
- XRY: runaway
- XJP: juvenile probation
### Unknown:
- ZTC: spell ongoing
- XUK: unknown<jupyter_code>#map EXITS to outcome: GOOD, BAD, or UNKNOWN
good_val = 'GOOD'
bad_val = 'BAD'
unknown_val = 'UNKNOWN'
exit_mapping = {
'XRF':good_val, 'XLC':good_val, 'XRL':good_val, 'XCA':good_val,
'XOT':bad_val, 'XOP':bad_val, 'XRM':bad_val, 'XRY':bad_val, 'XJP':bad_val,
'ZTC':unknown_val, 'XUK':unknown_val
}
outcomes = [exit_mapping.get(x, unknown_val) for x in spells.EXIT]  # unmapped codes default to UNKNOWN so the list stays aligned with the dataframe
sns.countplot(outcomes)
spells['outcome'] = outcomes<jupyter_output><empty_output><jupyter_text># distributions by outcome<jupyter_code>def pmf_by_outcome(col, data=spells, figsize=(12,5), color = sns.color_palette('muted')):
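    """Plot side-by-side PMF bars of `col` for the UNKNOWN, GOOD and BAD outcome groups,
    with each group's share of all spells shown in the legend."""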
fig, ax = plt.subplots(1,1, figsize=figsize)
n_total = len(data[col])
# color = ['#77298A', '#94B02A', '#B95B13']
for i,x in enumerate(['UNKNOWN', 'GOOD', 'BAD']):
series = data[data.outcome==x][col]
pct_string = ' ({0:.1f}%)'.format(100*len(series)/n_total)
pmf = lv.get_pmf(series)
is_categorical = pmf.index.dtype=='O'
xs = np.arange(len(pmf.index))+i/3 if is_categorical else pmf.index+i/3
ax.bar(xs, pmf, 1/3, facecolor=color[i], label=x+pct_string)
if is_categorical:
ax.set_xticks(range(len(pmf)))
else:
ax.set_xticks(pmf.index)
ax.set_xticklabels(pmf.index, ha='left')
ax.legend()
def cdf_by_outcome(col, data=spells, figsize=(12,5), scale='lin', color = sns.color_palette('muted')
):
fig, ax = plt.subplots(1,1, figsize=figsize)
n_total = len(data[col])
for i,x in enumerate([ 'UNKNOWN', 'GOOD', 'BAD']):
series = data[data.outcome==x][col]
pct_string = ' ({0:.1f}%)'.format(100*len(series)/n_total)
cdf = lv.get_cdf(series)
plot_fn = ax.plot if scale=='lin' else ax.semilogx
        plot_fn(cdf.index, cdf, color=color[i], label=x+pct_string)
ax.legend()
# PMF of number of placements in spell, separated by outcome
pmf_by_outcome('NPLACES')
cdf_by_outcome('DURAT', scale='log')
# PMF of age at first service, separated by outcome
pmf_by_outcome('STARTAGE')
# PMF of age at beginning of spell by outcome
pmf_by_outcome('SPELLAGE')
# PMF of age at end of spell by outcome
pmf_by_outcome('EXITAGE')
# PMFs of spell sequence number by outcome
pmf_by_outcome('SPELL')
# CDFs of time between current and next spell by outcome
cdf_by_outcome('TIMER')
# PMFs of gender by outcome
pmf_by_outcome('GENDER')
# PMFs of ethnicity by outcome
pmf_by_outcome('HISPANIC')
# PMFs of ethnicity by outcome
pmf_by_outcome('ETHNIC')<jupyter_output><empty_output><jupyter_text>- AN - Native American
- AS - Asian and Pacific
- BL - Black/African American
- MU - Multiple
- OT - Other Category
- UK - Unknown
- WH - White<jupyter_code># PMFs of primary placement type by outcome
pmf_by_outcome('TYPE')<jupyter_output><empty_output>
| no_license | /python/spell-vis.ipynb | DataGraphito/FosterCareMobility | 5 |
<jupyter_start><jupyter_text># A Primer on Bayesian Methods for Multilevel ModelingHierarchical or multilevel modeling is a generalization of regression modeling.
*Multilevel models* are regression models in which the constituent model parameters are given **probability models**. This implies that model parameters are allowed to **vary by group**.
Observational units are often naturally **clustered**. Clustering induces dependence between observations, despite random sampling of clusters and random sampling within clusters.
A *hierarchical model* is a particular multilevel model where parameters are nested within one another.
Some multilevel structures are not hierarchical.
* e.g. "country" and "year" are not nested, but may represent separate, but overlapping, clusters of parameters
We will motivate this topic using an environmental epidemiology example.### Example: Radon contamination (Gelman and Hill 2006)
Let's revisit the radon contamination example from the previous section. For hierarchical modeling, we will use more of the data; we will focus on modeling radon levels in Minnesota. The EPA did the radon study in 80,000 houses. There were two important predictors:
* measurement in basement or first floor (radon higher in basements)
* county uranium level (positive correlation with radon levels)
The hierarchy in this example is households within county. ### Data organizationFirst, we import the data from a local file, and extract Minnesota's data.<jupyter_code>%matplotlib inline
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns; sns.set_context('notebook')
import warnings
warnings.simplefilter("ignore")
# Import radon data
radon_data = pd.read_csv('../data/radon.csv', index_col=0)
RANDOM_SEED = 20090425
counties = radon_data.county.unique()
n_counties = counties.shape[0]
county = radon_data.county_code.values
log_radon = radon_data.log_radon.values
floor_measure = radon_data.floor.values
log_uranium = np.log(radon_data.Uppm.values)
county_lookup = dict(zip(counties, np.arange(n_counties)))<jupyter_output><empty_output><jupyter_text>Distribution of radon levels in MN (log scale):<jupyter_code>radon_data.activity.apply(lambda x: np.log(x+0.1)).hist(bins=25, grid=False);<jupyter_output><empty_output><jupyter_text>## Conventional approaches
The two conventional alternatives to modeling radon exposure represent the two extremes of the bias-variance tradeoff:
***Complete pooling***:
Treat all counties the same, and estimate a single radon level.
$$y_i = \alpha + \beta x_i + \epsilon_i$$
***No pooling***:
Model radon in each county independently.
$$y_i = \alpha_{j[i]} + \beta x_i + \epsilon_i$$
where $j = 1,\ldots,85$
The errors $\epsilon_i$ may represent measurement error, temporal within-house variation, or variation among houses.Here are the point estimates of the slope and intercept for the complete pooling model:<jupyter_code>from pymc3 import Model, sample, Normal, HalfCauchy, Uniform
floor = radon_data.floor.values
log_radon = radon_data.log_radon.values
with Model() as pooled_model:
β = Normal('β', 0, sigma=10, shape=2)
σ = HalfCauchy('σ', 5)
θ = β[0] + β[1]*floor
y = Normal('y', θ, sigma=σ, observed=log_radon)
<jupyter_output><empty_output><jupyter_text>Before sampling, it is useful to conduct **prior predictive checks** to ensure that our priors are appropriate, and do not unduly constrain inference.<jupyter_code>from pymc3 import sample_prior_predictive
with pooled_model:
prior_checks = sample_prior_predictive(samples=1000)
plt.plot(
[0, 1],
[prior_checks["β"][:, 0], prior_checks["β"][:, 1]],
"ok",
alpha=0.2)
plt.xlabel("Floor measurement location")
plt.xticks([0,1], ["Basement", "Floor"])
plt.ylabel("Mean log radon level");<jupyter_output><empty_output><jupyter_text>These look fine--they allow for any radon values that we would reasonably expect to see in the data.<jupyter_code>with pooled_model:
pooled_trace = sample(1000, tune=1000, cores=2)
b0, m0 = pooled_trace['β'].mean(axis=0)
plt.scatter(radon_data.floor, np.log(radon_data.activity+0.1))
xvals = np.linspace(-0.2, 1.2)
plt.plot(xvals, m0*xvals+b0, 'r--');<jupyter_output><empty_output><jupyter_text>Estimates of county radon levels for the unpooled model:<jupyter_code>with Model() as unpooled_model:
β0 = Normal('β0', 0, sigma=10, shape=n_counties)
β1 = Normal('β1', 0, sigma=10)
σ = HalfCauchy('σ', 5)
θ = β0[county] + β1*floor
y = Normal('y', θ, sigma=σ, observed=log_radon)
with unpooled_model:
unpooled_trace = sample(1000, tune=1000, cores=2, random_seed=RANDOM_SEED)
from arviz import plot_forest
(ax,) = plot_forest(unpooled_trace, var_names=['β0'])
ax.set_yticklabels([""]);
unpooled_estimates = pd.Series(unpooled_trace['β0'].mean(axis=0), index=counties)
unpooled_se = pd.Series(unpooled_trace['β0'].std(axis=0), index=counties)<jupyter_output><empty_output><jupyter_text>We can plot the ordered estimates to identify counties with high radon levels:<jupyter_code>order = unpooled_estimates.sort_values().index
plt.scatter(range(len(unpooled_estimates)), unpooled_estimates[order])
for i, m, se in zip(range(len(unpooled_estimates)), unpooled_estimates[order], unpooled_se[order]):
plt.plot([i,i], [m-se, m+se], 'b-')
plt.xlim(-1,86); plt.ylim(-1,4)
plt.ylabel('Radon estimate');plt.xlabel('Ordered county');<jupyter_output><empty_output><jupyter_text>Here are visual comparisons between the pooled and unpooled estimates for a subset of counties representing a range of sample sizes.<jupyter_code>sample_counties = ('LAC QUI PARLE', 'AITKIN', 'KOOCHICHING',
'DOUGLAS', 'CLAY', 'STEARNS', 'RAMSEY', 'ST LOUIS')
fig, axes = plt.subplots(2, 4, figsize=(12, 6), sharey=True, sharex=True)
axes = axes.ravel()
m = unpooled_trace['β1'].mean()
for i,c in enumerate(sample_counties):
y = radon_data.log_radon[radon_data.county==c]
x = radon_data.floor[radon_data.county==c]
axes[i].scatter(x + np.random.randn(len(x))*0.01, y, alpha=0.4)
# No pooling model
b = unpooled_estimates[c]
# Plot both models and data
xvals = np.linspace(-0.2, 1.2)
axes[i].plot(xvals, m*xvals+b)
axes[i].plot(xvals, m0*xvals+b0, 'r--')
axes[i].set_xticks([0,1])
axes[i].set_xticklabels(['basement', 'floor'])
axes[i].set_ylim(-1, 3)
axes[i].set_title(c)
if not i%2:
axes[i].set_ylabel('log radon level')<jupyter_output><empty_output><jupyter_text>Neither of these models are satisfactory:
* if we are trying to identify high-radon counties, pooling is useless
* we do not trust extreme unpooled estimates produced by models using few observations## Multilevel and hierarchical models
When we pool our data, we imply that they are sampled from the same model. This ignores any variation among sampling units (other than sampling variance):

When we analyze data unpooled, we imply that they are sampled independently from separate models. At the opposite extreme from the pooled case, this approach claims that differences between sampling units are to large to combine them:

In a hierarchical model, parameters are viewed as a sample from a population distribution of parameters. Thus, we view them as being neither entirely different nor exactly the same. This is ***partial pooling***.
We can use PyMC to easily specify multilevel models, and fit them using Markov chain Monte Carlo.## Partial pooling model
The simplest partial pooling model for the household radon dataset is one which simply estimates radon levels, without any predictors at any level. A partial pooling model represents a compromise between the pooled and unpooled extremes, approximately a weighted average (based on sample size) of the unpooled county estimates and the pooled estimates.
$$\hat{\alpha} \approx \frac{(n_j/\sigma_y^2)\bar{y}_j + (1/\sigma_{\alpha}^2)\bar{y}}{(n_j/\sigma_y^2) + (1/\sigma_{\alpha}^2)}$$
Estimates for counties with smaller sample sizes will shrink towards the state-wide average.
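For intuition, with invented numbers: a county with $n_j=2$ houses, $\sigma_y=0.8$, $\sigma_{\alpha}=0.3$, a county mean of $\bar{y}_j=2.0$ and a state-wide mean of $\bar{y}=1.3$ has weights $n_j/\sigma_y^2 \approx 3.1$ and $1/\sigma_{\alpha}^2 \approx 11.1$, so $\hat{\alpha} \approx (3.1 \times 2.0 + 11.1 \times 1.3)/(3.1 + 11.1) \approx 1.45$, i.e. the estimate is pulled most of the way towards the state-wide mean.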
Estimates for counties with larger sample sizes will be closer to the unpooled county estimates.<jupyter_code>with Model() as partial_pooling:
# Priors
μ_a = Normal('μ_a', mu=0., sigma=10)
σ_a = HalfCauchy('σ_a', 5)
# Random intercepts
a = Normal('a', mu=μ_a, sigma=σ_a, shape=n_counties)
# Model error
σ_y = HalfCauchy('σ_y',5)
# Expected value
y_hat = a[county]
# Data likelihood
y_like = Normal('y_like', mu=y_hat, sigma=σ_y, observed=log_radon)
with partial_pooling:
partial_pooling_trace = sample(1000, tune=1000, cores=2, random_seed=RANDOM_SEED)
sample_trace = partial_pooling_trace['a']
fig, axes = plt.subplots(1, 2, figsize=(14,6), sharex=True, sharey=True)
n_samples, n_counties = sample_trace.shape
jitter = np.random.normal(scale=0.1, size=n_counties)
n_county = radon_data.groupby('county')['county_code'].count()
unpooled_means = radon_data.groupby('county')['log_radon'].mean()
unpooled_sd = radon_data.groupby('county')['log_radon'].std()
unpooled = pd.DataFrame({'n':n_county, 'm':unpooled_means, 'sd':unpooled_sd})
unpooled['se'] = unpooled.sd/np.sqrt(unpooled.n)
axes[0].plot(unpooled.n + jitter, unpooled.m, 'b.')
for j, row in zip(jitter, unpooled.iterrows()):
name, dat = row
axes[0].plot([dat.n+j,dat.n+j], [dat.m-dat.se, dat.m+dat.se], 'b-')
axes[0].set_xscale('log')
axes[0].hlines(sample_trace.mean(), 0.9, 100, linestyles='--')
n_samples, n_counties = sample_trace.shape
means = sample_trace.mean(axis=0)
sd = sample_trace.std(axis=0)
axes[1].scatter(n_county.values + jitter, means)
axes[1].set_xscale('log')
axes[1].set_xlim(1,100)
axes[1].set_ylim(0, 3)
axes[1].hlines(sample_trace.mean(), 0.9, 100, linestyles='--')
for j,n,m,s in zip(jitter, n_county.values, means, sd):
axes[1].plot([n+j]*2, [m-s, m+s], 'b-');<jupyter_output><empty_output><jupyter_text>Notice the difference between the unpooled and partially-pooled estimates, particularly at smaller sample sizes. The former are both more extreme and more imprecise.## Varying intercept model
This model allows intercepts to vary across county, according to a random effect.
$$y_i = \alpha_{j[i]} + \beta x_{i} + \epsilon_i$$
where
$$\epsilon_i \sim N(0, \sigma_y^2)$$
and the intercept random effect:
$$\alpha_{j[i]} \sim N(\mu_{\alpha}, \sigma_{\alpha}^2)$$
As with the “no-pooling” model, we set a separate intercept for each county, but rather than fitting separate least squares regression models for each county, multilevel modeling **shares strength** among counties, allowing for more reasonable inference in counties with little data.<jupyter_code>with Model() as varying_intercept:
# Priors
μ_a = Normal('μ_a', mu=0., sigma=10)
σ_a = HalfCauchy('σ_a', 5)
# Random intercepts
a = Normal('a', mu=μ_a, sigma=σ_a, shape=n_counties)
# Common slope
b = Normal('b', mu=0., sigma=10)
# Model error
sd_y = HalfCauchy('sd_y', 5)
# Expected value
y_hat = a[county] + b * floor_measure
# Data likelihood
y_like = Normal('y_like', mu=y_hat, sigma=sd_y, observed=log_radon)
with varying_intercept:
varying_intercept_trace = sample(1000, tune=1000, cores=2, random_seed=RANDOM_SEED)
(ax,) = plot_forest(varying_intercept_trace, var_names=['a'])
ax.set_yticklabels([""]);
from arviz import plot_posterior
plot_posterior(varying_intercept_trace, var_names=['σ_a', 'b']);<jupyter_output><empty_output><jupyter_text>The estimate for the `floor` coefficient is approximately -0.66, which can be interpreted as houses without basements having about half ($\exp(-0.66) = 0.52$) the radon levels of those with basements, after accounting for county.<jupyter_code>from pymc3 import summary
summary(varying_intercept_trace, var_names=['b'])
xvals = np.arange(2)
bp = varying_intercept_trace['a'].mean(axis=0)
mp = varying_intercept_trace['b'].mean()
for bi in bp:
plt.plot(xvals, mp*xvals + bi, 'bo-', alpha=0.4)
plt.xlim(-0.1,1.1);<jupyter_output><empty_output><jupyter_text>It is easy to show that the partial pooling model provides more objectively reasonable estimates than either the pooled or unpooled models, at least for counties with small sample sizes.<jupyter_code>fig, axes = plt.subplots(2, 4, figsize=(12, 6), sharey=True, sharex=True)
axes = axes.ravel()
for i,c in enumerate(sample_counties):
# Plot county data
y = radon_data.log_radon[radon_data.county==c]
x = radon_data.floor[radon_data.county==c]
axes[i].scatter(x + np.random.randn(len(x))*0.01, y, alpha=0.4)
# No pooling model
b = unpooled_estimates[c]
m = unpooled_trace['β1'].mean()
xvals = np.linspace(-0.2, 1.2)
# Unpooled estimate
axes[i].plot(xvals, m*xvals+b)
# Pooled estimate
axes[i].plot(xvals, m0*xvals+b0, 'r--')
    # Partial pooling estimate
axes[i].plot(xvals, mp*xvals+bp[county_lookup[c]], 'k:')
axes[i].set_xticks([0,1])
axes[i].set_xticklabels(['basement', 'floor'])
axes[i].set_ylim(-1, 3)
axes[i].set_title(c)
if not i%2:
axes[i].set_ylabel('log radon level');<jupyter_output><empty_output><jupyter_text>## Varying slope model
Alternatively, we can posit a model that allows the counties to vary according to how the location of measurement (basement or floor) influences the radon reading.
$$y_i = \alpha + \beta_{j[i]} x_{i} + \epsilon_i$$
<jupyter_code>with Model() as varying_slope:
# Priors
μ_b = Normal('μ_b', mu=0., sigma=10)
σ_b = HalfCauchy('σ_b', 5)
# Common intercepts
a = Normal('a', mu=0., sigma=10)
# Random slopes
b = Normal('b', mu=μ_b, sigma=σ_b, shape=n_counties)
# Model error
σ_y = HalfCauchy('σ_y',5)
# Expected value
y_hat = a + b[county] * floor_measure
# Data likelihood
y_like = Normal('y_like', mu=y_hat, sigma=σ_y, observed=log_radon)
with varying_slope:
varying_slope_trace = sample(1000, tune=1000, cores=2, random_seed=RANDOM_SEED)
(ax,) = plot_forest(varying_slope_trace, var_names=['b'])
ax.set_yticklabels([""]);
xvals = np.arange(2)
b = varying_slope_trace['a'].mean()
m = varying_slope_trace['b'].mean(axis=0)
for mi in m:
plt.plot(xvals, mi*xvals + b, 'bo-', alpha=0.4)
plt.xlim(-0.2, 1.2);<jupyter_output><empty_output><jupyter_text>## Exercise: Varying intercept and slope model
The most general model allows both the intercept and slope to vary by county:
$$y_i = \alpha_{j[i]} + \beta_{j[i]} x_{i} + \epsilon_i$$
Combine these two models to create a version with both slope and intercept varying.<jupyter_code>with Model() as varying_intercept_slope:
# Write your model here
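    # One possible sketch (not the official solution; left commented out so the exercise stays
    # open, uncomment these lines to run the sampling and plotting cells below):
    # μ_a = Normal('μ_a', mu=0., sigma=10)
    # σ_a = HalfCauchy('σ_a', 5)
    # μ_b = Normal('μ_b', mu=0., sigma=10)
    # σ_b = HalfCauchy('σ_b', 5)
    # a = Normal('a', mu=μ_a, sigma=σ_a, shape=n_counties)   # random intercepts
    # b = Normal('b', mu=μ_b, sigma=σ_b, shape=n_counties)   # random slopes
    # σ_y = HalfCauchy('σ_y', 5)
    # y_hat = a[county] + b[county] * floor_measure
    # y_like = Normal('y_like', mu=y_hat, sigma=σ_y, observed=log_radon)
    pass  # placeholder so the empty model block is still valid Python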
with varying_intercept_slope:
varying_intercept_slope_trace = sample(1000, tune=1000, cores=2, random_seed=RANDOM_SEED)<jupyter_output><empty_output><jupyter_text>Forest plot of slopes and intercepts:<jupyter_code>plot_forest(varying_intercept_slope_trace, var_names=['a','b']);
xvals = np.arange(2)
b = varying_intercept_slope_trace['a'].mean(axis=0)
m = varying_intercept_slope_trace['b'].mean(axis=0)
for bi,mi in zip(b,m):
plt.plot(xvals, mi*xvals + bi, 'bo-', alpha=0.4)
plt.xlim(-0.1, 1.1);<jupyter_output><empty_output><jupyter_text>## Adding group-level predictors
A primary strength of multilevel models is the ability to handle predictors on multiple levels simultaneously. If we consider the varying-intercepts model above:
$$y_i = \alpha_{j[i]} + \beta x_{i} + \epsilon_i$$
we may, instead of a simple random effect to describe variation in the expected radon value, specify another regression model with a county-level covariate. Here, we use the county uranium reading $u_j$, which is thought to be related to radon levels:
$$\alpha_j = \gamma_0 + \gamma_1 u_j + \zeta_j$$
$$\zeta_j \sim N(0, \sigma_{\alpha}^2)$$
Thus, we are now incorporating a house-level predictor (floor or basement) as well as a county-level predictor (uranium).
Note that the model has both indicator variables for each county, plus a county-level covariate. In classical regression, this would result in collinearity. In a multilevel model, the partial pooling of the intercepts towards the expected value of the group-level linear model avoids this.
Group-level predictors also serve to reduce group-level variation $\sigma_{\alpha}$. An important implication of this is that the group-level estimate induces stronger pooling.<jupyter_code>from pymc3 import Deterministic
with Model() as hierarchical_intercept:
# Priors
σ_a = HalfCauchy('σ_a', 5)
# County uranium model
γ_0 = Normal('γ_0', mu=0., sigma=10)
γ_1 = Normal('γ_1', mu=0., sigma=10)
# Uranium model for intercept
μ_a = γ_0 + γ_1*log_uranium
# County variation not explained by uranium
ϵ_a = Normal('ϵ_a', mu=0, sigma=1, shape=n_counties)
a = Deterministic('a', μ_a + σ_a*ϵ_a[county])
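    # (Non-centered parameterization: county effects are written as scaled standard-normal
    # offsets, which typically lets the sampler explore hierarchical posteriors more efficiently.)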
# Common slope
b = Normal('b', mu=0., sigma=10)
# Model error
σ_y = Uniform('σ_y', lower=0, upper=100)
# Expected value
y_hat = a + b * floor_measure
# Data likelihood
y_like = Normal('y_like', mu=y_hat, sigma=σ_y, observed=log_radon)
with hierarchical_intercept:
hierarchical_intercept_trace = sample(1000, tune=1000, cores=2, random_seed=RANDOM_SEED)
a_means = hierarchical_intercept_trace['a'].mean(axis=0)
plt.scatter(log_uranium, a_means)
g0 = hierarchical_intercept_trace['γ_0'].mean()
g1 = hierarchical_intercept_trace['γ_1'].mean()
xvals = np.linspace(-1, 0.8)
plt.plot(xvals, g0+g1*xvals, 'k--')
plt.xlim(-1, 0.8)
a_se = hierarchical_intercept_trace['a'].std(axis=0)
for ui, m, se in zip(log_uranium, a_means, a_se):
plt.plot([ui,ui], [m-se, m+se], 'b-')
plt.xlabel('County-level uranium'); plt.ylabel('Intercept estimate');<jupyter_output><empty_output><jupyter_text>The standard errors on the intercepts are narrower than for the partial-pooling model without a county-level covariate.### Correlations among levels
In some instances, having predictors at multiple levels can reveal correlation between individual-level variables and group residuals. We can account for this by including the average of the individual predictors as a covariate in the model for the group intercept.
$$\alpha_j = \gamma_0 + \gamma_1 u_j + \gamma_2 \bar{x} + \zeta_j$$
These are broadly referred to as ***contextual effects***.<jupyter_code># Create new variable for mean of floor across counties
xbar = radon_data.groupby('county')['floor'].mean().rename(county_lookup).values
with Model() as contextual_effect:
# Priors
σ_a = HalfCauchy('σ_a', 5)
# County uranium model for slope
γ = Normal('γ', mu=0., sigma=10, shape=3)
# Uranium model for intercept
μ_a = Deterministic('μ_a', γ[0] + γ[1]*log_uranium + γ[2]*xbar[county])
# County variation not explained by uranium
ϵ_a = Normal('ϵ_a', mu=0, sigma=1, shape=n_counties)
a = Deterministic('a', μ_a + σ_a*ϵ_a[county])
# Common slope
b = Normal('b', mu=0., sigma=10)
# Model error
σ_y = Uniform('σ_y', lower=0, upper=100)
# Expected value
y_hat = a + b * floor_measure
# Data likelihood
y_like = Normal('y_like', mu=y_hat, sigma=σ_y, observed=log_radon)
with contextual_effect:
contextual_effect_trace = sample(1000, tune=1000, cores=2, random_seed=RANDOM_SEED)
plot_forest(contextual_effect_trace, var_names=['γ']);
summary(contextual_effect_trace, var_names=['γ'])<jupyter_output><empty_output><jupyter_text>So, we might infer from this that counties with higher proportions of houses without basements tend to have higher baseline levels of radon. Perhaps this is related to the soil type, which in turn might influence what type of structures are built.### Prediction
Gelman (2006) used cross-validation tests to check the prediction error of the unpooled, pooled, and partially-pooled models
**root mean squared cross-validation prediction errors**:
* unpooled = 0.86
* pooled = 0.84
* multilevel = 0.79
There are two types of prediction that can be made in a multilevel model:
1. a new individual within an existing group
2. a new individual within a new group
For example, if we wanted to make a prediction for a new house with no basement in St. Louis county, we just need to sample from the radon model with the appropriate intercept.<jupyter_code>county_lookup['ST LOUIS']<jupyter_output><empty_output><jupyter_text>That is,
$$\tilde{y}_i \sim N(\alpha_{69} + \beta (x_i=1), \sigma_y^2)$$
This is a matter of adding a single additional line in PyMC:<jupyter_code>with Model() as contextual_pred:
# Priors
σ_a = HalfCauchy('σ_a', 5)
# County uranium model for slope
γ = Normal('γ', mu=0., sigma=10, shape=3)
# Uranium model for intercept
μ_a = Deterministic('μ_a', γ[0] + γ[1]*log_uranium + γ[2]*xbar[county])
# County variation not explained by uranium
ϵ_a = Normal('ϵ_a', mu=0, sigma=1, shape=n_counties)
a = Deterministic('a', μ_a + σ_a*ϵ_a[county])
# Common slope
b = Normal('b', mu=0., sigma=10)
# Model error
σ_y = Uniform('σ_y', lower=0, upper=100)
# Expected value
y_hat = a + b * floor_measure
# Data likelihood
y_like = Normal('y_like', mu=y_hat, sigma=σ_y, observed=log_radon)
# St Louis county prediction
stl_pred = Normal('stl_pred', mu=a[69] + b, sigma=σ_y)
with contextual_pred:
contextual_pred_trace = sample(2000, tune=1000, cores=2, random_seed=RANDOM_SEED)
plot_posterior(contextual_pred_trace, var_names=['stl_pred']);<jupyter_output><empty_output><jupyter_text>## Exercise
How would we make a prediction from a new county (*e.g.* one not included in this dataset)?<jupyter_code># Write your answer here<jupyter_output><empty_output>
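<jupyter_text>One possible sketch (not the official solution): a new county has no estimated intercept of its own, so its intercept must be drawn from the group-level model, which means supplying that county's uranium level and basement proportion by hand.<jupyter_code># Hedged sketch: u_new, xbar_new and floor_new are hypothetical inputs for the new county
# with contextual_pred:
#     ϵ_new = Normal('ϵ_new', mu=0, sigma=1)
#     a_new = γ[0] + γ[1]*u_new + γ[2]*xbar_new + σ_a*ϵ_new
#     y_new = Normal('y_new', mu=a_new + b*floor_new, sigma=σ_y)<jupyter_output><empty_output>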
| permissive | /notebooks/Section1_2-Hierarchical_Models.ipynb | Robert-Muil/bayes_course_july2020 | 20 |
<jupyter_start><jupyter_text># Requesting Argo BGC data from Ifremer erddap, expert mode
Using the expert mode, you have access to all fields retrieved from the erddap, including all QC variables, without any data-mode filtering.
***
Script prepared by [Guillaume Maze](http://github.com/gmaze) (Mar. 2020)<jupyter_code>import sys, os
import numpy as np
import xarray as xr
try:
import argopy
except ModuleNotFoundError:
!pip install git+http://github.com/euroargodev/argopy.git@master
import argopy
print("argopy:", argopy.__version__)
from argopy import DataFetcher as ArgoDataFetcher
#
import matplotlib.pyplot as plt
import matplotlib as mpl
import matplotlib.cm as cm
import cmocean
from pandas.plotting import register_matplotlib_converters
register_matplotlib_converters()
%matplotlib inline
# Useful colormaps and colorbar makers:
qcmap = mpl.colors.ListedColormap(['#000000',
'#31FC03',
'#ADFC03',
'#FCBA03',
'#FC1C03',
'#324CA8',
'#000000',
'#000000',
'#B22CC9',
'#000000'])
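# Argo QC flag convention (one colour per flag): 0 = no QC performed, 1 = good, 2 = probably good,
# 3 = probably bad, 4 = bad, 5 = value changed, 8 = interpolated value, 9 = missing value.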
def colorbar_qc(cmap, **kwargs):
"""Adjust colorbar ticks with discrete colors for QC flags"""
ncolors = 10
mappable = cm.ScalarMappable(cmap=cmap)
mappable.set_array([])
mappable.set_clim(-0.5, ncolors+0.5)
colorbar = plt.colorbar(mappable, **kwargs)
colorbar.set_ticks(np.linspace(0, ncolors, ncolors))
colorbar.set_ticklabels(range(ncolors))
return colorbar
from sklearn import preprocessing
le = preprocessing.LabelEncoder()
le.fit(['A','D','R'])
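# Argo DATA_MODE codes (alphabetical order matches the encoder): 'A' = real-time adjusted,
# 'D' = delayed mode, 'R' = real-time.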
dmode_map = mpl.colors.ListedColormap(['#FCBA03','#31FC03','#FF0000'])
def colorbar_dmode(cmap=dmode_map, **kwargs):
"""Adjust colorbar ticks with discrete colors for DATA MODE"""
ncolors = 3
mappable = cm.ScalarMappable(cmap=cmap)
mappable.set_array([])
mappable.set_clim(-0.5, ncolors+0.5)
colorbar = plt.colorbar(mappable, **kwargs)
colorbar.set_ticks(np.linspace(0, ncolors, ncolors))
colorbar.set_ticklabels(le.classes_)
return colorbar<jupyter_output>argopy: 0.1.1
<jupyter_text># Create the Argo data loader instance
If we want to retrieve data without post-processing, we need to specify the ``mode`` option to ``expert`` when creating the data loader instance.<jupyter_code># argo_loader = ArgoDataFetcher(mode='expert')
# argo_loader = ArgoDataFetcher(mode='expert', cachedir='tmp')
argo_loader = ArgoDataFetcher(mode='expert', ds='bgc')
argo_loader<jupyter_output><empty_output><jupyter_text># Example of data fetching for a specific region (Upper Equatorial East-Pacific)
**NOTE**: if the following cell raises an "Internal Server Error", simply re-run it (this is due to a long (>1 min) delay in preparing the response).<jupyter_code># box = [-120., -85., -10, 10, 0, 1000, '2018-01-01','2018-12-31']
# box = [-120., -85., -10, 10, 0, 500, '2019-01-01','2019-12-31']
box = [-120., -85., -10, 10, 0, 500, '2017-01-01','2017-12-31']
# box = [-120., -85., -30, -10, 0, 500, '2019-01-01','2019-12-31']
ds = argo_loader.region(box).to_xarray()
ds<jupyter_output><empty_output><jupyter_text>## QC figure with unfiltered data<jupyter_code>fig, ax = plt.subplots(nrows=3, ncols=2, figsize=(25,20))
ax = np.array(ax).flatten()
fig.delaxes(ax[-1])
ix = 0
sc = ax[ix].scatter(ds['LONGITUDE'], ds['LATITUDE'], c=ds['POSITION_QC'], vmin=0, vmax=9, cmap=qcmap)
colorbar_qc(qcmap, ax=ax[ix])
ax[ix].grid()
ax[ix].set_xlabel('LONGITUDE')
ax[ix].set_ylabel('LATITUDE')
ax[ix].set_title('POSITION_QC')
ix += 1
sc = ax[ix].scatter(ds['TIME'].values, ds['CYCLE_NUMBER'], c=ds['TIME_QC'], vmin=0, vmax=9, cmap=qcmap)
colorbar_qc(qcmap, ax=ax[ix])
ax[ix].grid()
ax[ix].set_xlabel('TIME')
ax[ix].set_ylabel('Cycle number')
ax[ix].set_title('TIME_QC')
ix += 1
sc = ax[ix].scatter(ds['CYCLE_NUMBER'], ds['PRES'], c=ds['TEMP_QC'], vmin=0, vmax=9, cmap=qcmap)
colorbar_qc(qcmap, ax=ax[ix])
ax[ix].grid()
ax[ix].invert_yaxis()
ax[ix].set_xlabel('CYCLE_NUMBER')
ax[ix].set_ylabel('Pressure')
ax[ix].set_title('TEMP_QC')
ix += 1
sc = ax[ix].scatter(ds['CYCLE_NUMBER'], ds['PRES'], c=ds['PSAL_QC'], vmin=0, vmax=9, cmap=qcmap)
colorbar_qc(qcmap, ax=ax[ix])
ax[ix].grid()
ax[ix].invert_yaxis()
ax[ix].set_xlabel('CYCLE_NUMBER')
ax[ix].set_ylabel('Pressure')
ax[ix].set_title('PSAL_QC')
ix += 1
sc = ax[ix].scatter(ds['CYCLE_NUMBER'], ds['PRES'], c=ds['DOXY_QC'], vmin=0, vmax=9, cmap=qcmap)
colorbar_qc(qcmap, ax=ax[ix])
ax[ix].grid()
ax[ix].invert_yaxis()
ax[ix].set_xlabel('CYCLE_NUMBER')
ax[ix].set_ylabel('Pressure')
ax[ix].set_title('DOXY_QC')
fig.suptitle("RAW variables QC\n%s" % ds.attrs['Fetched_constraints']);<jupyter_output><empty_output><jupyter_text># Filter data according to data mode
But this is (nearly) the raw output of the request. It could be useful to simply select the most appropriate variables according to the data mode (Real time, adjusted and delayed mode).
Note that this is done automatically in ``mode='standard'``.<jupyter_code># Inspect data mode
fig, ax = plt.subplots(nrows=1, ncols=1, figsize=(20,5))
ax = np.array(ax).flatten()
sc = ax[0].scatter(ds['TIME'].values, ds['CYCLE_NUMBER'], c=le.transform(ds['DATA_MODE']),
vmin=0, vmax=2, cmap=dmode_map)
colorbar_dmode(label='DATA_MODE', ax=ax[0])
ax[0].set_xlabel('TIME')
ax[0].set_ylabel('Cycle number')
ax[0].grid()
ax[0].set_title(ds['DATA_MODE'].attrs['convention']);<jupyter_output><empty_output><jupyter_text>## Filter data according to data mode<jupyter_code>ds_filtered = ds.argo.filter_data_mode()
ds_filtered<jupyter_output><empty_output><jupyter_text>## QC figure with appropriate variables<jupyter_code>fig, ax = plt.subplots(nrows=2, ncols=2, figsize=(20,10))
ax = np.array(ax).flatten()
ix = 0
sc = ax[ix].scatter(ds_filtered['LONGITUDE'], ds_filtered['LATITUDE'], c=ds_filtered['POSITION_QC'], vmin=0, vmax=9, cmap=qcmap)
colorbar_qc(qcmap, ax=ax[ix])
ax[ix].grid()
ax[ix].set_xlabel('LONGITUDE')
ax[ix].set_ylabel('LATITUDE')
ax[ix].set_title('POSITION_QC')
ix += 1
sc = ax[ix].scatter(ds_filtered['TIME'].values, ds_filtered['CYCLE_NUMBER'], c=ds_filtered['TIME_QC'], vmin=0, vmax=9, cmap=qcmap)
colorbar_qc(qcmap, ax=ax[ix])
ax[ix].grid()
ax[ix].set_xlabel('TIME')
ax[ix].set_ylabel('Cycle number')
ax[ix].set_title('TIME_QC')
ix += 1
sc = ax[ix].scatter(ds_filtered['CYCLE_NUMBER'], ds_filtered['PRES'], c=ds_filtered['TEMP_QC'], vmin=0, vmax=9, cmap=qcmap)
colorbar_qc(qcmap, ax=ax[ix])
ax[ix].grid()
ax[ix].invert_yaxis()
ax[ix].set_xlabel('CYCLE_NUMBER')
ax[ix].set_ylabel('Pressure')
ax[ix].set_title('TEMP_QC')
ix += 1
sc = ax[ix].scatter(ds_filtered['CYCLE_NUMBER'], ds_filtered['PRES'], c=ds_filtered['PSAL_QC'], vmin=0, vmax=9, cmap=qcmap)
colorbar_qc(qcmap, ax=ax[ix])
ax[ix].grid()
ax[ix].invert_yaxis()
ax[ix].set_xlabel('CYCLE_NUMBER')
ax[ix].set_ylabel('Pressure')
ax[ix].set_title('PSAL_QC')
fig.suptitle("QC flags of Data mode filtered variables\n%s" % ds.attrs['Fetched_constraints']);<jupyter_output><empty_output><jupyter_text># Example of data fetching for a specific float<jupyter_code>float_fetcher = argo_loader.float(3901530)
# float_fetcher = argo_loader.float(6902746)
# float_fetcher = argo_loader.region([-180,180,0,90,0,1000])
# float_fetcher = argo_loader.float(2903005) # Float with a RBRargo sensor
float_fetcher
ds_unfiltered = float_fetcher.to_xarray()
ds = ds_unfiltered.argo.filter_data_mode()
ds
# Get basic information about the float:
print('This float has performed: %i profiles' % len(np.unique(ds['CYCLE_NUMBER'])))
print('This float operated between %s and %s' % (ds['TIME'].min().values, ds['TIME'].max().values))
print('This float profiles range from %0.1fdb to %0.1fdb' % (ds['PRES'].min(), ds['PRES'].max() ))<jupyter_output><empty_output><jupyter_text>## Plot trajectory<jupyter_code>this = ds.reset_coords().groupby('CYCLE_NUMBER').min()
plt.plot(this['LONGITUDE'], this['LATITUDE'], '-', color=[0.7]*3, zorder=0)
plt.scatter(this['LONGITUDE'], this['LATITUDE'], c=this['CYCLE_NUMBER'])
plt.xlabel('LONGITUDE')
plt.ylabel('LATITUDE')
plt.gca().grid()
plt.colorbar(label='CYCLE_NUMBER')
plt.title(ds.attrs['Fetched_constraints'])
plt.show()<jupyter_output><empty_output><jupyter_text>## QC flags for appropriate variables<jupyter_code>fig, ax = plt.subplots(nrows=2, ncols=2, figsize=(20,10))
ax = np.array(ax).flatten()
ix = 0
ax[ix].plot(ds['LONGITUDE'], ds['LATITUDE'], '-', color=[0.7]*3, zorder=0)
sc = ax[ix].scatter(ds['LONGITUDE'], ds['LATITUDE'], c=ds['POSITION_QC'], vmin=0, vmax=9, cmap=qcmap)
colorbar_qc(qcmap, ax=ax[ix])
ax[ix].grid()
ax[ix].set_xlabel('LONGITUDE')
ax[ix].set_ylabel('LATITUDE')
ax[ix].set_title('POSITION_QC')
ix += 1
sc = ax[ix].scatter(ds['TIME'].values, ds['CYCLE_NUMBER'], c=ds['TIME_QC'], vmin=0, vmax=9, cmap=qcmap)
colorbar_qc(qcmap, ax=ax[ix])
ax[ix].grid()
ax[ix].set_xlabel('TIME')
ax[ix].set_ylabel('Cycle number')
ax[ix].set_title('TIME_QC')
ix += 1
sc = ax[ix].scatter(ds['CYCLE_NUMBER'], ds['PRES'], c=ds['TEMP_QC'], vmin=0, vmax=9, cmap=qcmap)
colorbar_qc(qcmap, ax=ax[ix])
ax[ix].grid()
ax[ix].invert_yaxis()
ax[ix].set_xlabel('CYCLE_NUMBER')
ax[ix].set_ylabel('Pressure')
ax[ix].set_title('TEMP_QC')
ix += 1
sc = ax[ix].scatter(ds['CYCLE_NUMBER'], ds['PRES'], c=ds['PSAL_QC'], vmin=0, vmax=9, cmap=qcmap)
colorbar_qc(qcmap, ax=ax[ix])
ax[ix].grid()
ax[ix].invert_yaxis()
ax[ix].set_xlabel('CYCLE_NUMBER')
ax[ix].set_ylabel('Pressure')
ax[ix].set_title('PSAL_QC')
fig.suptitle("Data mode filtered variables QC\n%s" % ds.attrs['Fetched_constraints']);<jupyter_output><empty_output><jupyter_text>## DATA MODE<jupyter_code>fig, ax = plt.subplots(nrows=1, ncols=1, figsize=(10,5))
ax = np.array(ax).flatten()
sc = ax[0].scatter(ds['TIME'].values, ds['CYCLE_NUMBER'], c=le.transform(ds['DATA_MODE']),
vmin=0, vmax=2, cmap=dmode_map)
colorbar_dmode(label='DATA_MODE', ax=ax[0])
ax[0].set_ylabel('CYCLE_NUMBER')
ax[0].set_xlabel('TIME')
ax[0].grid()
ax[0].set_title(ds['DATA_MODE'].attrs['convention']);<jupyter_output><empty_output><jupyter_text>## Hovmoller<jupyter_code>fig, ax = plt.subplots(nrows=3, ncols=1, figsize=(20,10), sharex=True, sharey=True)
ax = np.array(ax).flatten()
ix = 0
sc = ax[ix].scatter(ds['CYCLE_NUMBER'], ds['PRES'], c=ds['TEMP'], cmap=cmocean.cm.thermal)
plt.colorbar(sc, ax=ax[ix], label=ds['TEMP'].attrs['units'])
ax[ix].grid()
ax[ix].invert_yaxis()
ax[ix].set_ylabel("%s [%s]" % (ds['PRES'].attrs['long_name'], ds['PRES'].attrs['units']))
ax[ix].set_title(ds['TEMP'].attrs['long_name'])
ix += 1
sc = ax[ix].scatter(ds['CYCLE_NUMBER'], ds['PRES'], c=ds['PSAL'], cmap=cmocean.cm.haline)
plt.colorbar(sc, ax=ax[ix], label=ds['PSAL'].attrs['units'])
ax[ix].grid()
ax[ix].set_xlabel(ds['CYCLE_NUMBER'].attrs['long_name'])
ax[ix].set_ylabel("%s [%s]" % (ds['PRES'].attrs['long_name'], ds['PRES'].attrs['units']))
ax[ix].set_title(ds['PSAL'].attrs['long_name']);
ix += 1
sc = ax[ix].scatter(ds['CYCLE_NUMBER'], ds['PRES'], c=ds['DOXY'], cmap=cmocean.cm.algae)
plt.colorbar(sc, ax=ax[ix], label=ds['DOXY'].attrs['units'])
ax[ix].grid()
ax[ix].set_xlabel(ds['CYCLE_NUMBER'].attrs['long_name'])
ax[ix].set_ylabel("%s [%s]" % (ds['PRES'].attrs['long_name'], ds['PRES'].attrs['units']))
ax[ix].set_title(ds['DOXY'].attrs['long_name']);
fig.suptitle("%s\n(all data, no QC filter)" % ds.attrs['Fetched_constraints'], fontsize=14);
fig, ax = plt.subplots(nrows=2, ncols=2, figsize=(20,10), sharex=True, sharey=True)
ax = np.array(ax).flatten()
ix = 0
sc = ax[ix].scatter(ds['CYCLE_NUMBER'], ds['PRES'], c=ds['TEMP'], cmap=cmocean.cm.thermal)
plt.colorbar(sc, ax=ax[ix], label=ds['TEMP'].attrs['units'])
ax[ix].invert_yaxis()
ax[ix].set_ylabel("%s [%s]" % (ds['PRES'].attrs['long_name'], ds['PRES'].attrs['units']))
ax[ix].set_title(ds['TEMP'].attrs['long_name'])
ix += 1
sc = ax[ix].scatter(ds['CYCLE_NUMBER'], ds['PRES'], c=ds['TEMP_ERROR'], cmap=cmocean.cm.amp)
plt.colorbar(sc, ax=ax[ix], label=ds['TEMP_ERROR'].attrs['units'])
ax[ix].set_ylabel("%s [%s]" % (ds['PRES'].attrs['long_name'], ds['PRES'].attrs['units']))
ax[ix].set_title(ds['TEMP_ERROR'].attrs['long_name'])
ix += 1
sc = ax[ix].scatter(ds['CYCLE_NUMBER'], ds['PRES'], c=ds['PSAL'], cmap=cmocean.cm.haline)
plt.colorbar(sc, ax=ax[ix], label=ds['PSAL'].attrs['units'])
ax[ix].set_xlabel(ds['CYCLE_NUMBER'].attrs['long_name'])
ax[ix].set_ylabel("%s [%s]" % (ds['PRES'].attrs['long_name'], ds['PRES'].attrs['units']))
ax[ix].set_title(ds['PSAL'].attrs['long_name']);
ix += 1
sc = ax[ix].scatter(ds['CYCLE_NUMBER'], ds['PRES'], c=ds['PSAL_ERROR'], cmap=cmocean.cm.amp)
plt.colorbar(sc, ax=ax[ix], label=ds['PSAL_ERROR'].attrs['units'])
ax[ix].set_xlabel(ds['CYCLE_NUMBER'].attrs['long_name'])
ax[ix].set_ylabel("%s [%s]" % (ds['PRES'].attrs['long_name'], ds['PRES'].attrs['units']))
ax[ix].set_title(ds['PSAL_ERROR'].attrs['long_name']);<jupyter_output><empty_output><jupyter_text>## Super-imposed profiles<jupyter_code>fig, ax = plt.subplots(nrows=1, ncols=2, figsize=(20,10), sharey=True)
ax = np.array(ax).flatten()
ix = 0
sc = ax[ix].scatter(ds['TEMP'], ds['PRES'], c=ds['CYCLE_NUMBER'])
ax[ix].grid()
ax[ix].invert_yaxis()
ax[ix].set_xlabel(ds['TEMP'].attrs['units'])
ax[ix].set_ylabel("%s [%s]" % (ds['PRES'].attrs['long_name'], ds['PRES'].attrs['units']))
ax[ix].set_title(ds['TEMP'].attrs['long_name'])
ix += 1
sc = ax[ix].scatter(ds['PSAL'], ds['PRES'], c=ds['CYCLE_NUMBER'])
plt.colorbar(sc, ax=ax[ix], label=ds['CYCLE_NUMBER'].attrs['long_name'])
ax[ix].grid()
ax[ix].set_xlabel(ds['PSAL'].attrs['units'])
ax[ix].set_title(ds['PSAL'].attrs['long_name']);
<jupyter_output><empty_output><jupyter_text>## T/S diagram<jupyter_code>fig, ax = plt.subplots(nrows=1, ncols=1, figsize=(6,6))
ax = np.array(ax).flatten()
ix = 0
sc = ax[ix].scatter(ds['PSAL'], ds['TEMP'], c=ds['CYCLE_NUMBER'])
ax[ix].grid()
ax[ix].set_xlabel("%s [%s]" % (ds['PSAL'].attrs['long_name'], ds['PSAL'].attrs['units']))
ax[ix].set_ylabel("%s [%s]" % (ds['TEMP'].attrs['long_name'], ds['TEMP'].attrs['units']))
plt.title(ds.attrs['Fetched_constraints']);<jupyter_output><empty_output><jupyter_text># Filter data according to QC flags<jupyter_code># ds_ok = float_fetcher.fetcher.filter_qc(ds) # By default, drop all points with QC not equal to 1 or 2 in all variables
ds_ok = float_fetcher.fetcher.filter_qc(ds, mode='any') # Keep points where at least one variable has a QC flag of 1 or 2
ds_ok
fig, ax = plt.subplots(nrows=2, ncols=2, figsize=(20,10), sharex=True, sharey=True)
ax = np.array(ax).flatten()
ix = 0
sc = ax[ix].scatter(ds_ok['CYCLE_NUMBER'], ds_ok['PRES'], c=ds_ok['TEMP'], cmap=cmocean.cm.thermal)
plt.colorbar(sc, ax=ax[ix], label=ds_ok['TEMP'].attrs['units'])
ax[ix].invert_yaxis()
ax[ix].set_ylabel("%s [%s]" % (ds_ok['PRES'].attrs['long_name'], ds_ok['PRES'].attrs['units']))
ax[ix].set_title(ds_ok['TEMP'].attrs['long_name'])
ix += 1
sc = ax[ix].scatter(ds_ok['CYCLE_NUMBER'], ds_ok['PRES'], c=ds_ok['TEMP_ERROR'], cmap=cmocean.cm.amp)
plt.colorbar(sc, ax=ax[ix], label=ds_ok['TEMP_ERROR'].attrs['units'])
ax[ix].set_ylabel("%s [%s]" % (ds_ok['PRES'].attrs['long_name'], ds_ok['PRES'].attrs['units']))
ax[ix].set_title(ds_ok['TEMP_ERROR'].attrs['long_name'])
ix += 1
sc = ax[ix].scatter(ds_ok['CYCLE_NUMBER'], ds_ok['PRES'], c=ds_ok['PSAL'], cmap=cmocean.cm.haline)
plt.colorbar(sc, ax=ax[ix], label=ds_ok['PSAL'].attrs['units'])
ax[ix].set_xlabel(ds_ok['CYCLE_NUMBER'].attrs['long_name'])
ax[ix].set_ylabel("%s [%s]" % (ds_ok['PRES'].attrs['long_name'], ds_ok['PRES'].attrs['units']))
ax[ix].set_title(ds_ok['PSAL'].attrs['long_name']);
ix += 1
sc = ax[ix].scatter(ds_ok['CYCLE_NUMBER'], ds_ok['PRES'], c=ds_ok['PSAL_ERROR'], cmap=cmocean.cm.amp)
plt.colorbar(sc, ax=ax[ix], label=ds_ok['PSAL_ERROR'].attrs['units'])
ax[ix].set_xlabel(ds_ok['CYCLE_NUMBER'].attrs['long_name'])
ax[ix].set_ylabel("%s [%s]" % (ds_ok['PRES'].attrs['long_name'], ds_ok['PRES'].attrs['units']))
ax[ix].set_title(ds_ok['PSAL_ERROR'].attrs['long_name']);
fig.suptitle("Data mode and QC filtered variables\n%s" % ds.attrs['Fetched_constraints']);
fig, ax = plt.subplots(nrows=1, ncols=2, figsize=(20,10), sharey=True)
ax = np.array(ax).flatten()
ix = 0
sc = ax[ix].scatter(ds_ok['TEMP'], ds_ok['PRES'], c=ds_ok['CYCLE_NUMBER'])
ax[ix].grid()
ax[ix].invert_yaxis()
ax[ix].set_xlabel(ds_ok['TEMP'].attrs['units'])
ax[ix].set_ylabel("%s [%s]" % (ds_ok['PRES'].attrs['long_name'], ds_ok['PRES'].attrs['units']))
ax[ix].set_title(ds_ok['TEMP'].attrs['long_name'])
ix += 1
sc = ax[ix].scatter(ds_ok['PSAL'], ds_ok['PRES'], c=ds_ok['CYCLE_NUMBER'])
plt.colorbar(sc, ax=ax[ix], label=ds_ok['CYCLE_NUMBER'].attrs['long_name'])
ax[ix].grid()
ax[ix].set_xlabel(ds_ok['PSAL'].attrs['units'])
ax[ix].set_title(ds_ok['PSAL'].attrs['long_name']);
fig.suptitle("Data mode and QC filtered variables\n%s" % ds.attrs['Fetched_constraints']);<jupyter_output><empty_output>
| non_permissive | /examples/Example-Expert-02.ipynb | euroargodev/erddap_usecases | 15 |
<jupyter_start><jupyter_text># Cleaning the agricultural land data from Gapminder
As required by the last mini exercise of Exercise 4, download a dataset of interest from [Gapminder](https://www.gapminder.org/data/) and draw 2-5 plots using the basic methods learned in [lesson 5: exploring two variables](https://classroom.udacity.com/nanodegrees/nd002-cn-advanced-vip/parts/7af8e761-54e5-4c80-a284-e402c30a791b).
This part covers acquiring and cleaning the data; the analysis is completed in the R part of this folder.
The overall cleaning workflow is: gather data, assess data, clean data.
It is worth noting that, since this is a small exercise, I do not plan to spend too much time on it, so I will not assess and resolve every issue exhaustively; the cleaning only needs to be good enough for its intended use.
## Analysis goal
After a quick look at the data, the main focus is the relationship between the share of agricultural land and year.
Three plots are planned:
(1) A scatter plot.
(2)
## Gather the data
Download the global agricultural land data from the website.
**Data name:** Agricultural land (% of land area);
**Data format:** Excel;
**Original data source:** World Bank;
**Data description:** the share of agricultural land area for each country from the 1960s to 2009.<jupyter_code>import pandas as pd
# Placeholder filename: point this at the dataset downloaded from Gapminder
# (use pd.read_excel for the raw Excel sheet, or convert it to CSV first).
df = pd.read_csv('agricultural_land_percent_of_land_area.csv')<jupyter_output><empty_output>
| no_license | /R basic/lesson4/.ipynb_checkpoints/agriculture_land_data_clean-checkpoint.ipynb | Tanghuaizhi/learn_data_analysis | 1 |
<jupyter_start><jupyter_text># Unsupervised Anomaly Detection Brain-MRI
Jupyter notebook for running all the experiments from our [paper](https://arxiv.org/abs/2004.03271).
Hyperparameters may have to be adjusted!## Preparation
### Imports and installation of the required libraries
<jupyter_code># from google.colab import drive
# from google.colab import files
import os, glob
! pip install pynrrd
! pip install SimpleITK
! pip install bunch
! pip install nibabel
! pip install medpy
! pip install opencv-python<jupyter_output><empty_output><jupyter_text>### Get Code
Clone Code from github.com<jupyter_code># ! git clone https://github.com/StefanDenn3r/Unsupervised_Anomaly_Detection_Brain_MRI
# ! cd Unsupervised_Anomaly_Detection_Brain_MRI/<jupyter_output><empty_output><jupyter_text>### Google Drive mount
Mounting Google Drive to access datasets.<jupyter_code># drive.mount('gdrive')<jupyter_output><empty_output><jupyter_text>Check Directory<jupyter_code>os.getcwd()<jupyter_output><empty_output><jupyter_text>### Tensorboard and tunneling
Install ngrok for tunneling <jupyter_code>if os.path.exists("ngrok-stable-linux-amd64.zip"):
os.remove("ngrok-stable-linux-amd64.zip")
if os.path.exists("ngrok"):
os.remove("ngrok")
!wget https://bin.equinox.io/c/4VmDzA7iaHb/ngrok-stable-linux-amd64.zip
!unzip ngrok-stable-linux-amd64.zip<jupyter_output>'wget' is not recognized as an internal or external command,
operable program or batch file.
'unzip' is not recognized as an internal or external command,
operable program or batch file.
<jupyter_text>Start tensorboard and forward port with ngrok<jupyter_code>LOG_DIR = 'logs/'
get_ipython().system_raw(
'tensorboard --logdir {} --host 0.0.0.0 --port 6006 &'
.format(LOG_DIR)
)
get_ipython().system_raw('./ngrok http 6006 &')<jupyter_output><empty_output><jupyter_text>Extract ngrok url for accessing tensorboard
**Attention**: Sometimes it throws an error like this:
```
json.decoder.JSONDecodeError: Expecting value: line 1 column 1 (char 0)
```
If this is the case the easiest way to solve this issue is to delete the ngrok*.zip and ngrok from the Google Drive folder and install them again.
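If the `curl` one-liner keeps failing, a slightly more defensive variant can help (a sketch only; it assumes nothing beyond ngrok's local inspection API at `http://localhost:4040/api/tunnels`, which the command below also uses). It retries a few times and returns `None` instead of raising a bare `JSONDecodeError`:

```python
import json
import time
import urllib.error
import urllib.request

def get_ngrok_url(retries=5, delay=2.0):
    """Return the first public ngrok tunnel URL, or None if ngrok is not up yet."""
    for _ in range(retries):
        try:
            with urllib.request.urlopen("http://localhost:4040/api/tunnels") as resp:
                tunnels = json.load(resp).get("tunnels", [])
            if tunnels:
                return tunnels[0]["public_url"]
        except (urllib.error.URLError, json.JSONDecodeError):
            pass  # ngrok not ready yet -- wait and retry
        time.sleep(delay)
    return None

print(get_ngrok_url())
```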
<jupyter_code>! curl -s http://localhost:4040/api/tunnels | python3 -c "import sys, json; print(json.load(sys.stdin)['tunnels'][0]['public_url'])"
!pip3 install tensorflow==1.15<jupyter_output><empty_output><jupyter_text>## Training
### Imports<jupyter_code># %tensorflow_version 1.x
import tensorflow.compat.v1 as tf
tf.disable_v2_behavior()
import json
import os
# import tensorflow as tf  # (removed) re-importing here would shadow the tf.compat.v1 alias above and break the tf.Session()/reset_default_graph() calls below
import cv2
from datetime import datetime
from utils.default_config_setup import get_config, get_options, get_datasets
from trainers.AE import AE
from trainers.VAE import VAE
from trainers.CE import CE
from trainers.ceVAE import ceVAE
from trainers.VAE_You import VAE_You
from trainers.GMVAE import GMVAE
from trainers.GMVAE_spatial import GMVAE_spatial
from trainers.fAnoGAN import fAnoGAN
from trainers.ConstrainedAAE import ConstrainedAAE
from trainers.ConstrainedAE import ConstrainedAE
from trainers.AnoVAEGAN import AnoVAEGAN
from models import autoencoder, variational_autoencoder, context_encoder_variational_autoencoder, variational_autoencoder_Zimmerer, context_encoder_variational_autoencoder, context_encoder_variational_autoencoder_Zimmerer, gaussian_mixture_variational_autoencoder_You, gaussian_mixture_variational_autoencoder_spatial, gaussian_mixture_variational_autoencoder, fanogan, fanogan_schlegl, constrained_autoencoder, constrained_adversarial_autoencoder, constrained_adversarial_autoencoder_Chen, anovaegan
from utils import Evaluation
from utils.default_config_setup import get_config, get_options, get_datasets, Dataset
<jupyter_output><empty_output><jupyter_text>Set paths to datasets and where to save checkpoints and evaluations.<jupyter_code>def get_CONFIG(timestamp=None):
current_time = datetime.now().strftime('%Y%m%d_%H%M%S')
if timestamp:
current_time=timestamp
dataset_root = "C:/Users/Irfixq/Desktop/P2/Code 1/Unsupervised_Anomaly_Detection_Brain_MRI-master"
save_dir = "C:/Users/Irfixq/Desktop/P2/Code 1/Unsupervised_Anomaly_Detection_Brain_MRI-master"
CONFIG = {
"BRAINWEBDIR": os.path.join(dataset_root, 'Brainweb'),
"MSSEG2008DIR": os.path.join(dataset_root, 'MSSEG2008'),
"MSISBI2015DIR": os.path.join(dataset_root, 'ISBIMSlesionChallenge'),
"MSLUBDIR": os.path.join(dataset_root, 'MSlub'),
"CHECKPOINTDIR": os.path.join(save_dir, 'checkpoints', current_time),
"SAMPLEDIR": os.path.join(save_dir, 'sample_dir', current_time),
}
return CONFIG<jupyter_output><empty_output><jupyter_text>### Manual Training
#### Baseline**AE**<jupyter_code>import tensorflow.compat.v1 as tf
tf.disable_v2_behavior()
dataset = Dataset.BRAINWEB
options = get_options(batchsize=128, learningrate=0.0001, numEpochs=20, zDim=128, outputWidth=128, outputHeight=128, config=get_CONFIG())
options['data']['dir'] = options["globals"][dataset.value]
datasetHC, datasetPC = get_datasets(options, dataset)
config = get_config(trainer=AE, options=options, optimizer='ADAM', intermediateResolutions=[8, 8], dropout_rate=0.1, dataset=datasetHC)
# Create an instance of the model and train it
model = AE(tf.Session(), config, network=autoencoder.autoencoder)
# Train it
model.train(datasetHC)
# Evaluate
Evaluation.evaluate(datasetPC, model, options, description=f"{type(datasetHC).__name__}-{options['threshold']}", epoch=str(options['train']['numEpochs']))
<jupyter_output><empty_output><jupyter_text>**VAE**<jupyter_code>tf.reset_default_graph()
dataset = Dataset.BRAINWEB
options = get_options(batchsize=128, learningrate=0.0001, numEpochs=20, zDim=128, outputWidth=128, outputHeight=128, config=get_CONFIG())
options['data']['dir'] = options["globals"][dataset.value]
datasetHC, datasetPC = get_datasets(options, dataset)
config = get_config(trainer=VAE, options=options, optimizer='ADAM', intermediateResolutions=[8, 8], dropout_rate=0.1, dataset=datasetHC)
# Create an instance of the model and train it
model = VAE(tf.Session(), config, network=variational_autoencoder.variational_autoencoder)
# Train it
model.train(datasetHC)
# Evaluate
Evaluation.evaluate(datasetPC, model, options, description=f"{type(datasetHC).__name__}-{options['threshold']}", epoch=str(options['train']['numEpochs']))
<jupyter_output><empty_output><jupyter_text>#### ceVAE - Variations
Paper: [Context-encoding Variational Autoencoder for Unsupervised Anomaly Detection](https://arxiv.org/abs/1812.05941)
**CE**<jupyter_code>tf.reset_default_graph()
dataset = Dataset.BRAINWEB  # keep the enum spelling consistent with the other cells
options = get_options(batchsize=128, learningrate=0.0001, numEpochs=20, zDim=128, outputWidth=128, outputHeight=128, config=get_CONFIG())
options['data']['dir'] = options["globals"][dataset.value]
datasetHC, datasetPC = get_datasets(options, dataset)
config = get_config(trainer=CE, options=options, optimizer='ADAM', intermediateResolutions=[8, 8], dropout_rate=0.1, dataset=datasetHC)
# Create an instance of the model and train it
model = CE(tf.Session(), config, network=autoencoder.autoencoder)
# Train it
model.train(datasetHC)
# Evaluate
Evaluation.evaluate(datasetPC, model, options, description=f"{type(datasetHC).__name__}-{options['threshold']}", epoch=str(options['train']['numEpochs']))
<jupyter_output><empty_output><jupyter_text>**ceVAE**<jupyter_code>tf.reset_default_graph()
dataset = Dataset.BRAINWEB  # keep the enum spelling consistent with the other cells
options = get_options(batchsize=128, learningrate=0.0001, numEpochs=20, zDim=128, outputWidth=128, outputHeight=128, config=get_CONFIG())
options['data']['dir'] = options["globals"][dataset.value]
datasetHC, datasetPC = get_datasets(options, dataset)
config = get_config(trainer=ceVAE, options=options, optimizer='ADAM', intermediateResolutions=[8, 8], dropout_rate=0.1, dataset=datasetHC)
config.use_gradient_based_restoration = 0.002
# Create an instance of the model and train it
model = ceVAE(tf.Session(), config, network=context_encoder_variational_autoencoder.context_encoder_variational_autoencoder)
# Train it
model.train(datasetHC)
# Evaluate
Evaluation.evaluate(datasetPC, model, options, description=f"{type(datasetHC).__name__}-{options['threshold']}", epoch=str(options['train']['numEpochs']))
<jupyter_output><empty_output><jupyter_text>**VAE-Zimmerer**<jupyter_code>tf.reset_default_graph()
dataset = Dataset.BRAINWEB
options = get_options(batchsize=64, learningrate=0.0001, numEpochs=20, zDim=128, outputWidth=128, outputHeight=128, config=get_CONFIG())
options['data']['dir'] = options["globals"][dataset.value]
datasetHC, datasetPC = get_datasets(options, dataset)
config = get_config(trainer=VAE, options=options, optimizer='ADAM', intermediateResolutions=[8, 8], dropout_rate=0.1, dataset=datasetHC)
# Create an instance of the model and train it
model = VAE(tf.Session(), config, network=variational_autoencoder_Zimmerer.variational_autoencoder_Zimmerer)
# Train it
model.train(datasetHC)
# Evaluate
Evaluation.evaluate(datasetPC, model, options, description=f"{type(datasetHC).__name__}-{options['threshold']}", epoch=str(options['train']['numEpochs']))
<jupyter_output><empty_output><jupyter_text>**ceVAE-Zimmerer**<jupyter_code>tf.reset_default_graph()
dataset = Dataset.BRAINWEB  # keep the enum spelling consistent with the other cells
options = get_options(batchsize=64, learningrate=0.0001, numEpochs=1, zDim=128, outputWidth=128, outputHeight=128, config=get_CONFIG())
options['data']['dir'] = options["globals"][dataset.value]
datasetHC, datasetPC = get_datasets(options, dataset)
config = get_config(trainer=ceVAE, options=options, optimizer='ADAM', intermediateResolutions=[8, 8], dropout_rate=0.1, dataset=datasetHC)
# Create an instance of the model and train it
model = ceVAE(tf.Session(), config, network=context_encoder_variational_autoencoder_Zimmerer.context_encoder_variational_autoencoder_Zimmerer)
# Train it
model.train(datasetHC)
# Evaluate
Evaluation.evaluate(datasetPC, model, options, description=f"{type(datasetHC).__name__}-{options['threshold']}", epoch=str(options['train']['numEpochs']))
<jupyter_output><empty_output><jupyter_text>#### GMVAE-(Restoration)-Variations
Paper: [Unsupervised Lesion Detection via Image Restoration with a Normative Prior](https://openreview.net/forum?id=S1xg4W-leV)
**VAE-You**<jupyter_code>tf.reset_default_graph()
dataset = Dataset.BRAINWEB
options = get_options(batchsize=128, learningrate=0.0001, numEpochs=20, zDim=128, outputWidth=128, outputHeight=128, config=get_CONFIG())
options['data']['dir'] = options["globals"][dataset.value]
datasetHC, datasetPC = get_datasets(options, dataset)
config = get_config(trainer=VAE_You, options=options, optimizer='ADAM', intermediateResolutions=[8, 8], dropout_rate=0.1, dataset=datasetHC)
config.restore_lr = 1e-3
config.restore_steps = 10
config.tv_lambda = 0.0
# Create an instance of the model and train it
model = VAE_You(tf.Session(), config, network=variational_autoencoder.variational_autoencoder)
# Train it
model.train(datasetHC)
# Evaluate
Evaluation.evaluate(datasetPC, model, options, description=f"{type(datasetHC).__name__}-{options['threshold']}", epoch=str(options['train']['numEpochs']))
<jupyter_output><empty_output><jupyter_text>**GMVAE-You**<jupyter_code>tf.reset_default_graph()
dataset = Dataset.BRAINWEB
options = get_options(batchsize=128, learningrate=0.0001, numEpochs=20, zDim=128, outputWidth=128, outputHeight=128, config=get_CONFIG())
options['data']['dir'] = options["globals"][dataset.value]
datasetHC, datasetPC = get_datasets(options, dataset)
config = get_config(trainer=GMVAE_spatial, options=options, optimizer='ADAM', intermediateResolutions=[8, 8], dropout_rate=0.1, dataset=datasetHC)
config.dim_c = 9
config.dim_z = 1
config.dim_w = 1
config.c_lambda = 1
config.restore_lr = 1e-3
config.restore_steps = 10
config.tv_lambda = 1
# Create an instance of the model and train it
model = GMVAE_spatial(tf.Session(), config, network=gaussian_mixture_variational_autoencoder_You.gaussian_mixture_variational_autoencoder_You)
# Train it
model.train(datasetHC)
# Evaluate
Evaluation.evaluate(datasetPC, model, options, description=f"{type(datasetHC).__name__}-{options['threshold']}", epoch=str(options['train']['numEpochs']))
<jupyter_output><empty_output><jupyter_text>**GMVAE**<jupyter_code>tf.reset_default_graph()
dataset = Dataset.BRAINWEB
options = get_options(batchsize=128, learningrate=0.0001, numEpochs=20, zDim=128, outputWidth=128, outputHeight=128, config=get_CONFIG())
options['data']['dir'] = options["globals"][dataset.value]
datasetHC, datasetPC = get_datasets(options, dataset)
config = get_config(trainer=GMVAE, options=options, optimizer='ADAM', intermediateResolutions=[8, 8], dropout_rate=0.1, dataset=datasetHC)
config.dim_c = 9
config.dim_z = 128
config.dim_w = 1
config.c_lambda = 1
config.restore_lr = 1e-3
config.restore_steps = 10
config.tv_lambda = 0.0
# Create an instance of the model and train it
model = GMVAE(tf.Session(), config, network=gaussian_mixture_variational_autoencoder.gaussian_mixture_variational_autoencoder)
# Train it
model.train(datasetHC)
# Evaluate
Evaluation.evaluate(datasetPC, model, options, description=f"{type(datasetHC).__name__}-{options['threshold']}", epoch=str(options['train']['numEpochs']))<jupyter_output><empty_output><jupyter_text>**GMVAE-spatial**<jupyter_code>tf.reset_default_graph()
dataset = Dataset.BRAINWEB
options = get_options(batchsize=128, learningrate=0.0001, numEpochs=20, zDim=128, outputWidth=128, outputHeight=128, config=get_CONFIG())
options['data']['dir'] = options["globals"][dataset.value]
datasetHC, datasetPC = get_datasets(options, dataset)
config = get_config(trainer=GMVAE_spatial, options=options, optimizer='ADAM', intermediateResolutions=[8, 8], dropout_rate=0.1, dataset=datasetHC)
config.dim_c = 9
config.dim_z = 1
config.dim_w = 1
config.c_lambda = 1
config.restore_lr = 1e-3
config.restore_steps = 10
config.tv_lambda = 0.0
# Create an instance of the model and train it
model = GMVAE_spatial(tf.Session(), config, network=gaussian_mixture_variational_autoencoder_spatial.gaussian_mixture_variational_autoencoder_spatial)
# Train it
model.train(datasetHC)
# Evaluate
Evaluation.evaluate(datasetPC, model, options, description=f"{type(datasetHC).__name__}-{options['threshold']}", epoch=str(options['train']['numEpochs']))<jupyter_output><empty_output><jupyter_text>#### f-AnoGAN
Paper: [f-AnoGAN: Fast unsupervised anomaly detection with generative adversarial networks.](https://www.ncbi.nlm.nih.gov/pubmed/30831356)**Unified f-AnoGan**<jupyter_code>tf.reset_default_graph()
dataset = Dataset.BRAINWEB
options = get_options(batchsize=128, learningrate=0.0001, numEpochs=20, zDim=128, outputWidth=128, outputHeight=128, config=get_CONFIG())
options['data']['dir'] = options["globals"][dataset.value]
datasetHC, datasetPC = get_datasets(options, dataset)
config = get_config(trainer=fAnoGAN, options=options, optimizer='ADAM', intermediateResolutions=[8, 8], dropout_rate=0.1, dataset=datasetHC)
config.kappa = 1.0
config.scale = 10.0
# Create an instance of the model and train it
model = fAnoGAN(tf.Session(), config, network=fanogan.fanogan)
# Train it
model.train(datasetHC)
# Evaluate
Evaluation.evaluate(datasetPC, model, options, description=f"{type(datasetHC).__name__}-{options['threshold']}", epoch=str(options['train']['numEpochs']))<jupyter_output><empty_output><jupyter_text>**f-AnoGAN - Schlegl**<jupyter_code>tf.reset_default_graph()
dataset = Dataset.BRAINWEB
options = get_options(batchsize=8, learningrate=0.0001, numEpochs=2, zDim=128, outputWidth=128, outputHeight=128, config=get_CONFIG())
options['data']['dir'] = options["globals"][dataset.value]
datasetHC, datasetPC = get_datasets(options, dataset)
config = get_config(trainer=fAnoGAN, options=options, optimizer='ADAM', intermediateResolutions=[16, 16], dropout_rate=0.1, dataset=datasetHC)
config.kappa = 1.0
config.scale = 10.0
# Create an instance of the model and train it
model = fAnoGAN(tf.Session(), config, network=fanogan_schlegl.fanogan_schlegl)
# Train it
model.train(datasetHC)
# Evaluate
Evaluation.evaluate(datasetPC, model, options, description=f"{type(datasetHC).__name__}-{options['threshold']}", epoch=str(options['train']['numEpochs']))
<jupyter_output><empty_output><jupyter_text>#### Constrained Adversarial AE
Paper: [Unsupervised Detection of Lesions in Brain MRI using constrained adversarial auto-encoders](https://arxiv.org/abs/1806.04972)**constrained AAE**<jupyter_code>tf.reset_default_graph()
dataset = Dataset.BRAINWEB
options = get_options(batchsize=128, learningrate=0.0001, numEpochs=20, zDim=128, outputWidth=128, outputHeight=128, config=get_CONFIG())
options['data']['dir'] = options["globals"][dataset.value]
datasetHC, datasetPC = get_datasets(options, dataset)
config = get_config(trainer=ConstrainedAAE, options=options, optimizer='ADAM', intermediateResolutions=[8, 8], dropout_rate=0.1, dataset=datasetHC)
config.scale = 10.0
config.rho = 1.0
# Create an instance of the model and train it
model = ConstrainedAAE(tf.Session(), config, network=constrained_adversarial_autoencoder.constrained_adversarial_autoencoder)
# Train it
model.train(datasetHC)
# Evaluate
Evaluation.evaluate(datasetPC, model, options, description=f"{type(datasetHC).__name__}-{options['threshold']}", epoch=str(options['train']['numEpochs']))
<jupyter_output><empty_output><jupyter_text>**constrained AAE Chen**<jupyter_code>tf.reset_default_graph()
dataset = Dataset.BRAINWEB
options = get_options(batchsize=8, learningrate=0.0001, numEpochs=2, zDim=128, outputWidth=128, outputHeight=128, config=get_CONFIG())
options['data']['dir'] = options["globals"][dataset.value]
datasetHC, datasetPC = get_datasets(options, dataset)
config = get_config(trainer=ConstrainedAAE, options=options, optimizer='ADAM', intermediateResolutions=[8, 8], dropout_rate=0.1, dataset=datasetHC)
config.kappa = 1.0
config.scale = 10.0
config.rho = 1.0
# Create an instance of the model and train it
model = ConstrainedAAE(tf.Session(), config, network=constrained_adversarial_autoencoder_Chen.constrained_adversarial_autoencoder_Chen)
# Train it
model.train(datasetHC)
# Evaluate
Evaluation.evaluate(datasetPC, model, options, description=f"{type(datasetHC).__name__}-{options['threshold']}", epoch=str(options['train']['numEpochs']))<jupyter_output><empty_output><jupyter_text>#### AnoVAEGAN
Paper: [Deep autoencoding models for unsupervised anomaly segmentation in brain MR images](https://arxiv.org/abs/1804.04488)<jupyter_code>tf.reset_default_graph()
dataset = Dataset.BRAINWEB
options = get_options(batchsize=128, learningrate=0.0001, numEpochs=20, zDim=128, outputWidth=128, outputHeight=128, config=get_CONFIG())
options['data']['dir'] = options["globals"][dataset.value]
datasetHC, datasetPC = get_datasets(options, dataset)
config = get_config(trainer=AnoVAEGAN, options=options, optimizer='ADAM', intermediateResolutions=[8, 8], dropout_rate=0.1, dataset=datasetHC)
# Create an instance of the model and train it
model = AnoVAEGAN(tf.Session(), config, network=anovaegan.anovaegan)
# Train it
model.train(datasetHC)
# Evaluate
Evaluation.evaluate(datasetPC, model, options, description=f"{type(datasetHC).__name__}-{options['threshold']}", epoch=str(options['train']['numEpochs']))
<jupyter_output><empty_output>
<jupyter_start><jupyter_text>Installing (updating) the following libraries for your Sagemaker
instance.<jupyter_code>!pip install .. # installing d2l
<jupyter_output><empty_output><jupyter_text># Self-Attention and Positional Encoding
:label:`sec_self-attention-and-positional-encoding`
In deep learning, we often use CNNs or RNNs to encode a sequence. Now with attention mechanisms, imagine feeding a sequence of tokens into attention pooling so that the same set of tokens act as queries, keys, and values. Specifically, each query attends to all the key-value pairs and generates one attention output. Since the queries, keys, and values come from the same place, this performs
*self-attention* :cite:`Lin.Feng.Santos.ea.2017,Vaswani.Shazeer.Parmar.ea.2017`, which is also called *intra-attention* :cite:`Cheng.Dong.Lapata.2016,Parikh.Tackstrom.Das.ea.2016,Paulus.Xiong.Socher.2017`.
In this section, we will discuss sequence encoding using self-attention, including using additional information for the sequence order.
<jupyter_code>import math
import torch
from torch import nn
from d2l import torch as d2l<jupyter_output><empty_output><jupyter_text>## Self-Attention
Given a sequence of input tokens $\mathbf{x}_1, \ldots, \mathbf{x}_n$ where any $\mathbf{x}_i \in \mathbb{R}^d$ ($1 \leq i \leq n$), its self-attention outputs a sequence of the same length $\mathbf{y}_1, \ldots, \mathbf{y}_n$, where
$$\mathbf{y}_i = f(\mathbf{x}_i, (\mathbf{x}_1, \mathbf{x}_1), \ldots, (\mathbf{x}_n, \mathbf{x}_n)) \in \mathbb{R}^d$$
according to the definition of attention pooling $f$ in :eqref:`eq_attn-pooling`. Using multi-head attention, the following code snippet computes the self-attention of a tensor with shape (batch size, number of time steps or sequence length in tokens, $d$). The output tensor has the same shape.
<jupyter_code>num_hiddens, num_heads = 100, 5
attention = d2l.MultiHeadAttention(num_hiddens, num_hiddens, num_hiddens,
num_hiddens, num_heads, 0.5)
attention.eval()
batch_size, num_queries, valid_lens = 2, 4, torch.tensor([3, 2])
X = torch.ones((batch_size, num_queries, num_hiddens))
attention(X, X, X, valid_lens).shape<jupyter_output><empty_output><jupyter_text>## Comparing CNNs, RNNs, and Self-Attention
:label:`subsec_cnn-rnn-self-attention`
Let's compare architectures for mapping a sequence of $n$ tokens to another sequence of equal length, where each input or output token is represented by a $d$-dimensional vector. Specifically, we will consider CNNs, RNNs, and self-attention. We will compare their computational complexity, sequential operations, and maximum path lengths. Note that sequential operations prevent parallel computation, while a shorter path between any combination of sequence positions makes it easier to learn long-range dependencies within the sequence :cite:`Hochreiter.Bengio.Frasconi.ea.2001`.

:label:`fig_cnn-rnn-self-attention`
Consider a convolutional layer whose kernel size is $k$. We will provide more details about sequence processing with CNNs in later chapters. For now, we only need to know that since the sequence length is $n$ and the numbers of input and output channels are both $d$, the computational complexity of the convolutional layer is $\mathcal{O}(knd^2)$. As :numref:`fig_cnn-rnn-self-attention` shows, CNNs are hierarchical, so there are $\mathcal{O}(1)$ sequential operations and the maximum path length is $\mathcal{O}(n/k)$. For example, $\mathbf{x}_1$ and $\mathbf{x}_5$ are within the receptive field of a two-layer CNN with kernel size 3 in :numref:`fig_cnn-rnn-self-attention`.
When updating the hidden state of an RNN, multiplication of the $d \times d$ weight matrix and the $d$-dimensional hidden state has a computational complexity of $\mathcal{O}(d^2)$. Since the sequence length is $n$, the computational complexity of the recurrent layer is $\mathcal{O}(nd^2)$. According to :numref:`fig_cnn-rnn-self-attention`, there are $\mathcal{O}(n)$ sequential operations that cannot be parallelized and the maximum path length is also $\mathcal{O}(n)$.
In self-attention, the queries, keys, and values are all $n \times d$ matrices. Consider the scaled dot-product attention in :eqref:`eq_softmax_QK_V`, where an $n \times d$ matrix is multiplied by a $d \times n$ matrix, and then the output $n \times n$ matrix is multiplied by an $n \times d$ matrix. As a result, self-attention has a computational complexity of $\mathcal{O}(n^2d)$. As we can see in :numref:`fig_cnn-rnn-self-attention`, each token is directly connected to every other token via self-attention. Therefore, computation can be parallel with $\mathcal{O}(1)$ sequential operations, and the maximum path length is also $\mathcal{O}(1)$.
All in all, both CNNs and self-attention enjoy parallel computation, and self-attention has the shortest maximum path length. However, the quadratic computational complexity with respect to the sequence length makes self-attention prohibitively slow for very long sequences.
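As a quick back-of-the-envelope check of these orders (a sketch with arbitrary example values for $n$, $d$, and $k$; the numbers are illustrative, not from the original text):

```python
# Rough per-layer operation counts; constants are ignored, so only the
# relative growth with the sequence length n matters.
d, k = 512, 3   # assumed embedding size and CNN kernel size
for n in (64, 256, 1024, 4096):
    cnn = k * n * d**2          # O(k n d^2), O(1) sequential operations
    rnn = n * d**2              # O(n d^2), but O(n) sequential steps
    attn = n**2 * d             # O(n^2 d), fully parallelizable
    print(f"n={n:5d}  CNN~{cnn:.1e}  RNN~{rnn:.1e}  self-attention~{attn:.1e}")
```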
## Positional Encoding
:label:`subsec_positional-encoding`
Unlike RNNs, which recurrently process the tokens of a sequence one by one, self-attention ditches sequential operations in favor of parallel computation. To use the sequence order information, we can inject absolute or relative positional information by adding a *positional encoding* to the input representations. Positional encodings can be either learned or fixed. In the following, we describe a fixed positional encoding based on sine and cosine functions :cite:`Vaswani.Shazeer.Parmar.ea.2017`.
Suppose that the input representation $\mathbf{X} \in \mathbb{R}^{n \times d}$ contains the $d$-dimensional embeddings for $n$ tokens of a sequence. The positional encoding outputs $\mathbf{X} + \mathbf{P}$ using a positional embedding matrix $\mathbf{P} \in \mathbb{R}^{n \times d}$ of the same shape, whose element on the $i^\mathrm{th}$ row and the $(2j)^\mathrm{th}$ or the $(2j + 1)^\mathrm{th}$ column is
$$\begin{aligned} p_{i, 2j} &= \sin\left(\frac{i}{10000^{2j/d}}\right),\\p_{i, 2j+1} &= \cos\left(\frac{i}{10000^{2j/d}}\right).\end{aligned}$$
:eqlabel:`eq_positional-encoding-def`
At first glance, this trigonometric-function design looks odd. Before explaining it, let's first implement it in the `PositionalEncoding` class below.
<jupyter_code>#@save
class PositionalEncoding(nn.Module):
def __init__(self, num_hiddens, dropout, max_len=1000):
super(PositionalEncoding, self).__init__()
self.dropout = nn.Dropout(dropout)
# Create a long enough `P`
self.P = torch.zeros((1, max_len, num_hiddens))
X = torch.arange(max_len, dtype=torch.float32).reshape(
-1, 1) / torch.pow(
10000,
torch.arange(0, num_hiddens, 2, dtype=torch.float32) /
num_hiddens)
self.P[:, :, 0::2] = torch.sin(X)
self.P[:, :, 1::2] = torch.cos(X)
def forward(self, X):
X = X + self.P[:, :X.shape[1], :].to(X.device)
        return self.dropout(X)<jupyter_output><empty_output><jupyter_text>In the positional embedding matrix $\mathbf{P}$, rows correspond to positions within a sequence and columns represent different positional encoding dimensions. In the example below, we can see that the $6^{\mathrm{th}}$ and the $7^{\mathrm{th}}$ columns of the positional embedding matrix have a higher frequency than the $8^{\mathrm{th}}$ and the $9^{\mathrm{th}}$ columns. The offset between the $6^{\mathrm{th}}$ and the $7^{\mathrm{th}}$ columns (same for the $8^{\mathrm{th}}$ and the $9^{\mathrm{th}}$ columns) is due to the alternation of sine and cosine functions.
<jupyter_code>encoding_dim, num_steps = 32, 60
pos_encoding = PositionalEncoding(encoding_dim, 0)
pos_encoding.eval()
X = pos_encoding(torch.zeros((1, num_steps, encoding_dim)))
P = pos_encoding.P[:, :X.shape[1], :]
d2l.plot(torch.arange(num_steps), P[0, :, 6:10].T, xlabel='Row (position)',
         figsize=(6, 2.5), legend=["Col %d" % d for d in torch.arange(6, 10)])<jupyter_output><empty_output><jupyter_text>### Absolute Positional Information
To see how the monotonically decreasing frequency along the encoding dimension relates to absolute positional information, let's print out the binary representations of $0, 1, \ldots, 7$. As we can see, the lowest bit, the second-lowest bit, and the third-lowest bit alternate on every number, every two numbers, and every four numbers, respectively.
<jupyter_code>for i in range(8):
print(f'{i} in binary is {i:>03b}')<jupyter_output>0 in binary is 000
1 in binary is 001
2 in binary is 010
3 in binary is 011
4 in binary is 100
5 in binary is 101
6 in binary is 110
7 in binary is 111
<jupyter_text>In binary representations, a higher bit has a lower frequency than a lower bit. Similarly, as demonstrated in the heatmap below, the positional encoding decreases frequencies along the encoding dimension by using trigonometric functions. Since the outputs are float numbers, such continuous representations are more space-efficient than binary representations.
<jupyter_code>P = P[0, :, :].unsqueeze(0).unsqueeze(0)
d2l.show_heatmaps(P, xlabel='Column (encoding dimension)',
ylabel='Row (position)', figsize=(3.5, 4), cmap='Blues')<jupyter_output><empty_output>
<jupyter_start><jupyter_text>
# The BERT Model
---
## The KorQuAD dataset
Using KorQuAD (The Korean Question Answering Dataset),
we work on the machine reading comprehension (MRC) task in natural language processing (NLP).
The KorQuAD dataset is benchmarked against SQuAD, the large-scale dataset built at Stanford University.
SQuAD is a dataset for testing the MRC task
and is the most standard benchmark for measuring language model performance.
NLP covers many tasks such as sentiment analysis and translation,
but the MRC task, which measures whether a model can accurately understand natural language and answer a person's question, is especially important.
References:
[SQuAD official page](https://rajpurkar.github.io/SQuAD-explorer/)
[KorQuAD official page](https://korquad.github.io/)
[How are MRC models developed and evaluated?](https://blog.naver.com/skelterlabs/222025030327)
### Main differences between KorQuAD 1.0 and KorQuAD 2.0
1. Passage length
KorQuAD 1.0: passages of one or two paragraphs
KorQuAD 2.0: passages spanning an entire Wikipedia page
2. Document structure
KorQuAD 1.0: plain text documents made of simple strings (?)
KorQuAD 2.0: passages include lists and tables, so the MRC model must understand documents structured with HTML tags.
3. Answer length and structure
KorQuAD 1.0: short spans at the level of a word or phrase (?)
KorQuAD 2.0: includes long regions beyond word or phrase units, covering whole paragraphs, tables, and lists.
Downloading the KorQuAD 1.0 data
```
$ wget https://korquad.github.io/dataset/KorQuAD_v1.0_train.json
$ wget https://korquad.github.io/dataset/KorQuAD_v1.0_dev.json
```
Downloading the model, vocab, and text corpus data
```
$ wget https://aiffelstaticprd.blob.core.windows.net/media/documents/ko_32000.model
$ wget https://aiffelstaticprd.blob.core.windows.net/media/documents/ko_32000.vocab
$ wget https://aiffelstaticprd.blob.core.windows.net/media/documents/bert_pretrain_32000.hdf5
$ wget https://aiffelstaticprd.blob.core.windows.net/media/documents/kowiki.txt.zip
```
<jupyter_code># imports
from __future__ import absolute_import, division, print_function, unicode_literals
import tensorflow as tf
import tensorflow.keras.backend as K
import tensorflow_addons as tfa
import os
import re
import numpy as np
import pandas as pd
import pickle
import random
import collections
import json
from datetime import datetime
import sentencepiece as spm
from tqdm.notebook import tqdm
import matplotlib.pyplot as plt
import seaborn as sns
from wordcloud import WordCloud
# 랜덤시드 고정
random_seed = 1234
random.seed(random_seed)
np.random.seed(random_seed)
tf.random.set_seed(random_seed)<jupyter_output><empty_output><jupyter_text>
## Checking the data
### JSON-format data
1. Inspect it with the dumps() method of Python's json module
2. Write a function that reads the JSON file and shows only the length of each list and its first item
<jupyter_code># 파이썬 json 모듈의 dumps 메소드 사용
# 내용이 길기 때문에 주석처리
# print(json.dumps(train_json["data"][0], indent=2, ensure_ascii=False))
# json 포맷의 파일을 간략히 확인하기 위한 함수 작성
def print_json_tree(data, indent=""):
for key, value in data.items():
# list 형태의 item 일 경우
if type(value) == list:
print(f'{indent}- {key}: [{len(value)}]') # 리스트의 길이 출력
print_json_tree(value[0], indent + " ") # 리스트의 첫 번째 데이터 출력
# list 형태의 item 아닐 경우
else:
print(f'{indent}- {key}: {value}') # 데이터 출력
# 데이터 불러오기
data_dir = os.getenv('HOME')+'/aiffel/bert_qna/data'
model_dir = os.getenv('HOME')+'/aiffel/bert_qna/models'
# Train 데이터 확인
train_json_path = data_dir + '/KorQuAD_v1.0_train.json'
with open(train_json_path) as f:
train_json = json.load(f)
print_json_tree(train_json)
# Test 데이터 확인
dev_json_path = data_dir + '/KorQuAD_v1.0_dev.json'
with open(dev_json_path) as f:
dev_json = json.load(f)
print_json_tree(dev_json)<jupyter_output>- version: KorQuAD_v1.0_dev
- data: [140]
- paragraphs: [2]
- qas: [7]
- answers: [1]
- text: 1989년 2월 15일
- answer_start: 0
- id: 6548850-0-0
- question: 임종석이 여의도 농민 폭력 시위를 주도한 혐의로 지명수배 된 날은?
- context: 1989년 2월 15일 여의도 농민 폭력 시위를 주도한 혐의(폭력행위등처벌에관한법률위반)으로 지명수배되었다. 1989년 3월 12일 서울지방검찰청 공안부는 임종석의 사전구속영장을 발부받았다. 같은 해 6월 30일 평양축전에 임수경을 대표로 파견하여 국가보안법위반 혐의가 추가되었다. 경찰은 12월 18일~20일 사이 서울 경희대학교에서 임종석이 성명 발표를 추진하고 있다는 첩보를 입수했고, 12월 18일 오전 7시 40분 경 가스총과 전자봉으로 무장한 특공조 및 대공과 직원 12명 등 22명의 사복 경찰을 승용차 8대에 나누어 경희대학교에 투입했다. 1989년 12월 18일 오전 8시 15분 경 서울청량리경찰서는 호위 학생 5명과 함께 경희대학교 학생회관 건물 계단을 내려오는 임종석을 발견, 검거해 구속을 집행했다. 임종석은 청량리경찰서에서 약 1시간 동안 조사를 받은 뒤 오전 9시 50분 경 서울 장안동의 서울지방경찰청 공안분실로 인계되었다.
- title: 임종석
<jupyter_text>
## Data preprocessing 1: tokenize by whitespace (word) and keep positional information
Data preprocessing for the machine reading comprehension (MRC) task differs from other NLP tasks.
- Preprocessing for sentiment analysis or translation tasks
We mainly split the text into words by whitespace, remove stop characters (,.!?) that do not belong to any word,
and convert the result into vectors in an embedding space.
- Preprocessing for the MRC task
We split the text into words by whitespace and encode positional (order) information for each segment.
This lets the model learn where each word sits within the sentence.
### Splitting sentences into words and storing a token (encoding) value per character
After organizing tokens by whitespace, we write a function that assigns a unique number (word index) to each word-token region.
Since tokens will later be split further into characters or subwords __according to morphemes__,
it helps to keep extra per-word information about how the original data was segmented by whitespace.
We create and maintain a char_to_word list that marks the word_token region for each character.
This value tells us which word the current character belonged to.
<jupyter_code># whitespace
def _is_whitespace(c):
if c == " " or c == "\t" or c == "\r" or c == "\n" or ord(c) == 0x202F:
return True
return False
# whitespace가 2개인 경우도 아래와 같이 동일하게 출력되도록 처리해야 합니다.
string1 = '1839년 파우스트을 읽었다.'
string2 = '1839년 파우스트을 읽었다.'
string1[6:10], string2[7:11]
# 글자별로 띄어쓰기 영역 정보 관리시 문제점
word_tokens = []
char_to_word = []
prev_is_whitespace = True
# 첫번째 문장(string1)에 대해 띄어쓰기 영역 정보를 표시
for c in string1:
if _is_whitespace(c):
prev_is_whitespace = True
else:
if prev_is_whitespace:
word_tokens.append(c)
else:
word_tokens[-1] += c
prev_is_whitespace = False
char_to_word.append(len(word_tokens) - 1)
print(f'\'{c}\' : {word_tokens} : {char_to_word}')
# 글자별로 띄어쓰기 영역 정보 관리시 문제점
word_tokens = []
char_to_word = []
prev_is_whitespace = True
# 두번째 문장(string2)에 대해 띄어쓰기 영역 정보를 표시
for c in string2:
if _is_whitespace(c):
prev_is_whitespace = True
else:
if prev_is_whitespace:
word_tokens.append(c)
else:
word_tokens[-1] += c
prev_is_whitespace = False
char_to_word.append(len(word_tokens) - 1)
print(f'\'{c}\' : {word_tokens} : {char_to_word}')<jupyter_output>'1' : ['1'] : [0]
'8' : ['18'] : [0, 0]
'3' : ['183'] : [0, 0, 0]
'9' : ['1839'] : [0, 0, 0, 0]
'년' : ['1839년'] : [0, 0, 0, 0, 0]
' ' : ['1839년'] : [0, 0, 0, 0, 0, 0]
' ' : ['1839년'] : [0, 0, 0, 0, 0, 0, 0]
'파' : ['1839년', '파'] : [0, 0, 0, 0, 0, 0, 0, 1]
'우' : ['1839년', '파우'] : [0, 0, 0, 0, 0, 0, 0, 1, 1]
'스' : ['1839년', '파우스'] : [0, 0, 0, 0, 0, 0, 0, 1, 1, 1]
'트' : ['1839년', '파우스트'] : [0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1]
'을' : ['1839년', '파우스트을'] : [0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1]
' ' : ['1839년', '파우스트을'] : [0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1]
'읽' : ['1839년', '파우스트을', '읽'] : [0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 2]
'었' : ['1839년', '파우스트을', '읽었'] : [0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 2, 2]
'다' : ['1839년', '파우스트을', '읽었다'] : [0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 2, 2, 2]
'.' : ['1839년', '파우스트을', '읽었다.'] : [0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 2, 2, 2, 2]
<jupyter_text>
Depending on the amount of whitespace, the region markings for the two sentences come out slightly differently.
To fix this, we use the ```_is_whitespace()``` function defined above
to write an encoding function that splits the vocabulary by whitespace.
<jupyter_code># 띄어쓰기 단위로 어휘의 위치정보를 나누는 인코딩 함수 작성
# _is_whitespace() 적용
def _tokenize_whitespace(string):
word_tokens = []
char_to_word = []
prev_is_whitespace = True
for c in string:
if _is_whitespace(c):
prev_is_whitespace = True
else:
if prev_is_whitespace:
word_tokens.append(c)
else:
word_tokens[-1] += c
prev_is_whitespace = False
char_to_word.append(len(word_tokens) - 1)
return word_tokens, char_to_word
# 첫번째 문장(string1)에 대해 띄어쓰기 영역 정보를 표시
word_tokens, char_to_word = _tokenize_whitespace(string1)
for c, i in zip(list(string1), char_to_word):
print(f'\'{c}\' : {i}')
word_tokens, char_to_word
# 두번째 문장(string2)에 대해 띄어쓰기 영역 정보를 표시
word_tokens, char_to_word = _tokenize_whitespace(string2)
for c, i in zip(list(string2), char_to_word):
print(f'\'{c}\' : {i}')
word_tokens, char_to_word<jupyter_output>'1' : 0
'8' : 0
'3' : 0
'9' : 0
'년' : 0
' ' : 0
' ' : 0
'파' : 1
'우' : 1
'스' : 1
'트' : 1
'을' : 1
' ' : 1
'읽' : 2
'었' : 2
'다' : 2
'.' : 2
<jupyter_text>
## Data preprocessing 2: tokenize with a vocabulary (Vocab)
### Subword segmentation
Each word is tokenized again into '[__morpheme__](https://terms.naver.com/entry.naver?docId=3559671&cid=58583&categoryId=58729)' units.
Many surface forms such as '읽다', '읽었다', '읽으려고', and '읽고'
are stored split into the content morpheme '읽' and functional morphemes such as '다'.
### SentencePiece model Vocab
Although WordPiece is the usual choice for BERT,
here we use the SentencePiece model for subword (morpheme) based text preprocessing.
SentencePiece is an open-source subword segmentation model from Google.
It is easy to use from Python and, because it also integrates other models such as WordPiece, it has become widely used.
Models like SentencePiece
have the advantage of being applicable to any language without relying on language-specific grammar rules.
- They learn appropriate subword (morpheme) segmentation rules, or
- use statistical methods such as grouping frequently used phrases into a single unit.
<jupyter_code># sentencepiece 모듈을 이용한 형태소 단위 분할 확인
# vocab 모델 loading
vocab = spm.SentencePieceProcessor()
vocab.load(f"{model_dir}/ko_32000.model")
# word(어절)를 subword(형태소)로 변경하면서 index 저장
word_to_token = []
context_tokens = []
for (i, word) in enumerate(word_tokens):
word_to_token.append(len(context_tokens)) # 형태소로 쪼개지기 전의 어절 단위의 시작지점 토큰 정보 저장
tokens = vocab.encode_as_pieces(word) # SentencePiece 를 사용해 Subword(형태소) 로 분할
for token in tokens:
context_tokens.append(token)
context_tokens, word_to_token<jupyter_output><empty_output><jupyter_text>
### Compactly storing the whitespace (word) information
In preprocessing step 1, 'tokenize by word information',
we encoded the tokenized information per character.
Using this, in preprocessing step 2, 'tokenize by morpheme information',
we also store the whitespace position information with the morpheme-tokenized data so that the word-level segmentation is not lost.
The \[0, 2, 5\] in word_to_token means
that tokens 0, 2, and 5 in context_tokens, namely '▁1839', '▁', and '▁읽',
are the first tokens of their words, i.e. it records the whitespace information.
<jupyter_code># sentencepiece 모듈을 활용한 형태소 분할 함수 작성
def _tokenize_vocab(vocab, context_words):
word_to_token = []
context_tokens = []
for (i, word) in enumerate(context_words):
word_to_token.append(len(context_tokens))
tokens = vocab.encode_as_pieces(word)
for token in tokens:
context_tokens.append(token)
return context_tokens, word_to_token
print(word_tokens) # 처리해야 할 word 단위 입력
context_tokens, word_to_token = _tokenize_vocab(vocab, word_tokens)
context_tokens, word_to_token # Subword 단위로 토큰화한 결과<jupyter_output>['1839년', '파우스트을', '읽었다.']
<jupyter_text>
## Data preprocessing 3: Improve Span
The KorQuAD dataset is organized so that
- a question and a passage (context) are given, and
- the answer must be found within the passage.
Therefore, locating exactly the passage region that corresponds to the answer is the core preprocessing step.
Let's extract context, question, and answer from the KorQuAD dataset.
<jupyter_code># KorQuAD 데이터셋에서 context, question, answer 정보 추출
context = train_json['data'][0]['paragraphs'][0]['context']
question = train_json['data'][0]['paragraphs'][0]['qas'][0]['question']
answer_text = train_json['data'][0]['paragraphs'][0]['qas'][0]['answers'][0]['text']
answer_start = train_json['data'][0]['paragraphs'][0]['qas'][0]['answers'][0]['answer_start']
answer_end = answer_start + len(answer_text) - 1
print('[context] ', context)
print('[question] ', question)
print('[answer] ', answer_text, '\n')
# context 에 포함된 answer 의 문자단위 시작 인덱스와 종료 인덱스 저장 !
print('[answer_start] index: ', answer_start, 'character: ', context[answer_start])
print('[answer_end]index: ', answer_end, 'character: ', context[answer_end])
# answer_text에 해당하는 context 영역을 정확히 찾아내야 합니다.
assert context[answer_start:answer_end + 1] == answer_text<jupyter_output>[context] 1839년 바그너는 괴테의 파우스트을 처음 읽고 그 내용에 마음이 끌려 이를 소재로 해서 하나의 교향곡을 쓰려는 뜻을 갖는다. 이 시기 바그너는 1838년에 빛 독촉으로 산전수전을 다 걲은 상황이라 좌절과 실망에 가득했으며 메피스토펠레스를 만나는 파우스트의 심경에 공감했다고 한다. 또한 파리에서 아브네크의 지휘로 파리 음악원 관현악단이 연주하는 베토벤의 교향곡 9번을 듣고 깊은 감명을 받았는데, 이것이 이듬해 1월에 파우스트의 서곡으로 쓰여진 이 작품에 조금이라도 영향을 끼쳤으리라는 것은 의심할 여지가 없다. 여기의 라단조 조성의 경우에도 그의 전기에 적혀 있는 것처럼 단순한 정신적 피로나 실의가 반영된 것이 아니라 베토벤의 합창교향곡 조성의 영향을 받은 것을 볼 수 있다. 그렇게 교향곡 작곡을 1839년부터 40년에 걸쳐 파리에서 착수했으나 1악장을 쓴 뒤에 중단했다. 또한 작품의 완성과 동시에 그는 이 서곡(1악장)을 파리 음악원의 연주회에서 연주할 파트보까지 준비하였으나, 실제로는 이루어지지는 않았다. 결국 초연은 4년 반이 지난 후에 드레스덴에서 연주되었고 재연도 이루어졌지만, 이후에 그대로 방치되고 말았다. 그 사이에 그는 리엔치와 방황하는 네덜란드인을 완성하고 탄호이저에도 착수하는 등 분주한 시간을 보냈는데, 그런 바쁜 생활이 이 곡을 잊게 한 것이 아닌가 하는 의견도 있다.
[question] 바그너는 괴테의 파우스트를 읽고 무엇을 쓰고자 했는가?
[answer] 교향곡
[answer_start] index: 54 character: 교
[answer_end]index: 56 character: 곡
<jupyter_text>
### Checking preprocessing steps 1 and 2 on the context
<jupyter_code># context 를 띄어쓰기 (어절,word) 단위로 토큰화한 결과 확인
# 위에서 정의한 _tokenize_whitespace() 함수 사용
word_tokens, char_to_word = _tokenize_whitespace(context)
print( word_tokens[:20])
char_to_word[:20], context[:20]
# 띄어쓰기 (어절,word) 단위로 쪼개진 context(word_tokens) 를 형태소(Subword)로 토큰화한 결과 확인
# 위에서 정의한 _tokenize_vocab() 함수 사용
context_tokens, word_to_token = _tokenize_vocab(vocab, word_tokens)
for i in range(min(20, len(word_to_token) - 1)):
print(word_to_token[i], context_tokens[word_to_token[i]:word_to_token[i + 1]])<jupyter_output>0 ['▁1839', '년']
2 ['▁바그너', '는']
4 ['▁괴테', '의']
6 ['▁', '파우스트', '을']
9 ['▁처음']
10 ['▁읽고']
11 ['▁그']
12 ['▁내용에']
13 ['▁마음이']
14 ['▁끌려']
15 ['▁이를']
16 ['▁소재로']
17 ['▁해서']
18 ['▁하나의']
19 ['▁교향곡', '을']
21 ['▁쓰', '려는']
23 ['▁뜻을']
24 ['▁갖는다', '.']
26 ['▁이']
27 ['▁시기']
<jupyter_text>
### answer 데이터 전처리 1, 2 단계 처리 확인
위에서 우리는 context에 포함된 answer의 글자단위 시작 인덱스(answer_start)와 종료 인덱스(answer_end)를 구했습니다.
이 위치를 어절(word) 단위로 변환하면 어떻게 될까요?
<jupyter_code># answer_start 와 answer_end 로부터 word_start 와 word_end 를 생성
word_start = char_to_word[answer_start]
word_end = char_to_word[answer_end]
word_start, word_end, answer_text, word_tokens[word_start:word_end + 1]<jupyter_output><empty_output><jupyter_text>
우리가 찾는 정답은 15번째 어절(index=14)에 있었군요. 하지만 우리가 원하는 정답은 '교향곡'이지, '교향곡을'은 아닙니다.
그래서 이번에는 word_start로부터 word_end까지의 context를 Subword 단위로 토큰화한 결과를 살펴봅시다.
<jupyter_code>#
token_start = word_to_token[word_start]
if word_end < len(word_to_token) - 1:
token_end = word_to_token[word_end + 1] - 1
else:
token_end = len(context_tokens) - 1
token_start, token_end, context_tokens[token_start:token_end + 1]<jupyter_output><empty_output><jupyter_text>
거의 정답에 근접했습니다.
Subword 단위로 토큰화한 결과 중에는 우리가 찾는 정답과 정확히 일치하는 답이 있는것 같습니다.
<jupyter_code># 실제 정답인 answer_text 도 Subword 기준으로 토큰화
token_answer = " ".join(vocab.encode_as_pieces(answer_text))
token_answer<jupyter_output><empty_output><jupyter_text>
이제 눈으로 봐도 어디가 정확히 정답인지 알 수 있게 되었지만,
좀더 일반적인 방법으로 정답 토큰 범위를 찾는 코드를 작성해 보겠습니다.
KorQuAD 문제의 정답은 이번처럼 단답형만 있는 것은 아니기 때문입니다.
<jupyter_code># 정답이 될수 있는 new_start 와 new_end 의 경우를 순회탐색하여 확인
for new_start in range(token_start, token_end + 1):
for new_end in range(token_end, new_start - 1, -1):
text_span = " ".join(context_tokens[new_start : (new_end + 1)])
if text_span == token_answer: # 정답과 일치하는 경우
print("O >>", (new_start, new_end), text_span)
else:
print("X >>", (new_start, new_end), text_span)<jupyter_output>X >> (19, 20) ▁교향곡 을
O >> (19, 19) ▁교향곡
X >> (20, 20) 을
<jupyter_text>
context에서 answer의 위치를 토큰화된 상태에서 찾는 함수를 아래와 같이 작성
<jupyter_code># context_tokens에서 char_answer의 위치를 찾아 리턴하는 함수 작성
def _improve_span(vocab, context_tokens, token_start, token_end, char_answer):
token_answer = " ".join(vocab.encode_as_pieces(char_answer))
for new_start in range(token_start, token_end + 1):
for new_end in range(token_end, new_start - 1, -1):
text_span = " ".join(context_tokens[new_start : (new_end + 1)])
if text_span == token_answer:
return (new_start, new_end)
return (token_start, token_end)
# 확인
token_start, token_end = _improve_span(vocab, context_tokens, token_start, token_end, answer_text)
print('token_start:', token_start, ' token_end:', token_end)
context_tokens[token_start:token_end + 1]<jupyter_output>token_start: 19 token_end: 19
<jupyter_text>
## 데이터 전처리 4 : 데이터셋 분리
train 데이터셋, dev 데이터셋을 분리하여,
위에서 작성한 ```_improve_span()``` 함수를 이용해 전처리 후 파일로 저장합니다.
<jupyter_code># 데이터셋 분리하는 함수 작성
def dump_korquad(vocab, json_data, out_file):
with open(out_file, "w") as f:
for data in tqdm(json_data["data"]):
title = data["title"]
for paragraph in data["paragraphs"]:
context = paragraph["context"]
context_words, char_to_word = _tokenize_whitespace(context)
for qa in paragraph["qas"]:
assert len(qa["answers"]) == 1
qa_id = qa["id"]
question = qa["question"]
answer_text = qa["answers"][0]["text"]
answer_start = qa["answers"][0]["answer_start"]
answer_end = answer_start + len(answer_text) - 1
assert answer_text == context[answer_start:answer_end + 1]
word_start = char_to_word[answer_start]
word_end = char_to_word[answer_end]
word_answer = " ".join(context_words[word_start:word_end + 1])
char_answer = " ".join(answer_text.strip().split())
assert char_answer in word_answer
context_tokens, word_to_token = _tokenize_vocab(vocab, context_words)
token_start = word_to_token[word_start]
if word_end < len(word_to_token) - 1:
token_end = word_to_token[word_end + 1] - 1
else:
token_end = len(context_tokens) - 1
token_start, token_end = _improve_span(vocab, context_tokens, token_start, token_end, char_answer)
data = {"qa_id": qa_id, "title": title, "question": vocab.encode_as_pieces(question), "context": context_tokens, "answer": char_answer, "token_start": token_start, "token_end":token_end}
f.write(json.dumps(data, ensure_ascii=False))
f.write("\n")
# 전처리를 수행하여 파일로 생성
dump_korquad(vocab, train_json, f"{data_dir}/korquad_train.json")
dump_korquad(vocab, dev_json, f"{data_dir}/korquad_dev.json")
# 전처리가 잘 되었는지 실제로 파일 내용을 확인
def print_file(filename, count=10):
"""
파일 내용 출력
:param filename: 파일 이름
:param count: 출력 라인 수
"""
with open(filename) as f:
for i, line in enumerate(f):
if count <= i:
break
print(line.strip())
print_file(f"{data_dir}/korquad_train.json")<jupyter_output>{"qa_id": "6566495-0-0", "title": "파우스트_서곡", "question": ["▁바그너", "는", "▁괴테", "의", "▁", "파우스트", "를", "▁읽고", "▁무엇을", "▁쓰고", "자", "▁", "했", "는", "가", "?"], "context": ["▁1839", "년", "▁바그너", "는", "▁괴테", "의", "▁", "파우스트", "을", "▁처음", "▁읽고", "▁그", "▁내용에", "▁마음이", "▁끌려", "▁이를", "▁소재로", "▁해서", "▁하나의", "▁교향곡", "을", "▁쓰", "려는", "▁뜻을", "▁갖는다", ".", "▁이", "▁시기", "▁바그너", "는", "▁1838", "년에", "▁빛", "▁독", "촉", "으로", "▁산", "전", "수", "전을", "▁다", "▁", "걲", "은", "▁상황이", "라", "▁좌절", "과", "▁실망", "에", "▁가득", "했으며", "▁메", "피스", "토", "펠", "레스", "를", "▁만나는", "▁", "파우스트", "의", "▁심", "경에", "▁공감", "했다고", "▁한다", ".", "▁또한", "▁파리에서", "▁아브", "네", "크의", "▁지휘", "로", "▁파리", "▁음악원", "▁관현악단", "이", "▁연주하는", "▁베토벤", "의", "▁교향곡", "▁9", "번을", "▁듣고", "▁깊은", "▁감", "명을", "▁받았는데", ",", "▁이것이", "▁이듬해", "▁1", "월에", "▁", "파우스트", "의", "▁서", "곡으로", "▁쓰여진", "▁이", "▁작품에", "▁조금", "이라도", "▁영향을", "▁끼", "쳤", "으리라", "는", "▁것은", "▁의심", "할", "▁여지가", "▁없다", ".", "▁여기", "의", "▁라", "단", "조", "▁조성", "의", "▁경우에도", "▁그의", "▁전기", "에", "▁적혀", "▁있는", [...]<jupyter_text>
## 데이터 전처리 5 : 데이터 분석
원본 데이터셋을 전처리하여 우리의 모델이 다루게 될 데이터셋으로 가공하는 과정을 진행하였습니다.
그러나 이 데이터셋을 그대로 사용할 수 있을지, 혹은 이상(abnormal) 데이터가 존재하지는 않는지 분석하는 과정이 필요합니다.
<jupyter_code># 데이터 분석을 위해 전처리된 데이터를 question, context, answer 별로 불러오기
questions = []
contexts = []
token_starts = []
with open(f"{data_dir}/korquad_train.json") as f:
for i, line in enumerate(f):
data = json.loads(line)
questions.append(data["question"])
contexts.append(data["context"])
token_starts.append(data["token_start"])
if i < 10:
print(data["token_start"], data["question"])<jupyter_output>19 ['▁바그너', '는', '▁괴테', '의', '▁', '파우스트', '를', '▁읽고', '▁무엇을', '▁쓰고', '자', '▁', '했', '는', '가', '?']
168 ['▁바그너', '는', '▁교향곡', '▁작곡', '을', '▁어디', '까지', '▁쓴', '▁뒤에', '▁중단', '했', '는', '가', '?']
80 ['▁바그너', '가', '▁', '파우스트', '▁서', '곡을', '▁쓸', '▁때', '▁어떤', '▁곡', '의', '▁영향을', '▁받았', '는', '가', '?']
6 ['▁1839', '년', '▁바그너', '가', '▁교향곡', '의', '▁소재로', '▁쓰', '려고', '▁했던', '▁책은', '?']
143 ['▁', '파우스트', '▁서', '곡', '의', '▁라', '단', '조', '▁조성', '이', '▁영향을', '▁받은', '▁베토벤', '의', '▁곡은', '?']
0 ['▁바그너', '가', '▁', '파우스트', '를', '▁처음으로', '▁읽', '은', '▁', '년', '도', '는', '?']
165 ['▁바그너', '가', '▁처음', '▁교향곡', '▁작곡', '을', '▁한', '▁장소', '는', '?']
216 ['▁바그너', '의', '▁1', '악장', '의', '▁초연', '은', '▁어디서', '▁연주', '되었', '는', '가', '?']
164 ['▁바그너', '의', '▁작품을', '▁시인', '의', '▁피로', '▁쓰여', '졌다', '고', '▁극찬', '한', '▁것은', '▁누구', '인', '가', '?']
7 ['▁잊', '혀', '져', '▁있는', '▁', '파우스트', '▁서', '곡', '▁1', '악장', '을', '▁부활', '시킨', '▁것은', '▁누구', '인', '가', '?']
<jupyter_text>
### 전체 데이터에서 question 항목의 길이 분포 확인
<jupyter_code># token count
train_question_counts = [len(question) for question in questions]
train_question_counts[:10]
# 그래프에 대한 이미지 사이즈 선언
# figsize: (가로, 세로) 형태의 튜플로 입력
plt.figure(figsize=(8, 4))
# histogram 선언
# bins: 히스토그램 값들에 대한 버켓 범위,
# range: x축 값의 범위
# facecolor: 그래프 색상
# label: 그래프에 대한 라벨
plt.hist(train_question_counts, bins=100, range=[0, 100], facecolor='b', label='train')
# 그래프 제목
plt.title('Count of question')
# 그래프 x 축 라벨
plt.xlabel('Number of question')
# 그래프 y 축 라벨
plt.ylabel('Count of question')
plt.show()
# 데이터 길이
print(f"question 길이 최대: {np.max(train_question_counts):4d}")
print(f"question 길이 최소: {np.min(train_question_counts):4d}")
print(f"question 길이 평균: {np.mean(train_question_counts):7.2f}")
print(f"question 길이 표준편차: {np.std(train_question_counts):7.2f}")
# https://ko.wikipedia.org/wiki/%EB%B0%B1%EB%B6%84%EC%9C%84%EC%88%98
# 백분위수(Percentile)는 크기가 있는 값들로 이뤄진 자료를 순서대로 나열했을 때 백분율로 나타낸 특정 위치의 값을 이르는 용어이다.
# 일반적으로 크기가 작은 것부터 나열하여 가장 작은 것을 0, 가장 큰 것을 100으로 한다.
# 100개의 값을 가진 어떤 자료의 20 백분위수는 그 자료의 값들 중 20번째로 작은 값을 뜻한다. 50 백분위수는 중앙값과 같다.
percentile25 = np.percentile(train_question_counts, 25)
percentile50 = np.percentile(train_question_counts, 50)
percentile75 = np.percentile(train_question_counts, 75)
percentileIQR = percentile75 - percentile25
percentileMAX = percentile75 + percentileIQR * 1.5
print(f"question 25/100분위: {percentile25:7.2f}")
print(f"question 50/100분위: {percentile50:7.2f}")
print(f"question 75/100분위: {percentile75:7.2f}")
print(f"question IQR: {percentileIQR:7.2f}")
print(f"question MAX/100분위: {percentileMAX:7.2f}")
plt.figure(figsize=(4, 6))
# 박스플롯 생성
# 첫번째 파라메터: 여러 분포에 대한 데이터 리스트를
# labels: 입력한 데이터에 대한 라벨
# showmeans: 평균값을 표현
# 참고: https://leebaro.tistory.com/entry/%EB%B0%95%EC%8A%A4-%ED%94%8C%EB%A1%AFbox-plot-%EC%84%A4%EB%AA%85
plt.boxplot(train_question_counts, labels=['token counts'], showmeans=True)
plt.show()<jupyter_output><empty_output><jupyter_text>
### Checking the length distribution of the context field across the whole dataset
<jupyter_code># token count
train_context_counts = [len(context) for context in contexts]
train_context_counts[:10]
# 그래프에 대한 이미지 사이즈 선언
# figsize: (가로, 세로) 형태의 튜플로 입력
plt.figure(figsize=(8, 4))
# histogram 선언
# bins: 히스토그램 값들에 대한 버켓 범위,
# range: x축 값의 범위
# facecolor: 그래프 색상
# label: 그래프에 대한 라벨
plt.hist(train_context_counts, bins=900, range=[100, 1000], facecolor='r', label='train')
# 그래프 제목
plt.title('Count of context')
# 그래프 x 축 라벨
plt.xlabel('Number of context')
# 그래프 y 축 라벨
plt.ylabel('Count of context')
plt.show()
# 데이터 길이
print(f"context 길이 최대: {np.max(train_context_counts):4d}")
print(f"context 길이 최소: {np.min(train_context_counts):4d}")
print(f"context 길이 평균: {np.mean(train_context_counts):7.2f}")
print(f"context 길이 표준편차: {np.std(train_context_counts):7.2f}")
# https://ko.wikipedia.org/wiki/%EB%B0%B1%EB%B6%84%EC%9C%84%EC%88%98
# 백분위수(Percentile)는 크기가 있는 값들로 이뤄진 자료를 순서대로 나열했을 때 백분율로 나타낸 특정 위치의 값을 이르는 용어이다.
# 일반적으로 크기가 작은 것부터 나열하여 가장 작은 것을 0, 가장 큰 것을 100으로 한다.
# 100개의 값을 가진 어떤 자료의 20 백분위수는 그 자료의 값들 중 20번째로 작은 값을 뜻한다. 50 백분위수는 중앙값과 같다.
percentile25 = np.percentile(train_context_counts, 25)
percentile50 = np.percentile(train_context_counts, 50)
percentile75 = np.percentile(train_context_counts, 75)
percentileIQR = percentile75 - percentile25
percentileMAX = percentile75 + percentileIQR * 1.5
print(f"context 25/100분위: {percentile25:7.2f}")
print(f"context 50/100분위: {percentile50:7.2f}")
print(f"context 75/100분위: {percentile75:7.2f}")
print(f"context IQR: {percentileIQR:7.2f}")
print(f"context MAX/100분위: {percentileMAX:7.2f}")
plt.figure(figsize=(4, 6))
# 박스플롯 생성
# 첫번째 파라메터: 여러 분포에 대한 데이터 리스트를
# labels: 입력한 데이터에 대한 라벨
# showmeans: 평균값을 표현
# 참고: https://leebaro.tistory.com/entry/%EB%B0%95%EC%8A%A4-%ED%94%8C%EB%A1%AFbox-plot-%EC%84%A4%EB%AA%85
plt.boxplot(train_context_counts, labels=['token counts'], showmeans=True)
plt.show()<jupyter_output><empty_output><jupyter_text>
### Checking the length distribution of the answer field across the whole dataset
<jupyter_code># token count
train_answer_starts = token_starts
train_answer_starts[:10]
# 그래프에 대한 이미지 사이즈 선언
# figsize: (가로, 세로) 형태의 튜플로 입력
plt.figure(figsize=(8, 4))
# histogram 선언
# bins: 히스토그램 값들에 대한 버켓 범위,
# range: x축 값의 범위
# facecolor: 그래프 색상
# label: 그래프에 대한 라벨
plt.hist(train_answer_starts, bins=500, range=[0, 500], facecolor='g', label='train')
# 그래프 제목
plt.title('Count of answer')
# 그래프 x 축 라벨
plt.xlabel('Number of answer')
# 그래프 y 축 라벨
plt.ylabel('Count of answer')
plt.show()
# 데이터 길이
print(f"answer 위치 최대: {np.max(train_answer_starts):4d}")
print(f"answer 위치 최소: {np.min(train_answer_starts):4d}")
print(f"answer 위치 평균: {np.mean(train_answer_starts):7.2f}")
print(f"answer 위치 표준편차: {np.std(train_answer_starts):7.2f}")
# https://ko.wikipedia.org/wiki/%EB%B0%B1%EB%B6%84%EC%9C%84%EC%88%98
# 백분위수(Percentile)는 크기가 있는 값들로 이뤄진 자료를 순서대로 나열했을 때 백분율로 나타낸 특정 위치의 값을 이르는 용어이다.
# 일반적으로 크기가 작은 것부터 나열하여 가장 작은 것을 0, 가장 큰 것을 100으로 한다.
# 100개의 값을 가진 어떤 자료의 20 백분위수는 그 자료의 값들 중 20번째로 작은 값을 뜻한다. 50 백분위수는 중앙값과 같다.
percentile25 = np.percentile(train_answer_starts, 25)
percentile50 = np.percentile(train_answer_starts, 50)
percentile75 = np.percentile(train_answer_starts, 75)
percentileIQR = percentile75 - percentile25
percentileMAX = percentile75 + percentileIQR * 1.5
print(f"answer 25/100분위: {percentile25:7.2f}")
print(f"answer 50/100분위: {percentile50:7.2f}")
print(f"answer 75/100분위: {percentile75:7.2f}")
print(f"answer IQR: {percentileIQR:7.2f}")
print(f"answer MAX/100분위: {percentileMAX:7.2f}")
plt.figure(figsize=(4, 6))
# 박스플롯 생성
# 첫번째 파라메터: 여러 분포에 대한 데이터 리스트를
# labels: 입력한 데이터에 대한 라벨
# showmeans: 평균값을 표현
# 참고: https://leebaro.tistory.com/entry/%EB%B0%95%EC%8A%A4-%ED%94%8C%EB%A1%AFbox-plot-%EC%84%A4%EB%AA%85
plt.boxplot(train_answer_starts, labels=['token counts'], showmeans=True)
plt.show()<jupyter_output><empty_output><jupyter_text>
## Data preprocessing 5: Word Cloud (visualizing term frequencies)
A word cloud is a way of visualizing how frequently terms appear in the data.
It lets you grasp the key words of a document at a glance and is used to surface characteristics of the data when analyzing big data.
The more frequent a word is, the larger it is drawn.
<jupyter_code># train documents
documents = []
# 전체 데이터에서 title, context, question 문장을 모두 추출합니다.
for data in tqdm(train_json["data"]):
title = data["title"]
documents.append(title)
for paragraph in data["paragraphs"]:
context = paragraph["context"]
documents.append(context)
for qa in paragraph["qas"]:
assert len(qa["answers"]) == 1
question = qa["question"]
documents.append(question)
documents[:10] # 그중 맨 앞 10개만 확인해 봅니다.
# documents를 전부 이어 하나의 문장으로 만들면 이렇게 보입니다.
" ".join(documents[:10])
# WordCloud로 " ".join(documents)를 처리해 봅니다.
wordcloud = WordCloud(width=800, height=800, font_path='/usr/share/fonts/truetype/nanum/NanumBarunGothic.ttf').generate(" ".join(documents))
plt.figure(figsize=(10, 10))
# image 출력, interpolation 이미지 시각화 옵션
plt.imshow(wordcloud, interpolation='bilinear')
plt.axis('off')
plt.show()<jupyter_output><empty_output><jupyter_text>
## Data preprocessing 5: loading the data
We load the dataset built so far into memory.
<jupyter_code>train_json = os.path.join(data_dir, "korquad_train.json")
dev_json = os.path.join(data_dir, "korquad_dev.json")
class Config(dict):
"""
json을 config 형태로 사용하기 위한 Class
:param dict: config dictionary
"""
__getattr__ = dict.__getitem__
__setattr__ = dict.__setitem__
args = Config({
'max_seq_length': 384,
'max_query_length': 64,
})
args
# 생성한 데이터셋 파일을 메모리에 로딩하는 함수
def load_data(args, filename):
inputs, segments, labels_start, labels_end = [], [], [], []
n_discard = 0
with open(filename, "r") as f:
for i, line in enumerate(tqdm(f, desc=f"Loading ...")):
data = json.loads(line)
token_start = data.get("token_start")
token_end = data.get("token_end")
question = data["question"][:args.max_query_length]
context = data["context"]
answer_tokens = " ".join(context[token_start:token_end + 1])
context_len = args.max_seq_length - len(question) - 3
if token_end >= context_len:
# 최대 길이내에 token이 들어가지 않은 경우 처리하지 않음
n_discard += 1
continue
context = context[:context_len]
assert len(question) + len(context) <= args.max_seq_length - 3
tokens = ['[CLS]'] + question + ['[SEP]'] + context + ['[SEP]']
ids = [vocab.piece_to_id(token) for token in tokens]
ids += [0] * (args.max_seq_length - len(ids))
inputs.append(ids)
segs = [0] * (len(question) + 2) + [1] * (len(context) + 1)
segs += [0] * (args.max_seq_length - len(segs))
segments.append(segs)
token_start += (len(question) + 2)
labels_start.append(token_start)
token_end += (len(question) + 2)
labels_end.append(token_end)
print(f'n_discard: {n_discard}')
return (np.array(inputs), np.array(segments)), (np.array(labels_start), np.array(labels_end))
# train data load
train_inputs, train_labels = load_data(args, train_json)
print(f"train_inputs: {train_inputs[0].shape}")
print(f"train_inputs: {train_inputs[1].shape}")
print(f"train_labels: {train_labels[0].shape}")
print(f"train_labels: {train_labels[1].shape}")
# dev data load
dev_inputs, dev_labels = load_data(args, dev_json)
print(f"dev_inputs: {dev_inputs[0].shape}")
print(f"dev_inputs: {dev_inputs[1].shape}")
print(f"dev_labels: {dev_labels[0].shape}")
print(f"dev_labels: {dev_labels[1].shape}")
train_inputs[:10], train_labels[:10]<jupyter_output><empty_output><jupyter_text>
### Checking the dataset we built
<jupyter_code># Question과 Context가 포함된 입력데이터 1번째
train_inputs[0][0]
# Question을 0으로, Context를 1로 구분해 준 Segment 데이터 1번째
train_inputs[1][0]
# Answer위치의 시작점과 끝점 라벨 1번째
train_labels[0][0], train_labels[1][0]<jupyter_output><empty_output><jupyter_text>
## Training an LSTM model
Let's train an LSTM model on the KorQuAD task.
It may look a bit involved, but note that the model has 2 inputs and 2 outputs.
The 2 inputs are the ```train_inputs[0], train_inputs[1]``` we saw in the previous step,
i.e. the Question+Context data and the Segment data, respectively.
The outputs are the positions of the start and end of the Answer.
<jupyter_code># LSTM 모델 생성 함수 작성
def build_model_lstm(n_vocab, n_seq, d_model):
tokens = tf.keras.layers.Input((None,), name='tokens')
segments = tf.keras.layers.Input((None,), name='segments')
hidden = tf.keras.layers.Embedding(n_vocab, d_model)(tokens) + tf.keras.layers.Embedding(2, d_model)(segments) # (bs, n_seq, d_model)
hidden = tf.keras.layers.LSTM(d_model, return_sequences=True)(hidden) # (bs, n_seq, d_model)
hidden = tf.keras.layers.LSTM(d_model, return_sequences=True)(hidden) # (bs, n_seq, d_model)
hidden = tf.keras.layers.Dense(2)(hidden) # (bs, n_seq, 2)
start_logits, end_logits = tf.split(hidden, 2, axis=-1) # (bs, n_seq, 1), (bs, n_seq, 1)
start_logits = tf.squeeze(start_logits, axis=-1) # (bs, n_seq)
start_outputs = tf.keras.layers.Softmax(name="start")(start_logits)
end_logits = tf.squeeze(end_logits, axis=-1) # (bs, n_seq)
end_outputs = tf.keras.layers.Softmax(name="end")(end_logits)
model = tf.keras.Model(inputs=(tokens, segments), outputs=(start_outputs, end_outputs))
return model
# 모델 생성
model = build_model_lstm(n_vocab=len(vocab), n_seq=512, d_model=512)
tf.keras.utils.plot_model(model, 'model.png', show_shapes=True)
# 모델 컴파일
model.compile(loss=tf.keras.losses.sparse_categorical_crossentropy, optimizer=tf.keras.optimizers.Adam(learning_rate=5e-4), metrics=["accuracy"])
# 모델 학습
# 1 Epoch 에 3분 가량 소요
# 20 Epoch 진행. 5 Epoch 이상 val_start_accuracy 향상이 없으면 early stopping
# early stopping
early_stopping = tf.keras.callbacks.EarlyStopping(monitor='val_start_accuracy', patience=5)
# save weights
save_weights = tf.keras.callbacks.ModelCheckpoint(os.path.join(data_dir, "korquad_lstm.hdf5"), monitor='val_start_accuracy', verbose=1, save_best_only=True, mode='max', save_freq='epoch', save_weights_only=True)
history = model.fit(train_inputs, train_labels, epochs=20, batch_size=128, validation_data=(dev_inputs, dev_labels), callbacks=[early_stopping, save_weights])<jupyter_output>Epoch 1/20
469/469 [==============================] - ETA: 0s - loss: 9.1249 - start_loss: 4.4189 - end_loss: 4.7060 - start_accuracy: 0.0685 - end_accuracy: 0.0526
Epoch 00001: val_start_accuracy improved from -inf to 0.09147, saving model to /home/ssac29/aiffel/bert_qna/data/korquad_lstm.hdf5
469/469 [==============================] - 246s 525ms/step - loss: 9.1249 - start_loss: 4.4189 - end_loss: 4.7060 - start_accuracy: 0.0685 - end_accuracy: 0.0526 - val_loss: 8.2432 - val_start_loss: 3.9167 - val_end_loss: 4.3265 - val_start_accuracy: 0.0915 - val_end_accuracy: 0.0797
Epoch 2/20
469/469 [==============================] - ETA: 0s - loss: 7.2974 - start_loss: 3.4785 - end_loss: 3.8189 - start_accuracy: 0.1254 - end_accuracy: 0.1168
Epoch 00002: val_start_accuracy improved from 0.09147 to 0.09867, saving model to /home/ssac29/aiffel/bert_qna/data/korquad_lstm.hdf5
469/469 [==============================] - 247s 528ms/step - loss: 7.2974 - start_loss: 3.4785 - end_loss: 3.8189 - star[...]<jupyter_text>
### Visualizing the training results
<jupyter_code># training result
plt.figure(figsize=(16, 4))
plt.subplot(1, 3, 1)
plt.plot(history.history['loss'], 'b-', label='loss')
plt.plot(history.history['val_loss'], 'r--', label='val_loss')
plt.xlabel('Epoch')
plt.legend()
plt.subplot(1, 3, 2)
plt.plot(history.history['start_accuracy'], 'g-', label='start_accuracy')
plt.plot(history.history['val_start_accuracy'], 'k--', label='val_start_accuracy')
plt.xlabel('Epoch')
plt.legend()
plt.subplot(1, 3, 3)
plt.plot(history.history['end_accuracy'], 'b-', label='end_accuracy')
plt.plot(history.history['val_end_accuracy'], 'g--', label='val_end_accuracy')
plt.xlabel('Epoch')
plt.legend()
plt.show()<jupyter_output><empty_output><jupyter_text>
### LSTM training results
Training with the LSTM model, you can see that val_loss does not decrease and the val_accuracy metrics do not improve much either.
This shows that on the KorQuAD task, training on the dataset alone without any pre-training does not get past a certain level.
Even if we vary the model, the results will be similar.
## Structure of the BERT model
### The Transformer model
The Transformer model
has a structure made of a self-attention based __Encoder Network__ and __Decoder Network__.
Because of this encoder-decoder structure, it is a model shape well suited to building a translator.
### The BERT model
The BERT model
keeps only the self-attention based Transformer __Encoder Network__ structure.
It increases the number of layers to 12 or more, and overall has far more parameters.
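To make the "encoder-only" point concrete, here is a minimal sketch of a single Transformer encoder block in tf.keras (the dimensions, ReLU activation, and layer choices are illustrative assumptions, not the actual BERT implementation used later in this notebook):

```python
import tensorflow as tf

def encoder_block(d_model=256, n_heads=4, d_ff=1024, dropout=0.1):
    """One self-attention encoder block: MHA -> Add & Norm -> FFN -> Add & Norm."""
    inputs = tf.keras.Input(shape=(None, d_model))
    attn = tf.keras.layers.MultiHeadAttention(num_heads=n_heads, key_dim=d_model // n_heads)(inputs, inputs)
    attn = tf.keras.layers.Dropout(dropout)(attn)
    x = tf.keras.layers.LayerNormalization(epsilon=1e-6)(inputs + attn)
    ffn = tf.keras.layers.Dense(d_ff, activation='relu')(x)   # BERT itself uses GELU here
    ffn = tf.keras.layers.Dense(d_model)(ffn)
    ffn = tf.keras.layers.Dropout(dropout)(ffn)
    outputs = tf.keras.layers.LayerNormalization(epsilon=1e-6)(x + ffn)
    return tf.keras.Model(inputs, outputs)

# BERT-base stacks 12 such blocks; a translator-style Transformer would pair an
# encoder stack like this with a separate decoder stack.
block = encoder_block()
print(block(tf.random.uniform((2, 8, 256))).shape)  # (2, 8, 256)
```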
## BERT 모델의 특징
참고.
[BERT 논문정리](https://mino-park7.github.io/nlp/2018/12/12/bert-%EB%85%BC%EB%AC%B8%EC%A0%95%EB%A6%AC/?fbclid=IwAR3S-8iLWEVG6FGUVxoYdwQyA-zG0GpOUzVEsFBd0ARFg4eFXqCyGLznu7w#33-pre-training-tasks)
### An approach that uses Transfer Learning
Transfer learning means first pre-training a model on a large amount of unlabeled data,
then loading that model and training it once more on labeled data for a specific task.
Through transfer learning, a wide range of NLP tasks, from token-level tasks to question answering, have shown large performance gains.
- __Feature-based transfer learning__
Earlier transfer learning, as in OpenAI GPT and ELMo,
applied pre-trained language representations by joining two networks together:
a pre-trained model was loaded and a task-specific network was attached behind it to build and train a new model.
Because it treats the input and output sequence unidirectionally, this approach gives a weaker language representation.
- __Fine-tuning transfer learning__
In contrast, transfer learning with BERT
applies the pre-trained parameters inside a single network through downstream training.
Rather than adding a separate task-specific network on top of the pre-trained model,
the task is handled by fine-tuning the parameters of the BERT model itself,
so the number of new task-specific parameters to learn is dramatically smaller.
### How BERT is pre-trained: the fine-tuning approach
Pre-training is
needed especially for those NLP tasks where understanding the relationship between two sentences matters.
In earlier models such as OpenAI GPT and ELMo, pre-training
uses an ordinary language model that predicts the next word from the preceding words.
This approach has the drawback of being unidirectional.
BERT pre-training consists of two tasks:
the __Masked Language Model (MLM)__ and
__Next Sentence Prediction (NSP)__.
- __Masked Language Model (MLM)__
Given the input \[나는\] \[MASK\] \[먹었다\], this language model trains BERT to predict that \[MASK\] is \[밥을\].
In contrast with next-token prediction, it implements a language model that learns a 'fill in the blank' problem.
MLM randomly masks a few tokens in the input data of the Transformer network.
That input is fed into the Transformer, and the model predicts the masked tokens from the context of the surrounding words alone.
Unlike predicting the whole sentence left to right,
this pre-training task predicts only the masked tokens.
The \[MASK\] token is used only during pre-training and never appears during fine-tuning.
Solving this token-recovery task is what builds the model's ability to grasp context.
Because the whole input, masked tokens included, goes into the Transformer encoder at once to predict the original token values,
the model can be called deeply bidirectional.
- __Next Sentence Prediction (NSP)__
Given the input \[나는\] \[밥을\] \[먹었다.\] \[SEP\] \[그래서\] \[지금\] \[배가\] \[부르다.\],
the task is to judge whether the two sentences on either side of \[SEP\] really follow each other in order.
When BERT receives these two sentences, the NSP result is read off the first position, the \[CLS\] token.
During pre-training, NSP feeds the two sentences together and asks whether they are consecutive or not.
Pairs are sampled 50:50 from genuinely consecutive sentences and randomly drawn ones, and BERT is trained to tell them apart.
A \[SEP\] token is placed between the tokens of the two sentences in the input data.
(A small illustrative sketch of both pre-training tasks follows below.)
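To make the two objectives concrete, here is a minimal, self-contained sketch. It is an added illustration, not code from this notebook: the 15% masking rate and the 80/10/10 replacement split follow the BERT paper, and the token ids, `mask_id`, and the tiny `corpus` are made-up values.

```python
import random
import numpy as np

def apply_mlm_masking(token_ids, mask_id, vocab_size, mask_prob=0.15, rng=None):
    """MLM corruption sketch: returns (corrupted_input, labels); labels are -1 at unmasked positions."""
    rng = rng or np.random.default_rng(0)
    inputs = np.array(token_ids)
    labels = np.full(len(token_ids), -1)
    for i, tok in enumerate(token_ids):
        if rng.random() < mask_prob:                  # pick roughly 15% of positions
            labels[i] = tok                           # the model must recover this id
            r = rng.random()
            if r < 0.8:
                inputs[i] = mask_id                   # 80%: replace with [MASK]
            elif r < 0.9:
                inputs[i] = rng.integers(vocab_size)  # 10%: replace with a random token
            # remaining 10%: keep the original token unchanged
    return inputs, labels

def sample_nsp_pair(corpus, rng=None):
    """NSP sampling sketch: 50% true next sentence (label 1), 50% random sentence (label 0)."""
    rng = rng or random.Random(0)
    i = rng.randrange(len(corpus) - 1)
    if rng.random() < 0.5:
        return corpus[i], corpus[i + 1], 1
    # a real implementation would also avoid accidentally drawing the true next sentence here
    return corpus[i], corpus[rng.randrange(len(corpus))], 0

corpus = [["나는", "밥을", "먹었다"], ["그래서", "지금", "배가", "부르다"], ["오늘은", "날씨가", "좋다"]]
sent_a, sent_b, is_next = sample_nsp_pair(corpus)
print(["[CLS]"] + sent_a + ["[SEP]"] + sent_b + ["[SEP]"], is_next)
print(apply_mlm_masking([11, 57, 89, 23, 42, 7], mask_id=4, vocab_size=8007))
```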
### BERT model input (input data)
The input side of the BERT model deserves a closer look.
__Input data__ $\quad$ | \[CLS\] | my | dog | is | cute | \[SEP\] | he | likes | play | ##ing | \[SEP\]
--- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | ---
$\quad$__TokenEmbeddings__ $\quad$ | E$_{[CLS]}$ | E$_{my}$ | E$_{dog}$ | E$_{is}$ | E$_{cute}$ | E$_{[SEP]}$ | E$_{he}$ | E$_{likes}$ | E$_{play}$ | E$_{\#\#ing}$ | E$_{[SEP]}$
$\quad$__SegmentEmbeddings__ $\quad$ | E$_{A}$ | E$_{A}$ | E$_{A}$ | E$_{A}$ | E$_{A}$ | E$_{A}$ | E$_{B}$ | E$_{B}$ | E$_{B}$ | E$_{B}$ | E$_{B}$
$\quad$__PositionEmbeddings__ $\quad$ | E$_{0}$ | E$_{1}$ | E$_{2}$ | E$_{3}$ | E$_{4}$ | E$_{5}$ | E$_{6}$ | E$_{7}$ | E$_{8}$ | E$_{9}$ | E$_{10}$
When a text input is given as in the \[Input data\] row above,
what actually enters the model is the sum of three embeddings: the __Token Embedding__, the __Segment Embedding__, and the __Position Embedding__.
(In practice, Layer Normalization and Dropout are then applied on top of this sum.)
- __Token Embedding__
BERT tokenizes text with a subword tokenizer known as the WordPiece model.
Character-level units are the baseline, but frequently occurring longer subwords are also kept as single units,
while rare words are split back into subword units.
This also has the advantage of keeping rare words from being discarded as OOV (out-of-vocabulary) tokens.
What finally enters the model is the embedding of each WordPiece token.
- __Segment Embedding__
This embedding does not exist in the original Transformer. It marks which sentence each token belongs to.
It is particularly useful when, as in QA, a token must be identified as belonging to the Question or to the Context.
- __Position Embedding__
This embeds where each token sits within the sequence.
It is the same position embedding used in the original Transformer.
(A small illustrative sketch of combining the three embeddings follows below.)
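As a stand-alone illustration of that sum (the sizes and ids below are made up for the example; the notebook's own `BERT.get_embedding` further down does the same thing, followed by LayerNormalization and Dropout):

```python
import tensorflow as tf

n_vocab, n_seq, d_model = 8007, 384, 256                  # illustrative sizes only
token_emb = tf.keras.layers.Embedding(n_vocab, d_model)
segment_emb = tf.keras.layers.Embedding(2, d_model)
position_emb = tf.keras.layers.Embedding(n_seq, d_model)

tokens = tf.constant([[5, 23, 9, 0]])                     # (bs=1, seq=4), 0 = pad id
segments = tf.constant([[0, 0, 1, 1]])                    # sentence A vs sentence B
positions = tf.range(tf.shape(tokens)[1])[tf.newaxis, :]  # 0, 1, 2, 3

embed = token_emb(tokens) + segment_emb(segments) + position_emb(positions)
print(embed.shape)                                        # (1, 4, 256)
```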
## Implementing the BERT model
### Preparing the layers that make up the BERT model
<jupyter_code># 유틸리티 함수들
def get_pad_mask(tokens, i_pad=0):
"""
pad mask 계산하는 함수
:param tokens: tokens (bs, n_seq)
:param i_pad: id of pad
:return mask: pad mask (pad: 1, other: 0)
"""
mask = tf.cast(tf.math.equal(tokens, i_pad), tf.float32)
mask = tf.expand_dims(mask, axis=1)
return mask
def get_ahead_mask(tokens, i_pad=0):
"""
ahead mask 계산하는 함수
:param tokens: tokens (bs, n_seq)
:param i_pad: id of pad
:return mask: ahead and pad mask (ahead or pad: 1, other: 0)
"""
n_seq = tf.shape(tokens)[1]
ahead_mask = 1 - tf.linalg.band_part(tf.ones((n_seq, n_seq)), -1, 0)
ahead_mask = tf.expand_dims(ahead_mask, axis=0)
pad_mask = get_pad_mask(tokens, i_pad)
mask = tf.maximum(ahead_mask, pad_mask)
return mask
@tf.function(experimental_relax_shapes=True)
def gelu(x):
"""
gelu activation 함수
:param x: 입력 값
:return: gelu activation result
"""
return 0.5 * x * (1 + K.tanh(x * 0.7978845608 * (1 + 0.044715 * x * x)))
def kernel_initializer(stddev=0.02):
"""
parameter initializer 생성
:param stddev: 생성할 랜덤 변수의 표준편차
"""
return tf.keras.initializers.TruncatedNormal(stddev=stddev)
def bias_initializer():
"""
bias initializer 생성
"""
return tf.zeros_initializer
class Config(dict):
"""
json을 config 형태로 사용하기 위한 Class
:param dict: config dictionary
"""
__getattr__ = dict.__getitem__
__setattr__ = dict.__setitem__
@classmethod
def load(cls, file):
"""
file에서 Config를 생성 함
:param file: filename
"""
with open(file, 'r') as f:
config = json.loads(f.read())
return Config(config)
# mode == "embedding" 일 경우 Token Embedding Layer 로 사용되는 layer 클래스입니다.
class SharedEmbedding(tf.keras.layers.Layer):
"""
Weighed Shared Embedding Class
"""
def __init__(self, config, name="weight_shared_embedding"):
"""
생성자
:param config: Config 객체
:param name: layer name
"""
super().__init__(name=name)
self.n_vocab = config.n_vocab
self.d_model = config.d_model
def build(self, input_shape):
"""
shared weight 생성
:param input_shape: Tensor Shape (not used)
"""
with tf.name_scope("shared_embedding_weight"):
self.shared_weights = self.add_weight(
"weights",
shape=[self.n_vocab, self.d_model],
initializer=kernel_initializer()
)
def call(self, inputs, mode="embedding"):
"""
layer 실행
:param inputs: 입력
:param mode: 실행 모드
:return: embedding or linear 실행 결과
"""
# mode가 embedding일 경우 embedding lookup 실행
if mode == "embedding":
return self._embedding(inputs)
# mode가 linear일 경우 linear 실행
elif mode == "linear":
return self._linear(inputs)
# mode가 기타일 경우 오류 발생
else:
raise ValueError(f"mode {mode} is not valid.")
def _embedding(self, inputs):
"""
embedding lookup
:param inputs: 입력
"""
embed = tf.gather(self.shared_weights, tf.cast(inputs, tf.int32))
return embed
def _linear(self, inputs): # (bs, n_seq, d_model)
"""
linear 실행
:param inputs: 입력
"""
n_batch = tf.shape(inputs)[0]
n_seq = tf.shape(inputs)[1]
inputs = tf.reshape(inputs, [-1, self.d_model]) # (bs * n_seq, d_model)
outputs = tf.matmul(inputs, self.shared_weights, transpose_b=True)
outputs = tf.reshape(outputs, [n_batch, n_seq, self.n_vocab]) # (bs, n_seq, n_vocab)
return outputs
class PositionalEmbedding(tf.keras.layers.Layer):
"""
Positional Embedding Class
"""
def __init__(self, config, name="position_embedding"):
"""
생성자
:param config: Config 객체
:param name: layer name
"""
super().__init__(name=name)
self.embedding = tf.keras.layers.Embedding(config.n_seq, config.d_model, embeddings_initializer=kernel_initializer())
def call(self, inputs):
"""
layer 실행
:param inputs: 입력
:return embed: positional embedding lookup 결과
"""
position = tf.cast(tf.math.cumsum(tf.ones_like(inputs), axis=1, exclusive=True), tf.int32)
embed = self.embedding(position)
return embed
class ScaleDotProductAttention(tf.keras.layers.Layer):
"""
Scale Dot Product Attention Class
"""
def __init__(self, name="scale_dot_product_attention"):
"""
생성자
:param name: layer name
"""
super().__init__(name=name)
def call(self, Q, K, V, attn_mask):
"""
layer 실행
:param Q: Q value
:param K: K value
:param V: V value
:param attn_mask: 실행 모드
:return attn_out: attention 실행 결과
"""
attn_score = tf.matmul(Q, K, transpose_b=True)
scale = tf.math.sqrt(tf.cast(tf.shape(K)[-1], tf.float32))
attn_scale = tf.math.divide(attn_score, scale)
attn_scale -= 1.e9 * attn_mask
attn_prob = tf.nn.softmax(attn_scale, axis=-1)
attn_out = tf.matmul(attn_prob, V)
return attn_out
class MultiHeadAttention(tf.keras.layers.Layer):
"""
Multi Head Attention Class
"""
def __init__(self, config, name="multi_head_attention"):
"""
생성자
:param config: Config 객체
:param name: layer name
"""
super().__init__(name=name)
self.d_model = config.d_model
self.n_head = config.n_head
self.d_head = config.d_head
# Q, K, V input dense layer
self.W_Q = tf.keras.layers.Dense(config.n_head * config.d_head, kernel_initializer=kernel_initializer(), bias_initializer=bias_initializer())
self.W_K = tf.keras.layers.Dense(config.n_head * config.d_head, kernel_initializer=kernel_initializer(), bias_initializer=bias_initializer())
self.W_V = tf.keras.layers.Dense(config.n_head * config.d_head, kernel_initializer=kernel_initializer(), bias_initializer=bias_initializer())
# Scale Dot Product Attention class
self.attention = ScaleDotProductAttention(name="self_attention")
# output dense layer
self.W_O = tf.keras.layers.Dense(config.d_model, kernel_initializer=kernel_initializer(), bias_initializer=bias_initializer())
def call(self, Q, K, V, attn_mask):
"""
layer 실행
:param Q: Q value
:param K: K value
:param V: V value
:param attn_mask: 실행 모드
:return attn_out: attention 실행 결과
"""
# reshape Q, K, V, attn_mask
batch_size = tf.shape(Q)[0]
Q_m = tf.transpose(tf.reshape(self.W_Q(Q), [batch_size, -1, self.n_head, self.d_head]), [0, 2, 1, 3]) # (bs, n_head, Q_len, d_head)
K_m = tf.transpose(tf.reshape(self.W_K(K), [batch_size, -1, self.n_head, self.d_head]), [0, 2, 1, 3]) # (bs, n_head, K_len, d_head)
V_m = tf.transpose(tf.reshape(self.W_V(V), [batch_size, -1, self.n_head, self.d_head]), [0, 2, 1, 3]) # (bs, n_head, K_len, d_head)
attn_mask_m = tf.expand_dims(attn_mask, axis=1)
# Scale Dot Product Attention with multi head Q, K, V, attn_mask
attn_out = self.attention(Q_m, K_m, V_m, attn_mask_m) # (bs, n_head, Q_len, d_head)
# transpose and liner
attn_out_m = tf.transpose(attn_out, perm=[0, 2, 1, 3]) # (bs, Q_len, n_head, d_head)
attn_out = tf.reshape(attn_out_m, [batch_size, -1, self.n_head * self.d_head]) # (bs, Q_len, d_model); use the layer's own sizes rather than the global config
attn_out = self.W_O(attn_out) # (bs, Q_len, d_model)
return attn_out
class PositionWiseFeedForward(tf.keras.layers.Layer):
"""
Position Wise Feed Forward Class
"""
def __init__(self, config, name="feed_forward"):
"""
생성자
:param config: Config 객체
:param name: layer name
"""
super().__init__(name=name)
self.W_1 = tf.keras.layers.Dense(config.d_ff, activation=gelu, kernel_initializer=kernel_initializer(), bias_initializer=bias_initializer())
self.W_2 = tf.keras.layers.Dense(config.d_model, kernel_initializer=kernel_initializer(), bias_initializer=bias_initializer())
def call(self, inputs):
"""
layer 실행
:param inputs: inputs
:return ff_val: feed forward 실행 결과
"""
ff_val = self.W_2(self.W_1(inputs))
return ff_val
class EncoderLayer(tf.keras.layers.Layer):
"""
Encoder Layer Class
"""
def __init__(self, config, name="encoder_layer"):
"""
생성자
:param config: Config 객체
:param name: layer name
"""
super().__init__(name=name)
self.self_attention = MultiHeadAttention(config)
self.norm1 = tf.keras.layers.LayerNormalization(epsilon=config.layernorm_epsilon)
self.ffn = PositionWiseFeedForward(config)
self.norm2 = tf.keras.layers.LayerNormalization(epsilon=config.layernorm_epsilon)
self.dropout = tf.keras.layers.Dropout(config.dropout)
def call(self, enc_embed, self_mask):
"""
layer 실행
:param enc_embed: enc_embed 또는 이전 EncoderLayer의 출력
:param self_mask: enc_tokens의 pad mask
:return enc_out: EncoderLayer 실행 결과
"""
self_attn_val = self.self_attention(enc_embed, enc_embed, enc_embed, self_mask)
norm1_val = self.norm1(enc_embed + self.dropout(self_attn_val))
ffn_val = self.ffn(norm1_val)
enc_out = self.norm2(norm1_val + self.dropout(ffn_val))
return enc_out<jupyter_output><empty_output><jupyter_text>
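A quick, optional shape check of the layers defined above can catch wiring mistakes before the full model is built. This cell is an addition, not part of the original notebook; the `Config` values are arbitrary, and it relies on the `self.n_head * self.d_head` reshape inside `MultiHeadAttention` above.

```python
# Optional sanity check for the layers above: push random ids through one EncoderLayer.
layer_test_config = Config({"n_vocab": 100, "n_seq": 16, "d_model": 32, "n_head": 4,
                            "d_head": 8, "d_ff": 64, "dropout": 0.1, "layernorm_epsilon": 0.001})
dummy_tokens = tf.random.uniform((2, 16), maxval=100, dtype=tf.int32)   # (bs, n_seq)
dummy_embed = SharedEmbedding(layer_test_config)(dummy_tokens)          # (bs, n_seq, d_model)
enc_out = EncoderLayer(layer_test_config)(dummy_embed, get_pad_mask(dummy_tokens))
print(enc_out.shape)   # expected: (2, 16, 32)
```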
### Assembling the BERT model
In the BERT model implementation below, the layers created above are combined.
<jupyter_code>class BERT(tf.keras.layers.Layer):
"""
BERT Class
"""
def __init__(self, config, name="bert"):
"""
생성자
:param config: Config 객체
:param name: layer name
"""
super().__init__(name=name)
self.i_pad = config.i_pad
self.embedding = SharedEmbedding(config)
self.position = PositionalEmbedding(config)
self.segment = tf.keras.layers.Embedding(2, config.d_model, embeddings_initializer=kernel_initializer())
self.norm = tf.keras.layers.LayerNormalization(epsilon=config.layernorm_epsilon)
self.encoder_layers = [EncoderLayer(config, name=f"encoder_layer_{i}") for i in range(config.n_layer)]
self.dropout = tf.keras.layers.Dropout(config.dropout)
def call(self, enc_tokens, segments):
"""
layer 실행
:param enc_tokens: encoder tokens
:param segments: token segments
:return logits_cls: CLS 결과 logits
:return logits_lm: LM 결과 logits
"""
enc_self_mask = get_pad_mask(enc_tokens, self.i_pad)
enc_embed = self.get_embedding(enc_tokens, segments)
enc_out = self.dropout(enc_embed)
for encoder_layer in self.encoder_layers:
enc_out = encoder_layer(enc_out, enc_self_mask)
logits_cls = enc_out[:,0]
logits_lm = enc_out
return logits_cls, logits_lm
def get_embedding(self, tokens, segments):
"""
token embedding, position embedding lookup
:param tokens: 입력 tokens
:param segments: 입력 segments
:return embed: embedding 결과
"""
embed = self.embedding(tokens) + self.position(tokens) + self.segment(segments)
embed = self.norm(embed)
return embed<jupyter_output><empty_output><jupyter_text>
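Before attaching the KorQuAD head, the bare BERT layer can be smoke-tested on random ids, the same trick the notebook uses later when loading pretrained weights. The small configuration here is arbitrary and added only for illustration.

```python
# Optional sanity check (added, not in the original notebook): run BERT on random inputs.
bert_test_config = Config({"n_vocab": 100, "n_seq": 16, "d_model": 32, "n_head": 4, "d_head": 8,
                           "d_ff": 64, "dropout": 0.1, "layernorm_epsilon": 0.001,
                           "n_layer": 2, "i_pad": 0})
bert_test = BERT(bert_test_config)
logits_cls, logits_lm = bert_test(np.random.randint(0, 100, (4, 10)),   # tokens
                                  np.random.randint(0, 2, (4, 10)))     # segments
print(logits_cls.shape, logits_lm.shape)   # expected: (4, 32) and (4, 10, 32)
```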
## Training with the BERT model
Below is a model class that attaches a fully connected layer to the BERT layer so that it can be fine-tuned for KorQuAD.
<jupyter_code>class BERT4KorQuAD(tf.keras.Model):
def __init__(self, config):
super().__init__(name='BERT4KorQuAD')
self.bert = BERT(config)
self.dense = tf.keras.layers.Dense(2)
def call(self, enc_tokens, segments):
logits_cls, logits_lm = self.bert(enc_tokens, segments)
hidden = self.dense(logits_lm) # (bs, n_seq, 2)
start_logits, end_logits = tf.split(hidden, 2, axis=-1) # (bs, n_seq, 1), (bs, n_seq, 1)
start_logits = tf.squeeze(start_logits, axis=-1)
start_outputs = tf.keras.layers.Softmax(name="start")(start_logits)
end_logits = tf.squeeze(end_logits, axis=-1)
end_outputs = tf.keras.layers.Softmax(name="end")(end_logits)
return start_outputs, end_outputs
config = Config({"d_model": 256, "n_head": 4, "d_head": 64, "dropout": 0.1, "d_ff": 1024, "layernorm_epsilon": 0.001, "n_layer": 3, "n_seq": 384, "n_vocab": 0, "i_pad": 0})
config.n_vocab = len(vocab)
config.i_pad = vocab.pad_id()
config<jupyter_output><empty_output><jupyter_text>
The model size is different and the batch size we can afford changes, so only the batch construction is redone here.
Because of memory limits, a batch size of around 32 is appropriate.
<jupyter_code>bert_batch_size = 32
train_dataset = tf.data.Dataset.from_tensor_slices((train_inputs, train_labels)).shuffle(10000).batch(bert_batch_size)
dev_dataset = tf.data.Dataset.from_tensor_slices((dev_inputs, dev_labels)).batch(bert_batch_size)
model = BERT4KorQuAD(config)<jupyter_output><empty_output><jupyter_text>
### Training without pretraining
The point of BERT is to make use of a pretrained model.
Pretraining itself is a huge job: reaching good performance means training for close to a month on workstation-class hardware.
So in this step we only build the BERT model and train it with no pretraining at all.
<jupyter_code>def train_epoch(model, dataset, loss_fn, acc_fn, optimizer):
metric_start_loss = tf.keras.metrics.Mean(name='start_loss')
metric_end_loss = tf.keras.metrics.Mean(name='end_loss')
metric_start_acc = tf.keras.metrics.Mean(name='start_acc')
metric_end_acc = tf.keras.metrics.Mean(name='end_acc')
p_bar = tqdm(dataset)
for batch, ((enc_tokens, segments), (start_labels, end_labels)) in enumerate(p_bar):
with tf.GradientTape() as tape:
start_outputs, end_outputs = model(enc_tokens, segments)
start_loss = loss_fn(start_labels, start_outputs)
end_loss = loss_fn(end_labels, end_outputs)
loss = start_loss + end_loss
start_acc = acc_fn(start_labels, start_outputs)
end_acc = acc_fn(end_labels, end_outputs)
gradients = tape.gradient(loss, model.trainable_variables)
optimizer.apply_gradients(zip(gradients, model.trainable_variables))
metric_start_loss(start_loss)
metric_end_loss(end_loss)
metric_start_acc(start_acc)
metric_end_acc(end_acc)
if batch % 10 == 9:
p_bar.set_description(f'loss: {metric_start_loss.result():0.4f}, {metric_end_loss.result():0.4f}, acc: {metric_start_acc.result():0.4f}, {metric_end_acc.result():0.4f}')
p_bar.close()
return metric_start_loss.result(), metric_end_loss.result(), metric_start_acc.result(), metric_end_acc.result()
def eval_epoch(model, dataset, loss_fn, acc_fn):
metric_start_loss = tf.keras.metrics.Mean(name='start_loss')
metric_end_loss = tf.keras.metrics.Mean(name='end_loss')
metric_start_acc = tf.keras.metrics.Mean(name='start_acc')
metric_end_acc = tf.keras.metrics.Mean(name='end_acc')
for batch, ((enc_tokens, segments), (start_labels, end_labels)) in enumerate(dataset):
start_outputs, end_outputs = model(enc_tokens, segments)
start_loss = loss_fn(start_labels, start_outputs)
end_loss = loss_fn(end_labels, end_outputs)
start_acc = acc_fn(start_labels, start_outputs)
end_acc = acc_fn(end_labels, end_outputs)
metric_start_loss(start_loss)
metric_end_loss(end_loss)
metric_start_acc(start_acc)
metric_end_acc(end_acc)
return metric_start_loss.result(), metric_end_loss.result(), metric_start_acc.result(), metric_end_acc.result()
# 모델 학습
# 1 Epoch 에 5분 가량 소요
# 20 Epoch 진행. 5 Epoch 이상 val_start_accuracy 향상이 없으면 early stopping
train_history = []
eval_history = []
loss_fn = tf.keras.losses.sparse_categorical_crossentropy
acc_fn = tf.keras.metrics.sparse_categorical_accuracy
optimizer = tf.keras.optimizers.Adam(learning_rate=5e-4)
best_acc = .0
patience = 0
for epoch in range(20):
train_start_loss, train_end_loss, train_start_acc, train_end_acc = train_epoch(model, train_dataset, loss_fn, acc_fn, optimizer)
start_loss, end_loss, start_acc, end_acc = eval_epoch(model, dev_dataset, loss_fn, acc_fn)
train_result = [epoch, train_start_loss, train_end_loss, train_start_acc, train_end_acc]
eval_result = [epoch, start_loss, end_loss, start_acc, end_acc]
print(f'train {epoch} >> loss: {train_start_loss:0.4f}, {train_end_loss:0.4f}, acc: {train_start_acc:0.4f}, {train_end_acc:0.4f}')
print(f'eval {epoch} >> loss: {start_loss:0.4f}, {end_loss:0.4f}, acc: {start_acc:0.4f}, {end_acc:0.4f}')
acc = start_acc + end_acc
if best_acc < acc:
patience = 0
best_acc = acc
model.save_weights(os.path.join(data_dir, "korquad_bert_pretrained.hdf5"))
print(f'save best model')
else:
patience += 1
if 5 <= patience:
print(f'early stopping')
break<jupyter_output><empty_output><jupyter_text>
### Visualizing the training results
<jupyter_code># 학습 과정에서의 모델 매트릭 log 확인
# vanilla BERT 모델 학습결과 train loss 와 acc, validation loss 와 acc 를 저장
vanilla_train_start_loss = [4.0624, 3.4155, 3.0915, 2.8805, 2.7345, 2.4980, 2.2642, 1.9725, 1.6739, 1.3955]
vanilla_train_end_loss = [4.5772, 3.9612, 3.5802, 3.2991, 3.0948, 2.7852, 2.4885, 2.1004, 1.7605, 1.4605]
vanilla_train_start_acc = [0.0812, 0.1397, 0.1945, 0.2313, 0.2593, 0.3089, 0.3651, 0.4346, 0.5152, 0.5880]
vanilla_train_end_acc = [0.0610, 0.1158, 0.1663, 0.2105, 0.2469, 0.3039, 0.3682, 0.4570, 0.5328, 0.6045]
vanilla_eval_start_loss = [3.9064, 3.6582, 3.6936, 3.6640, 3.9409, 4.0885, 4.3037, 4.8327, 5.1923, 5.3824]
vanilla_eval_end_loss = [4.4222, 4.2336, 4.2645, 4.2235, 4.4810, 4.5909, 4.7771, 5.3338, 6.1183, 6.0056]
vanilla_eval_start_acc = [0.0967, 0.1410, 0.1584, 0.1619, 0.1573, 0.1592, 0.1533, 0.1573, 0.1496, 0.1466]
vanilla_eval_end_acc = [0.0720, 0.1178, 0.1390, 0.1440, 0.1520, 0.1480, 0.1448, 0.1519, 0.1443, 0.1422]
# training result
# 위 학습 log 의 loss, acc 데이터 리스트 만들어서 시각화 합시다
# 다음번 모델 학습 때는 리스트 따로 만들어서 관리 합시다. 저장파일을 만들던 ...
plt.figure(figsize=(16, 4))
plt.subplot(1, 4, 1)
plt.plot(vanilla_train_start_loss, 'b--', label='vanilla_train_start_loss')
plt.plot(vanilla_eval_start_loss, 'r-', label='vanilla_eval_start_loss')
plt.xlabel('Epoch')
plt.legend()
plt.subplot(1, 4, 2)
plt.plot(vanilla_train_end_loss, 'b--', label='vanilla_train_end_loss')
plt.plot(vanilla_eval_end_loss, 'r-', label='vanilla_eval_end_loss')
plt.xlabel('Epoch')
plt.legend()
plt.subplot(1, 4, 3)
plt.plot(vanilla_train_start_acc, 'b--', label='vanilla_train_start_acc')
plt.plot(vanilla_eval_start_acc, 'r-', label='vanilla_eval_start_acc')
plt.xlabel('Epoch')
plt.legend()
plt.subplot(1, 4, 4)
plt.plot(vanilla_train_end_acc, 'b--', label='vanilla_train_end_acc')
plt.plot(vanilla_eval_end_acc, 'r-', label='vanilla_eval_end_acc')
plt.xlabel('Epoch')
plt.legend()
plt.show()
dev_json = os.path.join(data_dir, "korquad_dev.json")
with open(dev_json) as f:
cnt = 0
for i, line in enumerate(f):
data = json.loads(line)
question = vocab.decode_pieces(data['question'])
context = vocab.decode_pieces(data['context'])
answer = data['answer']
answer_predict = do_predict(model, question, context)
if answer in answer_predict:
print(i)
print("질문 : ", question)
print("지문 : ", context)
print("정답 : ", answer)
print("예측 : ", answer_predict, "\n")
else:
print('예측 실패!', "\n")
if 100 < i:
break<jupyter_output>예측 실패!
예측 실패!
2
질문 : 임종석이 여의도 농민 폭력 시위를 주도한 혐의로 지명수배된 연도는?
지문 : 1989년 2월 15일 여의도 농민 폭력 시위를 주도한 혐의(폭력행위등처벌에관한법률위반)으로 지명수배되었다. 1989년 3월 12일 서울지방검찰청 공안부는 임종석의 사전구속영장을 발부받았다. 같은 해 6월 30일 평양축전에 임수경을 대표로 파견하여 국가보안법위반 혐의가 추가되었다. 경찰은 12월 18일~20일 사이 서울 경희대학교에서 임종석이 성명 발표를 추진하고 있다는 첩보를 입수했고, 12월 18일 오전 7시 40분 경 가스총과 전자봉으로 무장한 특공조 및 대공과 직원 12명 등 22명의 사복 경찰을 승용차 8대에 나누어 경희대학교에 투입했다. 1989년 12월 18일 오전 8시 15분 경 서울청량리경찰서는 호위 학생 5명과 함께 경희대학교 학생회관 건물 계단을 내려오는 임종석을 발견, 검거해 구속을 집행했다. 임종석은 청량리경찰서에서 약 1시간 동안 조사를 받은 뒤 오전 9시 50분 경 서울 장안동의 서울지방경찰청 공안분실로 인계되었다.
정답 : 1989년
예측 : 1989년
예측 실패!
예측 실패!
예측 실패!
예측 실패!
7
질문 : 정부의 헌법개정안 준비 과정에 대해서 청와대 비서실이 아니라 국무회의 중심으로 이뤄졌어야 했다고 지적한 원로 헌법학자는?
지문 : "내각과 장관들이 소외되고 대통령비서실의 권한이 너무 크다", "행보가 비서 본연의 역할을 벗어난다"는 의견이 제기되었다. 대표적인 예가 10차 개헌안 발표이다. 원로 헌법학자인 허영 경희대 석좌교수는 정부의 헌법개정안 준비 과정에 대해 "청와대 비서실이 아닌 국무회의 중심으로 이뤄졌어야 했다"고 지적했다. '국무회의의 심의를 거쳐야 한다'(제89조)는 헌법 규정에 충실하지 않았다는 것이다. 그러면서 "법무부 장관을 제쳐놓고 민정수석이 개정안을 설명하는 게 이해가 안 된다"고 지적했다. 민정수석은 국회의원에 대해 책임지는 법무부 장관도 아니고, [...]<jupyter_text>
### Results of training BERT without pretraining
When the BERT model is trained without any pretraining,
train accuracy keeps improving up to about 0.45 as the epochs progress, while
validation accuracy stays around 0.13 and does not improve.
This is because the model carries none of the word embeddings that careful pretraining on a large corpus would provide.
## Using a pretrained BERT model
### Loading the pretrained model
<jupyter_code>checkpoint_file = os.path.join(model_dir, 'bert_pretrain_32000.hdf5')
model = BERT4KorQuAD(config)
if os.path.exists(checkpoint_file):
# pretrained model 을 로드하기 위해 먼저 모델이 생성되어 있어야 한다.
enc_tokens = np.random.randint(0, len(vocab), (4, 10))
segments = np.random.randint(0, 2, (4, 10))
model(enc_tokens, segments)
# checkpoint 파일로부터 필요한 layer를 불러온다.
model.load_weights(os.path.join(model_dir, "bert_pretrain_32000.hdf5"), by_name=True)
model.summary()
else:
print('NO Pretrained Model')<jupyter_output>Model: "BERT4KorQuAD"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
bert (BERT) multiple 10662400
_________________________________________________________________
dense_76 (Dense) multiple 514
=================================================================
Total params: 10,662,914
Trainable params: 10,662,914
Non-trainable params: 0
_________________________________________________________________
<jupyter_text>
### Fine-tuning the pretrained model
The training code is the same as in the previous step.
The only difference is that the model being trained is loaded from a pretrained checkpoint instead of being randomly initialized.
### Testing the fine-tuned model (running inference)
We use the fine-tuned model to check actual question-answering results.
<jupyter_code>def do_predict(model, question, context):
"""
입력에 대한 답변 생성하는 함수
:param model: model
:param question: 입력 문자열
:param context: 입력 문자열
"""
q_tokens = vocab.encode_as_pieces(question)[:args.max_query_length]
c_tokens = vocab.encode_as_pieces(context)[:args.max_seq_length - len(q_tokens) - 3]
tokens = ['[CLS]'] + q_tokens + ['[SEP]'] + c_tokens + ['[SEP]']
token_ids = [vocab.piece_to_id(token) for token in tokens]
segments = [0] * (len(q_tokens) + 2) + [1] * (len(c_tokens) + 1)
y_start, y_end = model(np.array([token_ids]), np.array([segments]))
# print(y_start, y_end)
y_start_idx = K.argmax(y_start, axis=-1)[0].numpy()
y_end_idx = K.argmax(y_end, axis=-1)[0].numpy()
answer_tokens = tokens[y_start_idx:y_end_idx + 1]
return vocab.decode_pieces(answer_tokens)<jupyter_output><empty_output><jupyter_text>
### Prediction test with the trained model
<jupyter_code>dev_json = os.path.join(data_dir, "korquad_dev.json")
with open(dev_json) as f:
for i, line in enumerate(f):
data = json.loads(line)
question = vocab.decode_pieces(data['question'])
context = vocab.decode_pieces(data['context'])
answer = data['answer']
answer_predict = do_predict(model, question, context)
if answer in answer_predict:
print(i)
print("질문 : ", question)
print("지문 : ", context)
print("정답 : ", answer)
print("예측 : ", answer_predict, "\n")
if 100 < i:
break
# 모델 학습
# 1 Epoch 에 5분 가량 소요
# 20 Epoch 진행. 5 Epoch 이상 val_start_accuracy 향상이 없으면 early stopping
loss_fn = tf.keras.losses.sparse_categorical_crossentropy
acc_fn = tf.keras.metrics.sparse_categorical_accuracy
optimizer = tf.keras.optimizers.Adam(learning_rate=5e-4)
best_acc = .0
patience = 0
for epoch in range(20):
train_start_loss, train_end_loss, train_start_acc, train_end_acc = train_epoch(model, train_dataset, loss_fn, acc_fn, optimizer)
start_loss, end_loss, start_acc, end_acc = eval_epoch(model, dev_dataset, loss_fn, acc_fn)
train_result = [epoch, train_start_loss, train_end_loss, train_start_acc, train_end_acc]
eval_result = [epoch, start_loss, end_loss, start_acc, end_acc]
print(f'train {epoch} >> loss: {train_start_loss:0.4f}, {train_end_loss:0.4f}, acc: {train_start_acc:0.4f}, {train_end_acc:0.4f}')
print(f'eval {epoch} >> loss: {start_loss:0.4f}, {end_loss:0.4f}, acc: {start_acc:0.4f}, {end_acc:0.4f}')
acc = start_acc + end_acc
if best_acc < acc:
patience = 0
best_acc = acc
model.save_weights(os.path.join(data_dir, "korquad_bert_none_pretrain.hdf5"))
print(f'save best model')
else:
patience += 1
if 5 <= patience:
print(f'early stopping')
break<jupyter_output><empty_output><jupyter_text>
### Visualizing the training results
<jupyter_code># 학습 과정에서의 모델 매트릭 log 확인
# Tensor.numpy() 메서드를 사용하여 tf.Tensor 에서 값을 가져올 수 있습니다. (.array() 나 .data() 는?)
pretrained_train_start_loss = [3.9208, 3.2375, 2.9960, 2.8156, 2.6055, 2.3674, 2.0659, 1.7587]
pretrained_train_end_loss = [4.4334, 3.7766, 3.4050, 3.0423, 2.6455, 2.2312, 1.8056, 1.4262]
pretrained_train_start_acc = [0.1028, 0.1709, 0.2071, 0.2412, 0.2829, 0.3392, 0.4116, 0.4946]
pretrained_train_end_acc = [0.0839, 0.1478, 0.1990, 0.2520, 0.3219, 0.4041, 0.4988, 0.5913]
pretrained_eval_start_loss = [3.6453, 3.5813, 3.6662, 3.7338, 3.8957, 4.1137, 4.5747, 4.6807]
pretrained_eval_end_loss = [4.2447, 4.1463, 4.2593, 4.3728, 4.6429, 5.1066, 5.9167, 5.7736]
pretrained_eval_start_acc = [0.1232, 0.1536, 0.1589, 0.1656, 0.1605, 0.1499, 0.1361, 0.1390]
pretrained_eval_end_acc = [0.1066, 0.1327, 0.1434, 0.1331, 0.1338, 0.1320, 0.1229, 0.1241]
# training result
# 위 학습 log 의 loss, acc 데이터 리스트 만들어서 시각화 합시다
# 다음번 모델 학습 때는 리스트 따로 만들어서 관리 합시다. 저장파일을 만들던 ...
plt.figure(figsize=(16, 4))
plt.subplot(1, 4, 1)
plt.plot(pretrained_train_start_loss, 'b--', label='pretrained_train_start_loss')
plt.plot(pretrained_eval_start_loss, 'r-', label='pretrained_eval_start_loss')
plt.xlabel('Epoch')
plt.legend()
plt.subplot(1, 4, 2)
plt.plot(pretrained_train_end_loss, 'b--', label='pretrained_train_end_loss')
plt.plot(pretrained_eval_end_loss, 'r-', label='pretrained_eval_end_loss')
plt.xlabel('Epoch')
plt.legend()
plt.subplot(1, 4, 3)
plt.plot(pretrained_train_start_acc, 'b--', label='pretrained_train_start_acc')
plt.plot(pretrained_eval_start_acc, 'r-', label='pretrained_eval_start_acc')
plt.xlabel('Epoch')
plt.legend()
plt.subplot(1, 4, 4)
plt.plot(pretrained_train_end_acc, 'b--', label='pretrained_train_end_acc')
plt.plot(pretrained_eval_end_acc, 'r-', label='pretrained_eval_end_acc')
plt.xlabel('Epoch')
plt.legend()
plt.show()<jupyter_output><empty_output><jupyter_text>
### Prediction test with the trained model
<jupyter_code>dev_json = os.path.join(data_dir, "korquad_dev.json")
with open(dev_json) as f:
fail_cnt = 0
for i, line in enumerate(f):
data = json.loads(line)
question = vocab.decode_pieces(data['question'])
context = vocab.decode_pieces(data['context'])
answer = data['answer']
answer_predict = do_predict(model, question, context)
if answer in answer_predict:
print(i)
print("질문 : ", question)
print("지문 : ", context)
print("정답 : ", answer)
print("예측 : ", answer_predict, "\n")
else:
fail_cnt += 1
print('예측 실패!', "\n")
if 99 <= i:
pretrained_sample_test_acc = (100 - fail_cnt) / 100
break
# 학습된 모델로 100 개의 샘플 테스트 예측한 것의 accuracy
print(pretrained_sample_test_acc)<jupyter_output>0.18
Source: doutoury/bert_qnd, file /bert_qna.ipynb (no license)
<jupyter_start><jupyter_text># Preprocess Docs<jupyter_code>from preprocesing import process_data_files
process_data_files.prep_anorexia_data(man_dir+"original_chunks/", man_dir+"train_golden_truth.txt",
dest_dir+"prep_chunks/")
man_dir="D:/corpus/DepresionEriskCollections/2017/test/"
dest_dir="D:/corpus/DepresionEriskCollections/2017V2/test/"
process_data_files.prep_anorexia_data(man_dir+"original_chunks/", man_dir+"test_golden_truth.txt",
dest_dir+"prep_chunks/")
man_dir="D:/corpus/DepresionEriskCollections/2018/"
dest_dir="D:/corpus/DepresionEriskCollections/2018V2/"
process_data_files.prep_anorexia_data(man_dir+"original_prep_chunks/", man_dir+"test_golden_truth.txt",
dest_dir+"prep_chunks/")<jupyter_output><empty_output><jupyter_text># Read training docs<jupyter_code>from preprocesing.load_datasets import Dataset
data= Dataset(key="erisk18_dev", chunking=False, remove_end=True)<jupyter_output><empty_output><jupyter_text>## Positive docs<jupyter_code>pos_docs=data.get_dataset(folder_name="prep_chunks", truth_name="train_golden_truth.txt",
partition="training", chunks=False)
len(pos_docs[0])
from classifier import explore_data
import pandas as pd
df = pd.DataFrame(pos_docs[0],index=[i for i in range (len(pos_docs[0]))], columns=["doc"])
df["truth"]=pos_docs[1]
pos_docs_1=list (df[df["truth"]==1]["doc"])
len(pos_docs_1)
explore_data.plot_sample_length_distribution(pos_docs_1)
explore_data.plot_frequency_distribution_of_ngrams(pos_docs_1, (1,1), 20)<jupyter_output><empty_output><jupyter_text>## Negative docs<jupyter_code>pos_docs_0=list (df[df["truth"]==0]["doc"])
explore_data.plot_sample_length_distribution(pos_docs_0)
explore_data.plot_frequency_distribution_of_ngrams(pos_docs_0, (1,1), 20)<jupyter_output><empty_output><jupyter_text>## Both classes<jupyter_code>explore_data.plot_class_distribution(pos_docs[1])
explore_data.plot_sample_length_distribution(pos_docs[0])
explore_data.plot_frequency_distribution_of_ngrams(pos_docs[0], (1,1), 20)
explore_data.plot_frequency_distribution_of_ngrams(pos_docs[0], (2,2), 20)
explore_data.plot_frequency_distribution_of_ngrams(pos_docs[0], (3,3), 20)<jupyter_output><empty_output><jupyter_text># Classification and information gain<jupyter_code>from classifier.SVM_Text import SVM_text
from classifier.FeactureExtraction import feature_extraction
svm_obj=SVM_text(pos_docs[0], pos_docs[1], weighted=True)<jupyter_output><empty_output><jupyter_text>## Load the test set<jupyter_code>svm_obj.extract_features(pos_docs[0], idf=True, stop_words="english", norm="l2")
svm_obj.train_and_test(pos_docs[1], p_label=1)
svm_obj.ft.get_chi_2(pos_docs[1], 100)<jupyter_output><empty_output><jupyter_text>## Validation<jupyter_code>val=data.get_dataset(folder_name="prep_chunks", truth_name="test_golden_truth.txt",
partition="test", chunks=False)
len(val[1])
svm_obj.extract_features(val[0], idf=True, stop_words="english", norm="l2")
svm_obj.train_and_test(val[1], p_label=1)<jupyter_output>Confusion matrix: [[328 21]
[ 23 29]]
Scores: [136467, 0.8902743142144638, 0.58, 0.5576923076923077, 0.5686274509803922]
<jupyter_text># Real test<jupyter_code>data2= Dataset(key="erisk18_test", chunking=False, remove_end=True)
test=data2.get_dataset(folder_name="prep_chunks", truth_name="test_golden_truth.txt",
partition="test", chunks=False)
svm_obj.extract_features(test[0], idf=True, stop_words="english", norm="l2")
svm_obj.train_and_test(test[1], p_label=1)<jupyter_output>Confusion matrix: [[707 34]
[ 37 42]]
Scores: [136467, 0.9134146341463415, 0.5526315789473685, 0.5316455696202531, 0.5419354838709678]
Source: v1ktop/data_augmentation_for_author_profiling, file /notebooks/preprocesamiento_depresion.ipynb (no license)
<jupyter_start><jupyter_text>Let's get coordinates!
<jupyter_code>!wget -q -O 'Geospatial_Coordinates.csv' http://cocl.us/Geospatial_data
print('Data downloaded!')
df_geo_coor = pd.read_csv("./Geospatial_Coordinates.csv")
df_geo_coor.head()
df_toronto = pd.merge(df, df_geo_coor, how='left', left_on = 'PostalCode', right_on = 'Postal Code')
df_toronto.drop("Postal Code", axis=1, inplace=True)
df_toronto.head()
address = "Toronto, ON"
geolocator = Nominatim(user_agent="toronto_explorer")
location = geolocator.geocode(address)
latitude = location.latitude
longitude = location.longitude
print('The geograpical coordinate of Toronto city are {}, {}.'.format(latitude, longitude))<jupyter_output>The geograpical coordinate of Toronto city are 43.6534817, -79.3839347.
<jupyter_text>Let's create the DataFrame!<jupyter_code>df_toronto_denc = df_toronto[df_toronto['Borough'].str.contains("Toronto")].reset_index(drop=True)
df_toronto_denc.head()<jupyter_output><empty_output>
Source: savana2019x/Coursera_Capstone, file /Clustering Toronto.ipynb (no license)
<jupyter_start><jupyter_text>### Numpy learning from various tutorials
First tutorial: http://cs231n.github.io/python-numpy-tutorial/#numpy
1. Array Creation<jupyter_code>import numpy as np
a = np.array([1, 2, 3])
print(type(a))
print(a.shape)
b = np.zeros((3,4,5))
print(b)
c = np.ones((3,4))
print(c)
d = np.full((2,2), 3)
print(d)
e = np.eye(4)
print(e)
f= np.random.rand(3)
print(f)<jupyter_output>[0.21019164 0.90557905 0.71941015]
<jupyter_text>2. Array Indexing<jupyter_code>np.random.seed(19)
arr = np.random.rand(3,3)
print(arr)
print(arr[:2])
print(arr[:2,:1])<jupyter_output>[[0.0975336 ]
[0.13813169]]
<jupyter_text>Skipping integer array indexing from the tutorial.
The basic idea from indexing is:
1. we can get single elements
2. we can get sub-arrays
- through slicing
- through integer arraysSkipping arithmetic from the tutuorial
Skipping broadcasting too for now - will do it later in this tutuorial<jupyter_code>i = np.random.random((3,1))
print(i.shape)
print(i)
z = np.array([2,3]).reshape(2,1)
z
x = np.array([5,6]).reshape(2,1)
x
np.dot(x.T,z).shape<jupyter_output><empty_output>
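Since broadcasting is deferred above, here is a small added illustration (not from the original tutorial) of what it does:

```python
# A (3,1) column and a (1,4) row are broadcast to a common (3,4) shape
# before the elementwise addition is applied.
col = np.arange(3).reshape(3, 1)   # shape (3, 1)
row = np.arange(4).reshape(1, 4)   # shape (1, 4)
print((col + row).shape)           # (3, 4)
print(col + row)
```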
Source: ssinghaldev/data_science, file /6.86x/tutorials/numpy.ipynb (no license)
<jupyter_start><jupyter_text># [【SOTA】マイナビ × SIGNATE Student Cup 2019: 賃貸物件の家賃予測](https://signate.jp/competitions/264)## 1. データ読み込み<jupyter_code>import pandas as pd
import numpy as np
import pathlib
import os
# 学習データ、テストデータの読み込み
train_path = pathlib.Path("./DATA/train.csv")
test_path = pathlib.Path("./DATA/test.csv")
train_data = pd.read_csv(train_path)
test_data = pd.read_csv(test_path)
test_data.head()<jupyter_output><empty_output><jupyter_text>## 2. 前処理### 2.1. データ選択<jupyter_code>train_data_1 = train_data[["id", "アクセス", "所在地", "賃料", "間取り", "築年数", "面積", "所在階", "建物構造"]]
test_data_1 = test_data[["id", "アクセス", "所在地", "間取り", "築年数", "面積", "所在階", "建物構造"]]<jupyter_output><empty_output><jupyter_text>### 2.2. 間取りの数値化<jupyter_code># 間取りにlabel encodingを適用
from sklearn.preprocessing import LabelEncoder
le = LabelEncoder()
le.fit(pd.concat([train_data_1["間取り"],test_data_1["間取り"]]))
train_data_1["間取りID"] = le.transform(train_data_1["間取り"])
test_data_1["間取りID"] = le.transform(test_data_1["間取り"])<jupyter_output>C:\ProgramData\Anaconda3\lib\site-packages\ipykernel_launcher.py:7: SettingWithCopyWarning:
A value is trying to be set on a copy of a slice from a DataFrame.
Try using .loc[row_indexer,col_indexer] = value instead
See the caveats in the documentation: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy
import sys
C:\ProgramData\Anaconda3\lib\site-packages\ipykernel_launcher.py:8: SettingWithCopyWarning:
A value is trying to be set on a copy of a slice from a DataFrame.
Try using .loc[row_indexer,col_indexer] = value instead
See the caveats in the documentation: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy
<jupyter_text>### 2.3. 面積の数値化<jupyter_code># 面積のカラムタイトルを面積[m2]に置換
train_data_1 = train_data_1.rename(columns={"面積": "面積m2"})
test_data_1 = test_data_1.rename(columns={"面積": "面積m2"})
# カラムタイトル面積[m2]の要素からm2を削除
train_data_1['面積m2'] = train_data_1['面積m2'].str.replace('m2', '').astype(float)
test_data_1['面積m2'] = test_data_1['面積m2'].str.replace('m2', '').astype(float)<jupyter_output><empty_output><jupyter_text>### 2.4. 所在階の数値化<jupyter_code># 所在階の中身を"/"で2つの列に分割
train_data_1 = pd.concat([train_data_1, train_data_1['所在階'].str.split('/', expand=True)], axis=1)
test_data_1 = pd.concat([test_data_1, test_data_1['所在階'].str.split('/', expand=True)], axis=1)
# 分割した所在階のカラム名変更 0:所在、1:階層
train_data_1 = train_data_1.rename(columns={0:"所在", 1:"階層"})
test_data_1 = test_data_1.rename(columns={0:"所在", 1:"階層"})
# 所在と階層の要素を整形
train_data_1['所在'] = train_data_1['所在'].str.replace('階', '')
train_data_1['所在'] = train_data_1['所在'].str.replace('建', '')
train_data_1['所在'] = train_data_1['所在'].str.replace('地下', '-')
train_data_1['所在'] = train_data_1['所在'].str.replace('\(.*\)', '', regex=True)
train_data_1['階層'] = train_data_1['階層'].str.replace('階建', '')
train_data_1['階層'] = train_data_1['階層'].str.replace('\(.*\)', '', regex=True)
train_data_1['所在'] = train_data_1['所在'].replace('', np.nan)
test_data_1['所在'] = test_data_1['所在'].str.replace('階', '')
test_data_1['所在'] = test_data_1['所在'].str.replace('建', '')
test_data_1['所在'] = test_data_1['所在'].str.replace('地下', '-')
test_data_1['所在'] = test_data_1['所在'].str.replace('\(.*\)', '', regex=True)
test_data_1['階層'] = test_data_1['階層'].str.replace('階建', '')
test_data_1['階層'] = test_data_1['階層'].str.replace('\(.*\)', '', regex=True)
test_data_1['所在'] = test_data_1['所在'].replace('', np.nan)
# # "階層"がNoneの箇所を"所在"の値で埋める
# train_data_1['階層'].fillna(train_data_1['所在'], inplace=True)
# test_data_1['階層'].fillna(test_data_1['所在'], inplace=True)
# 所在と階層の要素をfloat型に変換する
train_data_1['所在'] = train_data_1['所在'].astype(float)
train_data_1['階層'] = train_data_1['階層'].astype(float)
test_data_1['所在'] = test_data_1['所在'].astype(float)
test_data_1['階層'] = test_data_1['階層'].astype(float)
# 所在階のカラムを削除
train_data_1 = train_data_1.drop('所在階', axis=1)
test_data_1 = test_data_1.drop('所在階', axis=1)
# # 所在も階層も空欄のデータは間取りと面積の近いデータで埋める
# print(test_data_1[(test_data_1["間取りID"] == 21) & (test_data_1["面積m2"] > 90) & (test_data_1["面積m2"] < 95)].mean())
# test_data_1.loc[test_data_1["id"]==40675, "所在"] = float(6)
# test_data_1.loc[test_data_1["id"]==40675, "階層"] = float(9)<jupyter_output><empty_output><jupyter_text>### 2.5. 築年数の数値化<jupyter_code># 新築の場合は全て0にする
train_data_1.loc[train_data_1["築年数"]=="新築", "築年数"] = float(0)
test_data_1.loc[test_data_1["築年数"]=="新築", "築年数"] = float(0)
# 築年数を数値に変換する
train_data_1 = pd.concat([train_data_1, train_data_1['築年数'].str.split('年', expand=True)], axis=1)
test_data_1 = pd.concat([test_data_1, test_data_1['築年数'].str.split('年', expand=True)], axis=1)
# ヶ月を消す
train_data_1[1] = train_data_1[1].str.replace('ヶ月', '')
test_data_1[1] = test_data_1[1].str.replace('ヶ月', '')
# 築年数をfloat変換
train_data_1["築年数"] = train_data_1[0].astype(float) + (train_data_1[1].astype(float) / float(10))
test_data_1["築年数"] = test_data_1[0].astype(float) + (test_data_1[1].astype(float) / float(10))
test_data_1.head()
print(test_data_1[test_data_1["築年数"].isnull()])
# 0と1の列を消す
train_data_1 = train_data_1.drop(0, axis=1)
train_data_1 = train_data_1.drop(1, axis=1)
test_data_1 = test_data_1.drop(0, axis=1)
test_data_1 = test_data_1.drop(1, axis=1)<jupyter_output><empty_output><jupyter_text>### 2.6. 緯度経度情報の追加#### [このサイト](https://ktgis.net/gcode/geocoding.html)を利用する<jupyter_code># IDと住所をファイル出力
train_coordinate = train_data[["id", "所在地"]]
test_coordinate = test_data[["id", "所在地"]]
train_coordinate.to_csv("train_coordinate.csv", header=False, index=False)
test_coordinate.to_csv("test_coordinate.csv", header=False, index=False)
# 緯度、経度情報の読み込み(id, 所在地, 経度, 緯度)
train_coordinate_addvalue = pd.read_excel("train_coordinate_addvalue.xlsx")
test_coordinate_addvalue = pd.read_excel("test_coordinate_addvalue.xlsx")
train_data_1 = pd.merge(train_data_1, train_coordinate_addvalue[["id", "経度", "緯度"]], on='id')
test_data_1 = pd.merge(test_data_1, test_coordinate_addvalue[["id", "経度", "緯度"]], on='id')<jupyter_output><empty_output><jupyter_text>### 2.7. 部屋数の追加<jupyter_code># 部屋数のマージ
number_of_rooms = pd.read_excel("number_of_rooms.xlsx")
train_data_1 = pd.merge(train_data_1, number_of_rooms[["間取り", "部屋数"]], on='間取り')
test_data_1 = pd.merge(test_data_1, number_of_rooms[["間取り", "部屋数"]], on='間取り')
# インデックスの振り直し
train_data_1 = train_data_1.sort_values("id")
train_data_1 = train_data_1.reset_index(drop=True)
test_data_1 = test_data_1.sort_values("id")
test_data_1 = test_data_1.reset_index(drop=True)<jupyter_output><empty_output><jupyter_text>### 2.8. 1部屋当たりの面積追加<jupyter_code># 面積と部屋数から1部屋当たりの面積を算出
train_data_1["1部屋当たり面積m2"] = (train_data_1["面積m2"] / train_data_1["部屋数"]).astype(float)
test_data_1["1部屋当たり面積m2"] = (test_data_1["面積m2"] / test_data_1["部屋数"]).astype(float)
# 間取りと間取りIDは削除する
train_data_1 = train_data_1.drop('間取り', axis=1)
test_data_1 = test_data_1.drop('間取り', axis=1)
# train_data_1 = train_data_1.drop('間取りID', axis=1)
# test_data_1 = test_data_1.drop('間取りID', axis=1)<jupyter_output><empty_output><jupyter_text>#### 2.9. 建物の高さ率を追加(高さ率=所在/階層)<jupyter_code># 高さ率の算出
train_data_1["高さ率"] = (train_data_1["所在"] / train_data_1["階層"]).astype(float)
test_data_1["高さ率"] = (test_data_1["所在"] / test_data_1["階層"]).astype(float)<jupyter_output><empty_output><jupyter_text>### 2.10. 建物構造の数値化<jupyter_code>train_data_1["建物構造"].value_counts()
# 建物構造にlabel encodingを適用
from sklearn.preprocessing import LabelEncoder
le = LabelEncoder()
le.fit(pd.concat([train_data_1["建物構造"],test_data_1["建物構造"]]))
train_data_1["建物構造ID"] = le.transform(train_data_1["建物構造"])
test_data_1["建物構造ID"] = le.transform(test_data_1["建物構造"])
# 不要になった建物構造カラムを削除する
train_data_1 = train_data_1.drop('建物構造', axis=1)
test_data_1 = test_data_1.drop('建物構造', axis=1)<jupyter_output><empty_output><jupyter_text>### 2.11. 新規カラムに「賃料/面積」を追加<jupyter_code>train_data_1["賃料/面積"] = train_data_1["賃料"] / train_data_1["面積m2"]<jupyter_output><empty_output><jupyter_text>### 2.12. 東京23区の地域ごとに、住宅地の平均地価情報を追加<jupyter_code># 区の情報の確認(23区だけだった)
# sample = train_data["所在地"].str.extract("東京都(.+区)", expand=True)
# sample.value_counts()
print(len(train_data_1))
print(len(test_data_1))
# 訓練データとテストデータに、エリア情報を追加
train_data_1["エリア"] = ""
test_data_1["エリア"] = ""
train_data_1["エリア"] = train_data_1["所在地"].str.extract("東京都((.+区.+?)[0-9]|(.+区.+?)[0-9]|(.+区.+))", expand=True)
test_data_1["エリア"] = test_data_1["所在地"].str.extract("東京都((.+区.+?)[0-9]|(.+区.+?)[0-9]|(.+区.+))", expand=True)
print(len(train_data_1))
print(len(test_data_1))
# 一部エリア情報に数値が残ってしまうので、消す
train_data_1["エリア"] = train_data_1["エリア"].str.replace("[0-9]|[0-9]", "")
train_data_1["エリア"] = train_data_1["エリア"].str.replace("一丁目", "")
train_data_1["エリア"] = train_data_1["エリア"].str.replace("-", "")
test_data_1["エリア"] = test_data_1["エリア"].str.replace("[0-9]|[0-9]", "")
test_data_1["エリア"] = test_data_1["エリア"].str.replace("一丁目", "")
test_data_1["エリア"] = test_data_1["エリア"].str.replace("二丁目", "")
test_data_1["エリア"] = test_data_1["エリア"].str.replace("四丁目", "")
print(len(train_data_1))
print(len(test_data_1))
# 坪単価情報の読込~結合
wards_value = pd.read_excel("23区_地域毎_坪単価.xlsx")
train_data_1 = pd.merge(train_data_1, wards_value, on="エリア", how="left")
test_data_1 = pd.merge(test_data_1, wards_value, on="エリア", how="left")
train_data_1 = train_data_1.sort_values("id")
test_data_1 = test_data_1.sort_values("id")
train_data_1 = train_data_1.drop("Unnamed: 4", axis=1)
test_data_1 = test_data_1.drop("Unnamed: 4", axis=1)
# 地価の抜けデータを平均値で補間
# 江東区
train_data_1.loc[train_data_1["id"]==2767, "区"] = "江東区"
train_data_1.loc[train_data_1["id"]==7559, "区"] = "江東区"
train_data_1.loc[train_data_1["id"]==2767, "平均坪単価(万円)"] = 206.6
train_data_1.loc[train_data_1["id"]==7559, "平均坪単価(万円)"] = 206.6
# 港区
train_data_1.loc[train_data_1["id"]==17654, "区"] = "港区"
train_data_1.loc[train_data_1["id"]==17654, "平均坪単価(万円)"] = 1321.4
print(len(train_data_1))
print(len(test_data_1))
print(train_data_1[train_data_1["平均坪単価(万円)"].isnull()])
print(test_data_1[test_data_1["平均坪単価(万円)"].isnull()])<jupyter_output>Empty DataFrame
Columns: [id, アクセス, 所在地, 賃料, 築年数, 面積m2, 間取りID, 所在, 階層, 経度, 緯度, 部屋数, 1部屋当たり面積m2, 高さ率, 建物構造ID, 賃料/面積, エリア, 区, 地名, 平均坪単価(万円)]
Index: []
Empty DataFrame
Columns: [id, アクセス, 所在地, 築年数, 面積m2, 間取りID, 所在, 階層, 経度, 緯度, 部屋数, 1部屋当たり面積m2, 高さ率, 建物構造ID, エリア, 区, 地名, 平均坪単価(万円)]
Index: []
<jupyter_text>### 2.13. 六本木駅からの距離情報を追加<jupyter_code># 六本木駅からの距離を追加
# 六本木駅 緯度: 35.662725 経度: 139.731217
from geopy import Point
from geopy.distance import geodesic
from geopy.distance import distance
train_data_1["Roppongi_longtitude"] = 139.731217
train_data_1["Roppongi_latitude"] = 35.662725
train_data_1['point'] = train_data_1.apply(lambda row: Point(latitude=row['緯度'], longitude=row['経度']), axis=1)
train_data_1['point_next'] = train_data_1.apply(lambda row: Point(latitude=row['Roppongi_latitude'], longitude=row['Roppongi_longtitude']), axis=1)
test_data_1["Roppongi_longtitude"] = 139.731217
test_data_1["Roppongi_latitude"] = 35.662725
test_data_1['point'] = test_data_1.apply(lambda row: Point(latitude=row['緯度'], longitude=row['経度']), axis=1)
test_data_1['point_next'] = test_data_1.apply(lambda row: Point(latitude=row['Roppongi_latitude'], longitude=row['Roppongi_longtitude']), axis=1)
train_data_1['distance_km'] = train_data_1.apply(lambda row: distance(row['point'], row['point_next']).km if row['point_next'] is not None else float('nan'), axis=1)
test_data_1['distance_km'] = test_data_1.apply(lambda row: distance(row['point'], row['point_next']).km if row['point_next'] is not None else float('nan'), axis=1)
# 距離値を逆数にする Reciprocal
train_data_1["Recciprocal_distance_km"] = (1 / train_data_1["distance_km"]).astype(float)
test_data_1["Recciprocal_distance_km"] = (1 / test_data_1["distance_km"]).astype(float)
# 不要なカラムを削除する
train_data_1 = train_data_1.drop(["Roppongi_longtitude", "Roppongi_latitude", "point", "point_next", "distance_km"], axis=1)
test_data_1 = test_data_1.drop(["Roppongi_longtitude", "Roppongi_latitude", "point", "point_next", "distance_km"], axis=1)<jupyter_output><empty_output><jupyter_text>### 2.14. 欠損値を区の平均値で埋める<jupyter_code>train_data_1["築年数"] = train_data_1.groupby('区')["築年数"].transform(lambda x:x.fillna(x.mean()))
train_data_1["所在"] = train_data_1.groupby('区')["所在"].transform(lambda x:x.fillna(x.mean()))
train_data_1["階層"] = train_data_1.groupby('区')["階層"].transform(lambda x:x.fillna(x.mean()))
test_data_1["築年数"] = test_data_1.groupby('区')["築年数"].transform(lambda x:x.fillna(x.mean()))
test_data_1["所在"] = test_data_1.groupby('区')["所在"].transform(lambda x:x.fillna(x.mean()))
test_data_1["階層"] = test_data_1.groupby('区')["階層"].transform(lambda x:x.fillna(x.mean()))
# 高さ率の更新
train_data_1["高さ率"] = (train_data_1["所在"] / train_data_1["階層"]).astype(float)
test_data_1["高さ率"] = (test_data_1["所在"] / test_data_1["階層"]).astype(float)<jupyter_output><empty_output><jupyter_text>### 2.15. 区IDと区ごとの統計情報の追加<jupyter_code># 区にlabel encodingを適用
from sklearn.preprocessing import LabelEncoder
le = LabelEncoder()
le.fit(pd.concat([train_data_1["区"],test_data_1["区"]]))
train_data_1["区ID"] = le.transform(train_data_1["区"])
test_data_1["区ID"] = le.transform(test_data_1["区"])
# 区ごとの統計情報の追加
# 例)新宿区のとある物件の築年数の統計値:(【1】-【2】)*【1】/【2】
# 【1】該当物件の築年数、【2】新宿区の物件の平均築年数:=(該当物件の築年数-)*
# train_data_1 = train_data_1.assign(target_enc=train_data_1.groupby("区ID")["賃料/面積"].transform("mean").copy())<jupyter_output><empty_output><jupyter_text>### 2.16. 最寄りの路線ID、駅IDと、そこからの徒歩分数を追加<jupyter_code># 徒歩分数の整理のために初回のみ出力
# test_data_1["路線情報"] = test_data_1["アクセス"].str.extract("(\d{1,2}分)", expand=True)
# train_data_1 = pd.concat([train_data_1, train_data_1['アクセス'].str.split('\t', expand=True)], axis=1).drop('アクセス', axis=1)
# test_data_1 = pd.concat([test_data_1, test_data_1['アクセス'].str.split('\t', expand=True)], axis=1).drop('アクセス', axis=1)
# train_access_1 = train_data_1[[0, 1, 2]]
# train_access_2 = train_data_1[[4, 5, 6]]
# train_access_3 = train_data_1[[8, 9, 10]]
# test_access_1 = test_data_1[[0, 1, 2]]
# test_access_2 = test_data_1[[4, 5, 6]]
# test_access_3 = test_data_1[[8, 9, 10]]
# train_access_1.to_excel("train_access_1.xlsx", header=True, index=False)
# train_access_2.to_excel("train_access_2.xlsx", header=True, index=False)
# train_access_3.to_excel("train_access_3.xlsx", header=True, index=False)
# test_access_1.to_excel("test_access_1.xlsx", header=True, index=False)
# test_access_2.to_excel("test_access_2.xlsx", header=True, index=False)
# test_access_3.to_excel("test_access_3.xlsx", header=True, index=False)
# 最寄り駅データの読み込み
train_access_result = pd.read_excel("./アクセス/train_access_result.xlsx")
test_access_result = pd.read_excel("./アクセス/test_access_result.xlsx")
# データ結合
train_data_1 = pd.concat([train_data_1, train_access_result], axis=1)
test_data_1 = pd.concat([test_data_1, test_access_result], axis=1)
# 徒歩分数とfloat型に変換する
train_data_1["徒歩分数"] = (train_data_1["徒歩分数"]).astype(float)
test_data_1["徒歩分数"] = (test_data_1["徒歩分数"]).astype(float)
# 路線名にlabel encodingを適用
le = LabelEncoder()
le.fit(pd.concat([train_data_1["路線名"],test_data_1["路線名"]]))
train_data_1["路線ID"] = le.transform(train_data_1["路線名"])
test_data_1["路線ID"] = le.transform(test_data_1["路線名"])
# 駅名にlabel encodingを適用
le = LabelEncoder()
le.fit(pd.concat([train_data_1["駅名"],test_data_1["駅名"]]))
train_data_1["駅ID"] = le.transform(train_data_1["駅名"])
test_data_1["駅ID"] = le.transform(test_data_1["駅名"])<jupyter_output><empty_output><jupyter_text>### 2.17. 重複物件の抽出・削除<jupyter_code># 重複確認用のデータ出力
# train_duplicate = pd.concat([train_data[["id", "賃料", "所在地"]], train_data_1[["間取りID", "面積m2", "所在", "階層"]]], axis=1)
# test_duplicate = pd.concat([test_data[["id", "所在地"]], test_data_1[["間取りID", "面積m2", "所在", "階層"]]], axis=1)
# 結果の出力
# train_duplicate.to_excel("train_duplicate.xlsx", header=True, index=False)
# test_duplicate.to_excel("test_duplicate.xlsx", header=True, index=False)
# 重複対象のデータ読み込み
train_duplicate_flag = pd.read_excel("train_duplicate_flag.xlsx")
test_duplicate_flag = pd.read_excel("test_duplicate_flag.xlsx")
# データ数のチェック
print("Train:", len(train_data_1), "-", len(train_duplicate_flag), "=", len(train_data_1)-len(train_duplicate_flag))
print("Test:", len(test_data_1), "-", len(test_duplicate_flag), "=", len(test_data_1)-len(test_duplicate_flag))
# 重複フラグを結合
train_data_1 = pd.merge(train_data_1, train_duplicate_flag, on="id")
train_data_1 = train_data_1.rename(columns={"賃料_x":"賃料"})
train_data_1 = train_data_1.drop('賃料_y', axis=1)
test_data_1 = pd.merge(test_data_1, test_duplicate_flag, on="id")
# testデータからのみ、flag=1.0の列を削除
print("【削除前】", "train:", len(train_data_1), ", ", "test:", len(test_data_1))
#train_data_1 = train_data_1.loc[train_data_1["flag"] != 1.0]
test_data_1 = test_data_1.loc[test_data_1["flag"] != 1.0]
print("【削除後】", "train:", len(train_data_1), ", ", "test:", len(test_data_1))
# 不要なカラムを削除する
train_data_1 = train_data_1.drop(["アクセス", "所在地", "エリア", "区", "No.", "地名", "路線名", "駅名"], axis=1)
test_data_1 = test_data_1.drop(["アクセス", "所在地", "エリア", "区", "No.", "地名", "路線名", "駅名"], axis=1)<jupyter_output><empty_output><jupyter_text>## 3. データチェック<jupyter_code>train_data_1.isnull().sum()
test_data_1.isnull().sum()
len(train_data_1)
len(train_data)
len(test_data_1)
len(test_data)
train_data_1.head()
test_data_1.head()<jupyter_output><empty_output><jupyter_text>## 4.学習 <jupyter_code># IDとflagの削除
train_data_1_no_ID = train_data_1.drop(["id", "flag"], axis=1)
test_data_1_no_ID = test_data_1.drop(["id", "flag"], axis=1)
# 特徴データと目的変数の設定
# train_x = train_data_1_no_ID.drop(["賃料", "賃料/面積"], axis=1)
train_x = train_data_1_no_ID
train_y = train_data_1_no_ID["賃料/面積"]
test_x = test_data_1_no_ID
import xgboost as xgb
from xgboost import XGBClassifier
from sklearn.metrics import log_loss, accuracy_score
from sklearn.model_selection import KFold
import pandas as pd
from sklearn.decomposition import PCA
import math
from sklearn import model_selection
from sklearn.ensemble import RandomForestRegressor
# 2つの値から角度を求めて、配列を返す関数
def make_radian_row(pca_result):
rad = []
for r in pca_result:
rad.append(math.atan(r[0]/r[1]))
return rad
scores = []
# GBDT用のハイパーパラメータ
params = {"objective": "reg:squarederror", "sileng":1, "random_state":71, "eval_metric":"rmse"}
num_round = 50
# クロスバリデーション
kf = KFold(n_splits=4, shuffle=True, random_state=71)
for tr_idx, va_idx, in kf.split(train_x):
tr_x, va_x = train_x.iloc[tr_idx], train_x.iloc[va_idx]
tr_y, va_y = train_y.iloc[tr_idx], train_y.iloc[va_idx]
# 訓練データを'区ID'でgroupbyを行い、'賃料/面積'の平均を求める。
tr_x['id_target_enc'] = tr_x.groupby('区ID')['賃料/面積'].transform('mean')
# 検証データを使用しないで'区ID'カラムに対する'賃料/面積'の平均を求める
va_x['id_target_enc'] = va_x['区ID'].map(tr_x.groupby('区ID')['賃料/面積'].mean())
test_x['id_target_enc'] = test_x['区ID'].map(tr_x.groupby('区ID')['賃料/面積'].mean())
# 検証データの行にtarget_encodingを入れる
# train_x.loc[train_x.index[va_idx], 'item_target_enc'] = va_x['item_target_enc']
# tr_x, va_xより、"賃料", "賃料/面積"を削除
tr_x = tr_x.drop(["賃料", "賃料/面積"], axis=1)
va_x = va_x.drop(["賃料", "賃料/面積"], axis=1)
# 主成分分析
pca = PCA(n_components=2)
pca.fit(tr_x)
# 角度データの追加
tr_x["rad"] = make_radian_row(pca.transform(tr_x))
va_x["rad"] = make_radian_row(pca.transform(va_x))
test_x["rad"] = make_radian_row(pca.transform(test_x))
# GBDTで学習実行
dtrain = xgb.DMatrix(tr_x, label=tr_y) # enable_categorical=True
dvalid = xgb.DMatrix(va_x, label=va_y)
watchlist = [(dtrain, "train"), (dvalid, "eval")]
model = xgb.train(params, dtrain, num_round, evals=watchlist)
tr_x = tr_x.drop("rad", axis=1)
va_x = va_x.drop("rad", axis=1)
# import xgboost as xgb
# from xgboost import XGBClassifier
# from sklearn.metrics import log_loss, accuracy_score
# from sklearn.model_selection import KFold
# scores = []
# # GBDT用のハイパーパラメータ
# params = {"objective": "reg:squarederror", "sileng":1, "random_state":71, "eval_metric":"rmse"}
# num_round = 50
# # クロスバリデーション
# kf = KFold(n_splits=4, shuffle=True, random_state=71)
# for tr_idx, va_idx, in kf.split(train_x):
# tr_x, va_x = train_x.iloc[tr_idx], train_x.iloc[va_idx]
# tr_y, va_y = train_y.iloc[tr_idx], train_y.iloc[va_idx]
# # GBDTで学習実行
# dtrain = xgb.DMatrix(tr_x, label=tr_y) # enable_categorical=True
# dvalid = xgb.DMatrix(va_x, label=va_y)
# dtest = xgb.DMatrix(test_x)
# watchlist = [(dtrain, "train"), (dvalid, "eval")]
# model = xgb.train(params, dtrain, num_round, evals=watchlist)<jupyter_output><empty_output><jupyter_text>## 5. 検証<jupyter_code># 特徴量の予測結果への貢献度(Fスコア)を可視化
import matplotlib.pyplot as plt
import matplotlib
import numpy as np
font = {'family' : 'Meiryo'} # matplotlibのデフォルトフォントをTakaoGothicに設定
plt.rc('font', **font)
xgb.plot_importance(model, importance_type = "gain")
plt.show()
# ヒストグラムと散布図の行列を作成
from pandas.plotting import scatter_matrix
x = scatter_matrix(train_data_1_no_ID, alpha=1, figsize=(20, 10), diagonal='hist')
# PCA分析結果を可視化する
transformed = pca.fit_transform(tr_x)
# 主成分をプロットする
plt.subplot(1, 2, 2)
plt.scatter(transformed[:, 0], transformed[:, 1])
plt.title('principal component')
plt.xlabel('pc1')
plt.ylabel('pc2')
# 主成分の次元ごとの寄与率を出力する
print(pca.explained_variance_ratio_)
# グラフを表示する
plt.show()<jupyter_output>[0.98837778 0.01086506]
<jupyter_text>## 6. 予測<jupyter_code>tr_x.head()
test_x.head()
# テストデータでの予測
# 角度データの追加
test_x["rad"] = make_radian_row(pca.transform(test_x))
# # PCAで出力した角度情報を元データに結合
# test_x = pd.concat([test_x, test_x_pca["rad"]], axis=1)
dtest = xgb.DMatrix(test_x)
pred = model.predict(dtest)
pred
len(pred)
len(test_data_1)<jupyter_output><empty_output><jupyter_text>## 7. データ出力<jupyter_code># 予測結果の結合
test_data_1["賃料/面積"] = pred
# 予測結果から賃料を算出し、int型に変換
test_data_1["賃料"] = (test_data_1["面積m2"] * test_data_1["賃料/面積"]).astype(int)
result = test_data_1[["id", "賃料"]]
result
len(result)
# 除外していた重複idの賃料を戻す
test_duplicate_rentvalue = pd.read_excel("test_duplicate_rentvalue.xlsx")
result = pd.concat([result, test_duplicate_rentvalue])
result = result.sort_values("id")
len(result)
result
result.to_csv("result.csv", header=False, index=False)<jupyter_output><empty_output>
Source: Kosuke-Yanai/KY, file /00_Python/SIGNATE/20210818_【SOTA】マイナビ × SIGNATE Student Cup 2019 賃貸物件の家賃予測/.ipynb_checkpoints/RentForcast_210926_01-checkpoint.ipynb (no license)
<jupyter_start><jupyter_text># Learning Together - Data Analysis$e^{i\pi} + 1 = 0$<jupyter_code>import numpy as np
import pandas as pd
from pandas import Series, DataFrame
url = 'http://archive.ics.uci.edu/ml/machine-learning-databases/00484/'
dframe_trip = pd.read_csv('tripadvisor_review.csv', sep = ',')
dframe_trip.head(10)
dframe_trip.tail(10)
dframe_trip['Category 7'].mean()
def max_to_min(arr):
return arr.max() - arr.min()
trip = dframe_trip.groupby('Category 10')
trip.describe()
trip.agg(max_to_min) # Aggregation function that takes each column's max and subtracts that column's min
trip.agg('mean')
dframe_trip['Cat 10/4 Ratio'] = dframe_trip['Category 10'] / dframe_trip['Category 4']
dframe_trip.head()
#instead of groupby we can use pivot table
dframe_trip.pivot_table(index=['Category 10'])
%matplotlib inline
dframe_trip.plot(kind='scatter', x='Category 7', y='Category 10')<jupyter_output><empty_output>
|
permissive
|
/.ipynb_checkpoints/Aggregation-checkpoint.ipynb
|
buildGather/learnDataAnalysis
| 1 |
<jupyter_start><jupyter_text>---
_You are currently looking at **version 1.0** of this notebook. To download notebooks and datafiles, as well as get help on Jupyter notebooks in the Coursera platform, visit the [Jupyter Notebook FAQ](https://www.coursera.org/learn/python-text-mining/resources/d9pwm) course resource._
---*Note: Some of the cells in this notebook are computationally expensive. To reduce runtime, this notebook is using a subset of the data.*# Case Study: Sentiment Analysis### Data Prep<jupyter_code>import pandas as pd
import numpy as np
# Read in the data
df = pd.read_csv('Amazon_Unlocked_Mobile.csv')
# Sample the data to speed up computation
# Comment out this line to match with lecture
df = df.sample(frac=0.1, random_state=10)
df.head()
# Drop missing values
df.dropna(inplace=True)
# Remove any 'neutral' ratings equal to 3
df = df[df['Rating'] != 3]
# Encode 4s and 5s as 1 (rated positively)
# Encode 1s and 2s as 0 (rated poorly)
df['Positively Rated'] = np.where(df['Rating'] > 3, 1, 0)
df.head(10)
# Most ratings are positive
df['Positively Rated'].mean()
from sklearn.model_selection import train_test_split
# Split data into training and test sets
X_train, X_test, y_train, y_test = train_test_split(df['Reviews'],
df['Positively Rated'],
random_state=0)
print('X_train first entry:\n\n', X_train.iloc[0])
print('\n\nX_train shape: ', X_train.shape)
<jupyter_output>X_train first entry:
Everything about it is awesome!
X_train shape: (23052,)
<jupyter_text># CountVectorizer<jupyter_code>from sklearn.feature_extraction.text import CountVectorizer
# Fit the CountVectorizer to the training data
vect = CountVectorizer().fit(X_train)
vect.get_feature_names()[::2000]
len(vect.get_feature_names())
# transform the documents in the training data to a document-term matrix
X_train_vectorized = vect.transform(X_train)
X_train_vectorized
from sklearn.linear_model import LogisticRegression
# Train the model
model = LogisticRegression()
model.fit(X_train_vectorized, y_train)
from sklearn.metrics import roc_auc_score
# Predict the transformed test documents
predictions = model.predict(vect.transform(X_test))
print('AUC: ', roc_auc_score(y_test, predictions))
# get the feature names as numpy array
feature_names = np.array(vect.get_feature_names())
# Sort the coefficients from the model
sorted_coef_index = model.coef_[0].argsort()
# Find the 10 smallest and 10 largest coefficients
# The 10 largest coefficients are being indexed using [:-11:-1]
# so the list returned is in order of largest to smallest
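# Quick illustration of the [:-11:-1] indexing on a toy array: it walks backwards over the
# last 10 entries, which is why the coefficients come out largest-first.
print(np.arange(20)[:-11:-1])  # -> [19 18 17 16 15 14 13 12 11 10]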
print('Smallest Coefs:\n{}\n'.format(feature_names[sorted_coef_index[:10]]))
print('Largest Coefs: \n{}'.format(feature_names[sorted_coef_index[:-11:-1]]))<jupyter_output>Smallest Coefs:
['worst' 'terrible' 'slow' 'junk' 'waste' 'sucks' 'poor' 'useless'
'disappointed' 'horrible']
Largest Coefs:
['excelent' 'excellent' 'excelente' 'perfectly' 'love' 'perfect' 'exactly'
'great' 'best' 'awesome']
<jupyter_text># Tfidf<jupyter_code>from sklearn.feature_extraction.text import TfidfVectorizer
# Fit the TfidfVectorizer to the training data, specifying a minimum document frequency of 5
vect = TfidfVectorizer(min_df=5).fit(X_train)
len(vect.get_feature_names())
X_train_vectorized = vect.transform(X_train)
model = LogisticRegression()
model.fit(X_train_vectorized, y_train)
predictions = model.predict(vect.transform(X_test))
print('AUC: ', roc_auc_score(y_test, predictions))
feature_names = np.array(vect.get_feature_names())
sorted_tfidf_index = X_train_vectorized.max(0).toarray()[0].argsort()
print('Smallest tfidf:\n{}\n'.format(feature_names[sorted_tfidf_index[:10]]))
print('Largest tfidf: \n{}'.format(feature_names[sorted_tfidf_index[:-11:-1]]))
sorted_coef_index = model.coef_[0].argsort()
print('Smallest Coefs:\n{}\n'.format(feature_names[sorted_coef_index[:10]]))
print('Largest Coefs: \n{}'.format(feature_names[sorted_coef_index[:-11:-1]]))
# These reviews are treated the same by our current model
print(model.predict(vect.transform(['not an issue, phone is working',
'an issue, phone is not working'])))<jupyter_output>[0 0]
<jupyter_text># n-grams<jupyter_code># Fit the CountVectorizer to the training data, specifying a minimum
# document frequency of 5 and extracting 1-grams and 2-grams
vect = CountVectorizer(min_df=5, ngram_range=(1,2)).fit(X_train)
X_train_vectorized = vect.transform(X_train)
len(vect.get_feature_names())
model = LogisticRegression()
model.fit(X_train_vectorized, y_train)
predictions = model.predict(vect.transform(X_test))
print('AUC: ', roc_auc_score(y_test, predictions))
feature_names = np.array(vect.get_feature_names())
sorted_coef_index = model.coef_[0].argsort()
print('Smallest Coefs:\n{}\n'.format(feature_names[sorted_coef_index[:10]]))
print('Largest Coefs: \n{}'.format(feature_names[sorted_coef_index[:-11:-1]]))
# These reviews are now correctly identified
print(model.predict(vect.transform(['not an issue, phone is working',
'an issue, phone is not working'])))<jupyter_output>[1 0]
|
no_license
|
/CURSO4/semana3/Case+Study+-+Sentiment+Analysis.ipynb
|
sandrarairan/Ciencias-Datos-Aplicada-Python-coursera
| 4 |
<jupyter_start><jupyter_text># K Nearest Neighbors Project <jupyter_code>import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline
sns.set()<jupyter_output><empty_output><jupyter_text>## Get the Data<jupyter_code>df = pd.read_csv('KNN_Project_Data')<jupyter_output><empty_output><jupyter_text>**Check the head of the dataframe.**<jupyter_code>df.head()<jupyter_output><empty_output><jupyter_text># Exploratory Data Analysis (EDA)<jupyter_code>sns.pairplot(df, hue='TARGET CLASS', palette='coolwarm')<jupyter_output><empty_output><jupyter_text># Standardize the Variables<jupyter_code>from sklearn.preprocessing import StandardScaler
scaler = StandardScaler()
scaler.fit(df.drop('TARGET CLASS', axis=1)) # Compute the mean and std to be used for later scaling.
scaled_features = scaler.transform(df.drop('TARGET CLASS', axis=1)) # Perform standardization by centering and scaling
df_feat = pd.DataFrame(scaled_features, columns=df.columns[:-1])
df_feat.head(3)<jupyter_output><empty_output><jupyter_text># Train Test Split<jupyter_code>from sklearn.model_selection import train_test_split
X = df_feat
y = df['TARGET CLASS']
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=101)<jupyter_output><empty_output><jupyter_text># Using KNN<jupyter_code>from sklearn.neighbors import KNeighborsClassifier
knn = KNeighborsClassifier(n_neighbors=1)
knn.fit(X_train, y_train)<jupyter_output><empty_output><jupyter_text># Predictions and Evaluations<jupyter_code>pred = knn.predict(X_test)
from sklearn.metrics import classification_report, confusion_matrix
print(confusion_matrix(y_test, pred))
print(classification_report(y_test, pred))<jupyter_output> precision recall f1-score support
0 0.73 0.72 0.72 152
1 0.71 0.72 0.72 148
accuracy 0.72 300
macro avg 0.72 0.72 0.72 300
weighted avg 0.72 0.72 0.72 300
<jupyter_text># Choosing a K Value<jupyter_code># Trying results on different k values
error_rate = []
for i in range(1, 40):
knn = KNeighborsClassifier(n_neighbors=i)
knn.fit(X_train, y_train)
pred_i = knn.predict(X_test)
error_rate.append(np.mean(pred_i != y_test))
plt.figure(figsize=(10, 6))
# X-axis, Y-axis
plt.plot(range(1, 40), error_rate, color="blue", linestyle='--', marker='o', markerfacecolor='red', markersize=10)
plt.title('Error Rate vs. K Value')
plt.xlabel('K')
plt.ylabel('Error Rate')<jupyter_output><empty_output><jupyter_text>## Retrain with new K Value<jupyter_code>knn = KNeighborsClassifier(n_neighbors=30)
knn.fit(X_train, y_train) # give input and output so the model can learn the pattern
pred = knn.predict(X_test) # give input to model to predict output value itself
print(confusion_matrix(y_test, pred)) # now compare the predicted values with the actual values
print('\n')
print(classification_report(y_test, pred))
knn = KNeighborsClassifier(n_neighbors=31)
knn.fit(X_train, y_train) # give input and output so the model can learn the pattern
pred = knn.predict(X_test) # give input to model to predict output value itself
print(confusion_matrix(y_test, pred)) # now compare the predicted values with the actual values
print('\n')
print(classification_report(y_test, pred))<jupyter_output>[[123 29]
[ 19 129]]
precision recall f1-score support
0 0.87 0.81 0.84 152
1 0.82 0.87 0.84 148
accuracy 0.84 300
macro avg 0.84 0.84 0.84 300
weighted avg 0.84 0.84 0.84 300
|
no_license
|
/K Nearest Neighbors Project.ipynb
|
hunain-saeed/KNN-pdsml
| 10 |
<jupyter_start><jupyter_text><jupyter_code>import pandas as pd
df = pd.read_csv('drug review processed.csv.gz')
df.info()
df.head()
df = df.drop(columns=df.columns[0])
df.head()
df.groupby('vaderSentimentLabel').size()
import matplotlib.pyplot as plt
df.groupby('vaderSentimentLabel').count().plot.bar()
plt.show()
df.groupby('ratingSentimentLabel').size()
df.groupby('ratingSentimentLabel').count().plot.bar()
plt.show()
df.groupby('ratingSentiment').size()
positive_vader_sentiments = df[df.ratingSentiment == 2]
positive_string = []
for s in positive_vader_sentiments.cleanReview:
positive_string.append(s)
positive_string = pd.Series(positive_string).str.cat(sep=' ')
from wordcloud import WordCloud
wordcloud = WordCloud(width=2000,height=1000,max_font_size=200).generate(positive_string)
plt.imshow(wordcloud,interpolation='bilinear')
plt.show()
for s in positive_vader_sentiments.cleanReview[:20]:
if 'side effect' in s:
print(s)
negative_vader_sentiments = df[df.ratingSentiment == 1]
negative_string = []
for s in negative_vader_sentiments.cleanReview:
negative_string.append(s)
negative_string = pd.Series(negative_string).str.cat(sep=' ')
from wordcloud import WordCloud
wordcloud = WordCloud(width=2000,height=1000,max_font_size=200).generate(negative_string)
plt.imshow(wordcloud,interpolation='bilinear')
plt.axis('off')
plt.show()
for s in negative_vader_sentiments.cleanReview[:20]:
if 'side effect' in s:
print(s)
neutral_vader_sentiments = df[df.ratingSentiment == 0]
neutral_string = []
for s in neutral_vader_sentiments.cleanReview:
neutral_string.append(s)
neutral_string = pd.Series(neutral_string).str.cat(sep=' ')
from wordcloud import WordCloud
wordcloud = WordCloud(width=2000,height=1000,max_font_size=200).generate(neutral_string)
plt.imshow(wordcloud,interpolation='bilinear')
plt.axis('off')
plt.show()
for s in neutral_vader_sentiments.cleanReview[:20]:
if 'side effect' in s:
print(s)
from sklearn.feature_extraction.text import TfidfVectorizer
tfidf = TfidfVectorizer(stop_words='english',ngram_range=(1,2))
features = tfidf.fit_transform(df.cleanReview)
labels = df.vaderSentiment
features.shape
from sklearn.model_selection import train_test_split
from sklearn.feature_extraction.text import TfidfTransformer
from sklearn.naive_bayes import MultinomialNB
x_train,x_test,y_train,y_test = train_test_split(df['cleanReview'],df['ratingSentimentLabel'],random_state=0)
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import LinearSVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score
from sklearn.preprocessing import StandardScaler
models = [RandomForestClassifier(n_estimators=200,max_depth=3,random_state=0),LinearSVC(),MultinomialNB(),LogisticRegression(random_state=0,solver='lbfgs',max_iter=2000,multi_class='auto')]
CV = 5
cv_df = pd.DataFrame(index=range(CV * len(models)))
entries = []
for model in models:
model_name = model.__class__.__name__
accuracies = cross_val_score(model,features,labels,scoring='accuracy',cv=CV)
for fold_idx,accuracy in enumerate(accuracies):
entries.append((model_name,fold_idx,accuracy))
cv_df = pd.DataFrame(entries,columns=['model_name','fold_idx','accuracy'])
cv_df
cv_df.groupby('model_name').accuracy.mean()
from sklearn.preprocessing import Normalizer
model = LinearSVC('l2')
x_train,x_test,y_train,y_test = train_test_split(features,labels,test_size=0.25,random_state=0)
normalize = Normalizer()
x_train = normalize.fit_transform(x_train)
x_test = normalize.transform(x_test)
model.fit(x_train,y_train)
y_pred = model.predict(x_test)
from sklearn.metrics import accuracy_score
print(accuracy_score(y_test, y_pred))
from sklearn.metrics import confusion_matrix
conf_mat = confusion_matrix(y_test,y_pred)
conf_mat
from mlxtend.plotting import plot_confusion_matrix
fig,ax = plot_confusion_matrix(conf_mat=conf_mat,colorbar=True,show_absolute=True,cmap='viridis')
from sklearn.metrics import classification_report
print(classification_report(y_test,y_pred,target_names= df['ratingSentimentLabel'].unique()))
#Precision is the ratio between correctly identified instances of a class and all instances predicted as that class
#Positive precision means whenever positive was predicted, the model was right 94% of the time
#Neutral precision means whenever neutral was predicted, the model was right 93% of the time
#Negative precision means whenever negative was predicted, the model was right 93% of the time
# With a weighted average of 93%, this model is quite good at avoiding false alarms
#Recall (also called sensitivity or True Positive Rate) is the ratio between correctly identified instances of a class and all actual instances of that class
#Positive recall means positive was selected 63% of the times it should have been selected
#Neutral recall means neutral was selected 95% of the times it should have been selected
#Negative recall means negative was selected 94% of the times it should have been selected
# With a weighted average of 93%, the model classifies well overall, but it is weaker on the positive class, whose recall is only 63%
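# Sanity check: the per-class figures quoted above can be re-derived from the confusion
# matrix computed earlier (rows of conf_mat are true labels, columns are predicted labels).
import numpy as np
per_class_precision = np.diag(conf_mat) / conf_mat.sum(axis=0)  # TP / (TP + FP) per predicted class
per_class_recall = np.diag(conf_mat) / conf_mat.sum(axis=1)     # TP / (TP + FN) per true class
print('precision per class:', per_class_precision)
print('recall per class:', per_class_recall)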
<jupyter_output><empty_output>
|
no_license
|
/Opinion_Mining_using_the_UCI_Drug_Review_Data_(Part_2)_Sentiment_Prediction_Using_Machine_Learning_Classification_Algorithms(Scikit_Learn_Implementatition).ipynb
|
zainabnatiq/Opinion-Mining-using-the-UCI-Drug-Review-Dataset
| 1 |
<jupyter_start><jupyter_text><jupyter_code><jupyter_output><empty_output><jupyter_text># Introduction to Regression with Neural Networks in Tensorflow
There are many definitions of a regression problem but in our case, we're going to simplify it: predicting a numerical variable based on some other combination of variables, even shorter... predicting a number<jupyter_code># Import TensorFlow
import tensorflow as tf
print(tf.__version__)<jupyter_output>2.4.1
<jupyter_text>## Creating data to view and fit<jupyter_code>import numpy as np
import matplotlib.pyplot as plt
# Create features
X = np.array([-7.0, -4.0, -1.0, 2.0, 5.0, 8.0, 11.0, 14.0])
# Create labels
y = np.array([3.0,6.0,9.0,12.0,15.0,18.0,21.0,24.0])
# Visualize it
plt.scatter(X,y)
y == X + 10<jupyter_output><empty_output><jupyter_text>## Input and output shapes<jupyter_code># Create a demo tensor for our housing price prediction problem
house_info = tf.constant(["bedroom", "bathroom", "garage"])
house_price = tf.constant([939700])
house_info, house_price
X[0], y[0]
X[1], y[1]
input_shape = X[0].shape
output_shape = y[0].shape
input_shape, output_shape
X[0].ndim # this is why it has no shape
X[0], y[0] #we want a model that can take an input X and output y, use 1 x value to predict 1 y value
# Turn our Numpy array into tensors
X = tf.cast(tf.constant(X), dtype = tf.float32)
y = tf.cast(tf.constant(y), dtype = tf.float32)
X, y
input_shape = X[0].shape
output_shape = y[0].shape
input_shape, output_shape<jupyter_output><empty_output><jupyter_text>## Steps in modelling with TensorFlow
1. **Creating a model** - define the input and output layers, as well as the hidden layers of a deep learning model.
2. **Compiling a model** - define the loss function (in other words, the function which tells our model how wrong it is), the optimizer (which tells our model how to improve the patterns it's learning) and the evaluation metrics (what we can use to interpret the performance of our model).
3. **Fitting a model** - letting the model try to find patterns between X and y (features and labels).<jupyter_code>tf.random.set_seed(42)
# 1. Create a model using the Sequential API
model = tf.keras.Sequential([
tf.keras.layers.Dense(1)
])
#2. Compile the model
model.compile(loss = tf.keras.losses.mae, optimizer = tf.keras.optimizers.SGD(), metrics = ["mae"])
# 3. Fit the model
model.fit(X, y, epochs = 100)
# check out x and y
X, y
# try and make a prediction
y_pred = model.predict([17.0])
y_pred<jupyter_output><empty_output><jupyter_text>## Imporoving our model
We can improve our model by altering the steps we took to create it.
1. **Creating a model** - here we might add more layers, increase the number of hidden units (also called neurons) within each of the hidden layers, change the activation function of each layer.
2. **Compiling a model** - here we might change the optimization function or perhaps the learning rate of the optimization function
3. **Fitting a model** - here we might fit a model for more epochs (leave it training for longer or on more data)<jupyter_code># Let's rebuild our model
import tensorflow as tf
from tensorflow import keras
# 1. Create the model
model = tf.keras.Sequential([
tf.keras.layers.Dense(1)
])
#2. Compile the model
model.compile(loss = tf.keras.losses.mae, optimizer = tf.keras.optimizers.SGD(), metrics = ['mae'])
#3. Fit the model
model.fit(X, y, epochs = 100)
# Let's see if the model's prediction has improved
model.predict([17.0])
# Let's rebuild the model
model = tf.keras.Sequential([
tf.keras.layers.Dense(100, activation = 'relu'),
tf.keras.layers.Dense(100, activation = 'relu'),
tf.keras.layers.Dense(1)
])
model.compile(loss = tf.keras.losses.mae, optimizer = tf.keras.optimizers.Adam(), metrics = ['mae'])
model.fit(X, y, epochs = 500)
model.predict([17.0])<jupyter_output><empty_output><jupyter_text>## Evaluating a model<jupyter_code><jupyter_output><empty_output>
|
permissive
|
/neural_network_regression_with_tensorflow.ipynb
|
artms-18/tensorflow_fundamentals
| 7 |
<jupyter_start><jupyter_text>
## 2. Linear regression with multiple variables
In this part, you will implement linear regression with multiple variables to
predict the prices of houses. You want to predict the price of a house given its size and number of bedrooms. To achieve this, you need to train a model on data collected on housing prices given its size and number of bedrooms.
The file ex1data2.txt contains a training set of housing prices in Portland, Oregon. The first column is the size of the house (in square feet), the
second column is the number of bedrooms, and the third column is the price
of the house.
<jupyter_code>import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
# Load Data
data = pd.read_csv('data/ex1data2.txt', names=['size','bedroom','price']);
X = data.iloc[:, :2].values;
y = data.iloc[:, 2].values;
m = len(y);
data.head()
data.shape
def h_of_x(X, theta):
return np.dot(X,theta)
def computeCostMulti(X, y, theta):
m = len(y)
J = (np.sum((np.dot(X,theta) - y)**2))/(2*m)
return J<jupyter_output><empty_output><jupyter_text>### 2.1 Feature Normalization
<jupyter_code>def featureNormalize(X):
X_norm = X.copy()
mu = np.zeros(X.shape[1])
sigma = np.zeros(X.shape[1])
#======================
mu = np.mean(X,axis=0)
sigma = np.std(X,axis=0)
X_norm = np.divide((X - mu),sigma)
#====================
return X_norm, mu, sigma
# Scale features and set them to zero mean
print ('Normalizing Features ...')
X, mu, sigma = featureNormalize(X)
print ('[mu] [sigma]')
print (mu, sigma)
# Add intercept term to X
X = np.concatenate((np.ones((m, 1)), X), axis=1)<jupyter_output>Normalizing Features ...
[mu] [sigma]
[2000.68085106 3.17021277] [7.86202619e+02 7.52842809e-01]
<jupyter_text>### 2.2 Gradient Descent
**Implementation Note:** In the multivariate case, the cost function can
also be written in the following vectorized form:
$$J(\theta) = \frac{1}{2m}(X\theta-\vec{y})^T (X\theta-\vec{y})$$
$$X =
\begin{bmatrix}
....(x^{(1)})^T....\\
....(x^{(2)})^T....\\
. \\
. \\
. \\
....(x^{(m)})^T....
\end{bmatrix}, \ \ \ \ \ \ \ \ \
\vec{y} =
\begin{bmatrix}
y^{(1)}\\
y^{(2)}\\
.\\
. \\
.\\
y^{(m)}
\end{bmatrix}$$
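For reference, the corresponding vectorized form of the gradient-descent update (the form implemented in `gradientDescentMulti` below) is
$$\theta := \theta - \frac{\alpha}{m}X^T(X\theta-\vec{y})$$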
<jupyter_code>def gradientDescentMulti(X, y, theta, alpha, num_iters):
m = len(y); # number of training examples
J_history = np.zeros([num_iters, 1]);
for iter in range(num_iters):
#============ Your code here =============
#theta = theta - (alpha/m)*np.sum((h_of_x(X,theta)-y)[:,None]*X,axis=0)
theta = theta - (alpha/m)*(np.dot(X.T,(h_of_x(X,theta)-y)))
#==================================
J_history[iter] = computeCostMulti(X, y, theta)
return theta, J_history
print(y.shape)
print(h_of_x(X,np.zeros(3)).shape)
# Choose some alpha value
alpha = 0.01
num_iters = 400
def gd(alpha):
print ('Running gradient descent ...')
# Init Theta and Run Gradient Descent
thet = np.zeros(3)
thet, J_history = gradientDescentMulti(X, y, thet, alpha, num_iters)
return thet, J_history
thet, J_history = gd(alpha)<jupyter_output>Running gradient descent ...
<jupyter_text>### 2.3 Selecting learning rates
In this part of the exercise, you will get to try out different learning rates for
the dataset and find a learning rate that converges quickly. We recommend
trying values of the learning rate $\alpha$ on a log-scale, at multiplicative
steps of about 3 times the previous value (i.e., 0.3, 0.1, 0.03, 0.01 and so on).
You may also want to adjust the number of iterations you are running if that
will help you see the overall trend in the curve.<jupyter_code># Choose some alpha value
alphas = [0.03,0.01,1,0.3]
thetas = []
num_iters = 400
def costs():
J = []
for i in alphas:
alpha = i
thet, j = gd(i)
J.append(j)
thetas.append(thet)
return J
J_hists = costs()
axs = []
fig = plt.figure(figsize=(15,6))
for i in range(len(J_hists)):
    axs.append(fig.add_subplot(2, 2, i + 1))
axs[-1].plot(J_hists[i])
axs[-1].set_xlabel('Number of iterations')
axs[-1].set_ylabel('Cost J')
#axs[-1].title = str(alphas[i])
plt.show()
<jupyter_output>Running gradient descent ...
Running gradient descent ...
Running gradient descent ...
Running gradient descent ...
<jupyter_text>Now make a prediction for a house with 1650 square feet and
3 bedrooms<jupyter_code>print ('Theta computed from gradient descent: ')
print (thet)
price = None
# Estimate the price of a 1650 sq-ft, 3 br house
####### START CODE ###########
example = np.array([[1, 1650, 3]]) # the test example
example_norm = np.ones(example.shape) # a variable is initialized with an array of ones of same shape as the test example
# this variable will hold the normalized features
example_norm = example_norm.astype(np.float64) # the array datatype must be float, not int (np.float is deprecated)
# apply feature scaling and normalization
example_norm[:, 1:] = np.divide((example[:, 1:] - mu),sigma)
# Now make predictions
price = h_of_x(example_norm, thet)
####### END CODE ############
print ('Predicted price of a 1650 sq-ft, 3 br house')
print ('using gradient descent: ')
print (price)
thet[0] * example_norm[:, 0] # contribution of the intercept term to the gradient-descent prediction
example_norm[:, 1:]<jupyter_output><empty_output><jupyter_text>### 3.3 Normal Equations
$$\theta = (X^TX)^{-1}X^T\vec{y}$$
Using this formula does not require any feature scaling, and you will get
an exact solution in one calculation: there is no "loop until convergence" like
in gradient descent.
Complete the code in normaleqn.py to use the formula above to calculate
$\theta$. Remember that while you don't need to scale your features, we still
need to add a column of 1's to the X matrix to have an intercept term<jupyter_code>def normalEqn(X,y):
return np.dot((np.linalg.inv(np.dot(X.T,X))),np.dot(X.T,y))
print ('Solving with normal equations...')
# Calculate the parameters from the normal equation
theta = normalEqn(X, y)
# Display normal equation's result
print ('Theta computed from the normal equations:')
print (' %s \n' % theta)<jupyter_output>Solving with normal equations...
Theta computed from the normal equations:
[340412.65957447 109447.79646964 -6578.35485416]
<jupyter_text>Now, since you have found $\theta$ using normal equation
method, use it to make a price prediction for a 1650-square-foot house with
3 bedrooms. You should find that gives the same predicted price as the value
you obtained using the model fit with gradient descent.<jupyter_code>price = None
# Estimate the price of a 1650 sq-ft, 3 br house
####### START CODE ###########
# theta was computed on the normalized features, so reuse example_norm from the gradient-descent cell
price = h_of_x(example_norm, theta)
####### END CODE ############
print ('Predicted price of a 1650 sq-ft, 3 br house')
print ('using the normal equations: ')
print (price)<jupyter_output>Predicted price of a 1650 sq-ft, 3 br house
using gradient descent:
None
|
no_license
|
/Beginners/Week3/codelab/Linear regression with multiple variables.ipynb
|
Otsebolu/cycle2-resource-materials
| 7 |
<jupyter_start><jupyter_text># Preview one original file<jupyter_code>filename = folder+'TRABEAE12903CDD8EE.mp3'
orig, sr = librosa.load(filename, sr=16000)
IPython.display.Audio(orig, rate=sr)<jupyter_output><empty_output><jupyter_text># Sonify mel-spectrogram of that example<jupyter_code>recon, sr = transform_and_restore(filename, sr=16000, n_fft=512, n_mels=48, hop_length=256, hop_stride=10, hop_fill=True, plot_mels=True)
IPython.display.Audio(recon, rate=sr)<jupyter_output><empty_output><jupyter_text># Generate all the audio examples<jupyter_code>mels = [128, 96, 48, 32, 24, 16, 8]
temp = [1, 2, 3, 4, 5, 10]
srs = [12, 16]
files = []
for r, d, f in os.walk(folder):
for file in f:
if '.mp3' in file:
for n_mel in mels:
for tmp in temp:
for sr in srs:
recon, o_sr = transform_and_restore(os.path.join(folder, file), sr=sr*1000, n_mels=n_mel, hop_stride=tmp, hop_fill=True)
new_file_directory = os.path.join(folder, file.replace('.mp3',''))
if not os.path.exists(new_file_directory):
os.makedirs(new_file_directory)
new_filename = '{}/{}k-mel{}-x{}'.format(new_file_directory, sr, n_mel , tmp)
librosa.output.write_wav(new_filename+'.wav', recon, o_sr)
# Convert wav to flac
! ffmpeg -i {new_filename}.wav -vn -ar 16000 -sample_fmt s16 -ss 0 -t 30 {new_filename}.flac
os.remove(new_filename+'.wav')<jupyter_output><empty_output>
|
non_permissive
|
/Sonify.ipynb
|
andrebola/ICASSP2020
| 3 |
<jupyter_start><jupyter_text># MEE - DATA ACQUISITIONFirst attempt using Matheus's phone with n=1 and n=9. This is where we realized that we are not accounting for all the uncertainties. NORMALIZED ERROR INCOMPATIBLE.## Importing Packages<jupyter_code>import matplotlib.pyplot as plt
from numpy import array, absolute, sqrt<jupyter_output><empty_output><jupyter_text>## Declarando os Arrays### n=1<jupyter_code>encoder1 = array([000.0000, 010.1250, 020.0249, 030.1499, 040.0499, 050.4000, 060.2999, 070.1999, 080.0999, 090.0000])
app1 = array([1.86371, 10.28004, 20.11119, 29.67642, 39.47946, 49.90371, 59.80443, 70.37051, 81.15369, 90.8339])
ictzApp1 = array([0.00245, 0.00472, 0.15142, 0.02835, 0.04445, 0.12088, 0.10862, 0.0249, 0.66702, 0.06263])
ictzEnc1 = array([0.225, 0.225, 0.225, 0.225, 0.225, 0.225, 0.225, 0.225, 0.225, 0.225])<jupyter_output><empty_output><jupyter_text>### n=9<jupyter_code>encoder9 = array([000.0000, 009.8999, 020.2500, 030.1600, 040.2750, 049.9500, 060.0750, 070.8750, 080.3249, 090.0000])
app9 = array([2.08507, 9.61817, 20.3044, 29.7121, 38.96702, 48.85498, 59.3227, 70.47999, 81.40712, 90.16033])
ictzApp9 = array([0.01399, 0.03532, 0.06421, 0.13157, 0.15539, 0.18738, 0.17565, 0.33738, 0.10946, 0.2565])
ictzEnc9 = array([0.225, 0.225, 0.225, 0.225, 0.225, 0.225, 0.225, 0.225, 0.225, 0.225])<jupyter_output><empty_output><jupyter_text>## Aparência dos Gráficos<jupyter_code>#DEFININDO ALGUNS PARÂMETROS DO GRÁFICO
%matplotlib inline
%config InlineBackend.figure_format = 'svg'
params = {'figure.figsize': [10, 5],
'axes.labelsize': 15,
'axes.titlesize':19,
'font.size': 13,
'legend.fontsize': 13,
'xtick.labelsize': 11,
'ytick.labelsize': 11,
'axes.axisbelow': True
}
plt.rcParams.update(params)<jupyter_output><empty_output><jupyter_text>## Plotando### n=1<jupyter_code>#PLOTANDO A IDEAL
plt.scatter(encoder1, encoder1, s=100, alpha=0.7, label="IDEAL", color="crimson")
# Plot the measured data for comparison
plt.scatter(encoder1, app1, s=70, label="OBTIDO", color="darkblue")
# Other plot settings
plt.legend()
plt.title('Comparação com n=1')
plt.xlabel('Ângulo do Encoder')
plt.ylabel('Ângulo do Aplicativo')
plt.xlim(0, 90)
plt.ylim(0, 90)
plt.xticks(array([0, 10, 20, 30, 40, 50, 60, 70, 80, 90, 100]), ['0°', '10°', '20°', '30°', '40°', '50°', '60°', '70°', '80°', '90°'])
plt.yticks(array([0, 10, 20, 30, 40, 50, 60, 70, 80, 90, 100]), ['0°', '10°', '20°', '30°', '40°', '50°', '60°', '70°', '80°', '90°'])
plt.grid(True)
plt.savefig('PDF/celularMatheusN1.pdf', format='pdf', bbox_inches = 'tight')
plt.savefig('EPS/celularMatheusN1.eps', format='eps', bbox_inches = 'tight')
# Display the figure
plt.show()
# Plot the ideal line
plt.errorbar(encoder1, encoder1, yerr=ictzEnc1, alpha=0.7, label="IDEAL", color="crimson", fmt='o')
# Plot the measured data for comparison
plt.errorbar(encoder1, app1, yerr=ictzApp1, label="OBTIDO", color="darkblue", fmt='o')
# Other plot settings
plt.legend()
plt.title('Ampliação no ângulo de 80° com n=1')
plt.xlabel('Ângulo do Encoder')
plt.ylabel('Ângulo do Aplicativo')
plt.xlim(80, 80.2)
plt.ylim(79.50, 82.1)
plt.grid(True)
plt.savefig('PDF/AmpliacaoCelularMatheusN1.pdf', format='pdf', bbox_inches = 'tight')
plt.savefig('EPS/AmpliacaoCelularMatheusN1.eps', format='eps', bbox_inches = 'tight')
# Display the figure
plt.show()
<jupyter_output><empty_output><jupyter_text>### n=9<jupyter_code>#PLOTANDO A IDEAL
plt.scatter(encoder9, encoder9, s=100, alpha=0.7, label="IDEAL", color="crimson")
# Plot the measured data for comparison
plt.scatter(encoder9, app9, s=70, label="OBTIDO", color="darkblue")
# Other plot settings
plt.legend()
plt.title('Comparação com n=9')
plt.xlabel('Ângulo do Encoder')
plt.ylabel('Ângulo do Aplicativo')
plt.xlim(0, 90)
plt.ylim(0, 90)
plt.xticks(array([0, 10, 20, 30, 40, 50, 60, 70, 80, 90, 100]), ['0°', '10°', '20°', '30°', '40°', '50°', '60°', '70°', '80°', '90°'])
plt.yticks(array([0, 10, 20, 30, 40, 50, 60, 70, 80, 90, 100]), ['0°', '10°', '20°', '30°', '40°', '50°', '60°', '70°', '80°', '90°'])
plt.grid(True)
plt.savefig('PDF/celularMatheusN9.pdf', format='pdf', bbox_inches = 'tight')
plt.savefig('EPS/celularMatheusN9.eps', format='eps', bbox_inches = 'tight')
# Display the figure
plt.show()
# Plot the ideal line
plt.errorbar(encoder9, encoder9, yerr=ictzEnc9, alpha=0.7, label="IDEAL", color="crimson", fmt='o')
# Plot the measured data for comparison
plt.errorbar(encoder9, app9, yerr=ictzApp9, label="OBTIDO", color="darkblue", fmt='o')
# Other plot settings
plt.legend()
plt.title('Ampliação no ângulo de 80° com n=9')
plt.xlabel('Ângulo do Encoder')
plt.ylabel('Ângulo do Aplicativo')
plt.xlim(40, 40.55)
plt.ylim(38.5, 40.75)
plt.grid(True)
plt.savefig('PDF/AmpliacaoCelularMatheusN9.pdf', format='pdf', bbox_inches = 'tight')
plt.savefig('EPS/AmpliacaoCelularMatheusN9.eps', format='eps', bbox_inches = 'tight')
# Display the figure
plt.show()
<jupyter_output><empty_output><jupyter_text>## ERROS NORMALIZADOS#### n = 1<jupyter_code># GERANDO O ARRAY DE ERROS NORMALIZADOS PARA N = 1
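# Normalized error for each angle: En = |app - encoder| / sqrt((2*u_app)^2 + (2*u_enc)^2),
# using expanded (k = 2) uncertainties; the dashed line at En = 1 plotted below marks the
# usual compatibility limit, so points above it indicate incompatible measurements.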
erroNormal1 = []
pontilhada = []
cor = []
for i in range(0, len(app1)):
dividendo = absolute(app1[i] - encoder1[i])
divisor = sqrt((2*ictzApp1[i])**2 + (2*ictzEnc1[i])**2)
erroNormalizadoAtual = dividendo / divisor
erroNormal1.append(erroNormalizadoAtual)
pontilhada.append(1)
cor.append("green")
erroNormal1 = array(erroNormal1)
pontilhada = array(pontilhada)
# Plot the normalized errors
plt.scatter(encoder1, erroNormal1, s=150, alpha=0.8, label="ERRO NORMALIZADO", color="darkblue")
# Plot the dashed reference line
plt.plot(encoder1, pontilhada, '--', color="black")
# Other plot settings
plt.legend()
plt.title('Erros Normalizados com n=1')
plt.xlabel('Ângulo')
plt.ylabel('Erro Normalizado')
plt.xlim(0, 90)
plt.ylim(0, 5)
plt.xticks(array([0, 10, 20, 30, 40, 50, 60, 70, 80, 90]), ['0°', '10°', '20°', '30°', '40°', '50°', '60°', '70°', '80°', '90°'])
plt.grid(True)
plt.savefig('PDF/ErroNormalizadoN1.pdf', format='pdf', bbox_inches = 'tight')
plt.savefig('EPS/ErroNormalizadoN1.eps', format='eps', bbox_inches = 'tight')
plt.plot()<jupyter_output><empty_output><jupyter_text>#### n = 9<jupyter_code># GERANDO O ARRAY DE ERROS NORMALIZADOS PARA N = 9
erroNormal9 = []
pontilhada = []
for i in range(0, len(app9)):
dividendo = absolute(app9[i] - encoder9[i])
divisor = sqrt((2*ictzApp9[i])**2 + (2*ictzEnc9[i])**2)
erroNormalizadoAtual = dividendo / divisor
erroNormal9.append(erroNormalizadoAtual)
pontilhada.append(1)
erroNormal9 = array(erroNormal9)
pontilhada = array(pontilhada)
# Plot the normalized errors
plt.scatter(encoder9, erroNormal9, s=150, alpha=0.8, label="ERRO NORMALIZADO", color="darkblue")
# Plot the dashed reference line
plt.plot(encoder9, pontilhada, '--', color="black")
# Other plot settings
plt.legend()
plt.title('Erros Normalizados com n=9')
plt.xlabel('Ângulo')
plt.ylabel('Erro Normalizado')
plt.xlim(0, 90)
plt.ylim(0, 5)
plt.xticks(array([0, 10, 20, 30, 40, 50, 60, 70, 80, 90]), ['0°', '10°', '20°', '30°', '40°', '50°', '60°', '70°', '80°', '90°'])
plt.grid(True)
plt.savefig('PDF/ErroNormalizadoN9.pdf', format='pdf', bbox_inches = 'tight')
plt.savefig('EPS/ErroNormalizadoN9.eps', format='eps', bbox_inches = 'tight')
plt.plot()<jupyter_output><empty_output>
|
non_permissive
|
/Aquisicoes/Primeira Tentativa/30.04.2019/Primeira_Tentativa.ipynb
|
Mathcd/MEE-sensorsProject
| 8 |
<jupyter_start><jupyter_text>Preprocessing<jupyter_code>netflix_titles_df = pd.read_csv('netflix_titles.csv')
netflix_titles_df.head()
netflix_titles_df.drop(netflix_titles_df.columns[[0,1,5,6,7,9]], axis=1, inplace=True)
netflix_titles_df.count()
null_rows = len(netflix_titles_df[netflix_titles_df.isna().any(axis=1)])
print(f'Rows with NaNs: {null_rows} ({(null_rows/netflix_titles_df.shape[0])*100:.0f}%)')
netflix_titles_df.fillna('', inplace=True)
netflix_titles_df.head()
netflix_titles_df[['director','cast']] = netflix_titles_df[['director','cast']].applymap(lambda x: ' '.join(x.replace(' ', '').split(',')[:3]))
netflix_titles_df.head()
netflix_titles_df['title_dup'] = netflix_titles_df['title']
titles_corpus = netflix_titles_df.apply(' '.join, axis=1)
titles_corpus.head
tfidf_vectorizer_params = TfidfVectorizer(lowercase=True, stop_words='english', ngram_range=(1, 3), max_df = .5)<jupyter_output><empty_output><jupyter_text>Text Vectorization <jupyter_code>tfidf_vectorizer = tfidf_vectorizer_params.fit_transform(titles_corpus)
pd.DataFrame(tfidf_vectorizer.toarray(), columns=tfidf_vectorizer_params.get_feature_names_out())
pickle.dump(tfidf_vectorizer, open('tfidf_vectorizer.pickle', 'wb'))
vects_cos_sim = cosine_similarity(tfidf_vectorizer, tfidf_vectorizer)
pd.DataFrame(data=vects_cos_sim, index=netflix_titles_df['title'], columns=netflix_titles_df['title']).head()<jupyter_output><empty_output>
|
no_license
|
/Netflix_Shows_Recommendation_API/preprocessing.ipynb
|
AnantShankhdhar/Recommender-Systems
| 2 |
<jupyter_start><jupyter_text># Pretrained BERT models<jupyter_code>import sys
package_dir = "../input/ppbert/pytorch-pretrained-bert/pytorch-pretrained-BERT"
sys.path.append(package_dir)
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import torch.utils.data
import numpy as np
import pandas as pd
from tqdm import tqdm
import os
import warnings
from pytorch_pretrained_bert import BertTokenizer, BertForSequenceClassification, BertAdam
from pytorch_pretrained_bert import BertConfig
import gc
warnings.filterwarnings(action='once')
device = torch.device('cuda')
def convert_lines(example, max_seq_length,tokenizer):
max_seq_length -=2
all_tokens = []
longer = 0
for text in tqdm(example):
tokens_a = tokenizer.tokenize(text)
if len(tokens_a)>max_seq_length:
tokens_a = tokens_a[:max_seq_length]
longer += 1
one_token = tokenizer.convert_tokens_to_ids(["[CLS]"]+tokens_a+["[SEP]"])+[0] * (max_seq_length - len(tokens_a))
all_tokens.append(one_token)
return np.array(all_tokens)
MAX_SEQUENCE_LENGTH = 220
SEED = 1234
BATCH_SIZE = 32
BERT_MODEL_PATH = '../input/bert-pretrained-models/uncased_l-12_h-768_a-12/uncased_L-12_H-768_A-12/'
LARGE_BERT_MODEL_PATH = '../input/bert-pretrained-models/uncased_l-24_h-1024_a-16/uncased_L-24_H-1024_A-16/'
np.random.seed(SEED)
torch.manual_seed(SEED)
torch.cuda.manual_seed(SEED)
torch.backends.cudnn.deterministic = True
# Pretrained BERT models - Google's pretrained BERT model
BERT_SMALL_PATH = '../input/bert-pretrained-models/uncased_l-12_h-768_a-12/uncased_L-12_H-768_A-12/'
BERT_LARGE_PATH = '../input/bert-pretrained-models/uncased_l-24_h-1024_a-16/uncased_L-24_H-1024_A-16/'
# JIGSAW fine-tuned BERT models
JIGSAW_BERT_SMALL_MODEL_PATH = '../input/bert-inference/bert/bert_pytorch.bin'
JIGSAW_BERT_LARGE_MODEL_PATH = '../input/jigsawpretrainedbertmodels/jigsaw-bert-large-uncased-len-220-fp16/epoch-1/pytorch_model.bin'
JIGSAW_BERT_SMALL_JSON_PATH = '../input/bert-inference/bert/bert_config.json'
JIGSAW_BERT_LARGE_JSON_PATH = '../input/jigsawpretrainedbertmodels/jigsaw-bert-large-uncased-len-220-fp16/epoch-1/config.json'
NUM_BERT_MODELS = 2
INFER_BATCH_SIZE = 64
train_df = pd.read_csv('../input/jigsaw-unintended-bias-in-toxicity-classification/train.csv')
test_df = pd.read_csv('../input/jigsaw-unintended-bias-in-toxicity-classification/test.csv')
test_preds = np.zeros((test_df.shape[0],NUM_BERT_MODELS))
np.random.seed(SEED)
torch.manual_seed(SEED)
torch.cuda.manual_seed(SEED)
torch.backends.cudnn.deterministic = True
print("Predicting BERT large model......")
# Prepare data
bert_config = BertConfig(JIGSAW_BERT_LARGE_JSON_PATH)
tokenizer = BertTokenizer.from_pretrained(BERT_LARGE_PATH, cache_dir=None,do_lower_case=True)
X_test = convert_lines(test_df["comment_text"].fillna("DUMMY_VALUE"), MAX_SEQUENCE_LENGTH, tokenizer)
test = torch.utils.data.TensorDataset(torch.tensor(X_test, dtype=torch.long))
# Load fine-tuned BERT model
gc.collect()
model = BertForSequenceClassification(bert_config, num_labels=1)
model.load_state_dict(torch.load(JIGSAW_BERT_LARGE_MODEL_PATH))
model.to(device)
for param in model.parameters():
param.requires_grad = False
model.eval()
# Predicting
model_preds = np.zeros((len(X_test)))
test_loader = torch.utils.data.DataLoader(test, batch_size=INFER_BATCH_SIZE, shuffle=False)
tk0 = tqdm(test_loader)
for i, (x_batch,) in enumerate(tk0):
pred = model(x_batch.to(device), attention_mask=(x_batch > 0).to(device), labels=None)
model_preds[i * INFER_BATCH_SIZE:(i + 1) * INFER_BATCH_SIZE] = pred[:, 0].detach().cpu().squeeze().numpy()
test_preds[:,0] = torch.sigmoid(torch.tensor(model_preds)).numpy().ravel()
del model
gc.collect()
print("Predicting BERT small model......")
bert_config = BertConfig(JIGSAW_BERT_SMALL_JSON_PATH)
tokenizer = BertTokenizer.from_pretrained(BERT_SMALL_PATH, cache_dir=None,do_lower_case=True)
X_test = convert_lines(test_df["comment_text"].fillna("DUMMY_VALUE"), MAX_SEQUENCE_LENGTH, tokenizer)
test = torch.utils.data.TensorDataset(torch.tensor(X_test, dtype=torch.long))
# # # Load fine-tuned BERT model
model = BertForSequenceClassification(bert_config, num_labels=1)
model.load_state_dict(torch.load(JIGSAW_BERT_SMALL_MODEL_PATH))
model.to(device)
for param in model.parameters():
param.requires_grad = False
model.eval()
# Predicting
model_preds = np.zeros((len(X_test)))
test_loader = torch.utils.data.DataLoader(test, batch_size=INFER_BATCH_SIZE, shuffle=False)
tk0 = tqdm(test_loader)
for i, (x_batch,) in enumerate(tk0):
pred = model(x_batch.to(device), attention_mask=(x_batch > 0).to(device), labels=None)
model_preds[i * INFER_BATCH_SIZE:(i + 1) * INFER_BATCH_SIZE] = pred[:, 0].detach().cpu().squeeze().numpy()
test_preds[:,1] = torch.sigmoid(torch.tensor(model_preds)).numpy().ravel()
del model
gc.collect()
# Sub-model prediction
bert_submission = pd.DataFrame.from_dict({
'id': test_df['id'],
'prediction': test_preds.mean(axis=1)})
bert_submission.to_csv('bert_submission.csv', index=False)
<jupyter_output><empty_output><jupyter_text>**Credits**
This notebook was mainly inspired by the following awesome kernel scripts:
https://www.kaggle.com/gpreda/jigsaw-fast-compact-solution
https://www.kaggle.com/christofhenkel/how-to-preprocessing-for-glove-part2-usage
https://www.kaggle.com/shubham505/apply-by-simple-bilstm
# Preparations
## Datasets
You will need to add the following Kaggle dataset for pickled pretrained embeddings
https://www.kaggle.com/chriscc/pickled-word-embedding
## Import packages<jupyter_code># This Python 3 environment comes with many helpful analytics libraries installed
# It is defined by the kaggle/python docker image: https://github.com/kaggle/docker-python
# For example, here's several helpful packages to load in
import numpy as np # linear algebra
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)
import gc
import re
# Input data files are available in the "../input/" directory.
# For example, running this (by clicking run or pressing Shift+Enter) will list the files in the input directory
import os
print(os.listdir("../input"))
from keras.preprocessing import text, sequence
from keras import backend as K
from keras.models import Model
from keras.layers import Input, Dense, Embedding, SpatialDropout1D, add, concatenate
from keras.layers import CuDNNLSTM, Bidirectional, GlobalMaxPooling1D, GlobalAveragePooling1D
from keras.preprocessing import text, sequence
from keras.callbacks import LearningRateScheduler
from keras.engine.topology import Layer
from keras import initializers, regularizers, constraints, optimizers, layers
from tqdm._tqdm_notebook import tqdm_notebook as tqdm
import pickle
tqdm.pandas()<jupyter_output><empty_output><jupyter_text>## Configurations<jupyter_code>EMBEDDING_PATHS = ['../input/pickled-word-embedding/crawl-300d-2M.pkl',
'../input/pickled-word-embedding/glove.840B.300d.pkl']
NUM_MODELS = 2 # The number of classifiers we want to train
BATCH_SIZE = 512 # can be tuned
LSTM_UNITS = 128 # can be tuned
DENSE_HIDDEN_UNITS = 4*LSTM_UNITS # can be tuned
EPOCHS = 4 # The number of epochs we want to train for each classifier
MAX_LEN = 220 # can be tuned
# in bert + lstm blend kernel was identity_columns = [
# 'male', 'female', 'homosexual_gay_or_lesbian', 'christian', 'jewish',
# 'muslim', 'black', 'white', 'psychiatric_or_mental_illness']
IDENTITY_COLUMNS = [
'transgender', 'female', 'homosexual_gay_or_lesbian', 'muslim', 'hindu',
'white', 'black', 'psychiatric_or_mental_illness', 'jewish'
]
AUX_COLUMNS = ['target', 'severe_toxicity','obscene','identity_attack','insult','threat']
TEXT_COLUMN = 'comment_text'
TARGET_COLUMN = 'target'<jupyter_output><empty_output><jupyter_text>## Utils<jupyter_code>#----------------------------------- Preprocessing-------------------------------------#
SYMBOLS_TO_ISOLATE = '.,?!-;*"…:—()%#$&_/@\・ω+=”“[]^–>\\°<~•≠™ˈʊɒ∞§{}·τα❤☺ɡ|¢→̶`❥━┣┫┗O►★©―ɪ✔®\x96\x92●£♥➤´¹☕≈÷♡◐║▬′ɔː€۩۞†μ✒➥═☆ˌ◄½ʻπδηλσερνʃ✬SUPERIT☻±♍µº¾✓◾؟.⬅℅»Вав❣⋅¿¬♫CMβ█▓▒░⇒⭐›¡₂₃❧▰▔◞▀▂▃▄▅▆▇↙γ̄″☹➡«φ⅓„✋:¥̲̅́∙‛◇✏▷❓❗¶˚˙)сиʿ✨。ɑ\x80◕!%¯−flfi₁²ʌ¼⁴⁄₄⌠♭✘╪▶☭✭♪☔☠♂☃☎✈✌✰❆☙○‣⚓年∎ℒ▪▙☏⅛casǀ℮¸w‚∼‖ℳ❄←☼⋆ʒ⊂、⅔¨͡๏⚾⚽Φ×θ₩?(℃⏩☮⚠月✊❌⭕▸■⇌☐☑⚡☄ǫ╭∩╮,例>ʕɐ̣Δ₀✞┈╱╲▏▕┃╰▊▋╯┳┊≥☒↑☝ɹ✅☛♩☞AJB◔◡↓♀⬆̱ℏ\x91⠀ˤ╚↺⇤∏✾◦♬³の|/∵∴√Ω¤☜▲↳▫‿⬇✧ovm-208'‰≤∕ˆ⚜☁'
SYMBOLS_TO_REMOVE = '\n🍕\r🐵\xa0\ue014\t\uf818\uf04a\xad😢🐶️\uf0e0😜😎👊\u200b\u200e😁عدويهصقأناخلىبمغر😍💖💵Е👎😀😂\u202a\u202c🔥😄🏻💥ᴍʏʀᴇɴᴅᴏᴀᴋʜᴜʟᴛᴄᴘʙғᴊᴡɢ😋👏שלוםבי😱‼\x81エンジ故障\u2009🚌ᴵ͞🌟😊😳😧🙀😐😕\u200f👍😮😃😘אעכח💩💯⛽🚄🏼ஜ😖ᴠ🚲‐😟😈💪🙏🎯🌹😇💔😡\x7f👌ἐὶήιὲκἀίῃἴξ🙄H😠\ufeff\u2028😉😤⛺🙂\u3000تحكسة👮💙فزط😏🍾🎉😞\u2008🏾😅😭👻😥😔😓🏽🎆🍻🍽🎶🌺🤔😪\x08‑🐰🐇🐱🙆😨🙃💕𝘊𝘦𝘳𝘢𝘵𝘰𝘤𝘺𝘴𝘪𝘧𝘮𝘣💗💚地獄谷улкнПоАН🐾🐕😆ה🔗🚽歌舞伎🙈😴🏿🤗🇺🇸мυтѕ⤵🏆🎃😩\u200a🌠🐟💫💰💎эпрд\x95🖐🙅⛲🍰🤐👆🙌\u2002💛🙁👀🙊🙉\u2004ˢᵒʳʸᴼᴷᴺʷᵗʰᵉᵘ\x13🚬🤓\ue602😵άοόςέὸתמדףנרךצט😒͝🆕👅👥👄🔄🔤👉👤👶👲🔛🎓\uf0b7\uf04c\x9f\x10成都😣⏺😌🤑🌏😯ех😲Ἰᾶὁ💞🚓🔔📚🏀👐\u202d💤🍇\ue613小土豆🏡❔⁉\u202f👠》कर्मा🇹🇼🌸蔡英文🌞🎲レクサス😛外国人关系Сб💋💀🎄💜🤢َِьыгя不是\x9c\x9d🗑\u2005💃📣👿༼つ༽😰ḷЗз▱ц🤣卖温哥华议会下降你失去所有的钱加拿大坏税骗子🐝ツ🎅\x85🍺آإشء🎵🌎͟ἔ油别克🤡🤥😬🤧й\u2003🚀🤴ʲшчИОРФДЯМюж😝🖑ὐύύ特殊作戦群щ💨圆明园קℐ🏈😺🌍⏏ệ🍔🐮🍁🍆🍑🌮🌯🤦\u200d𝓒𝓲𝓿𝓵안영하세요ЖљКћ🍀😫🤤ῦ我出生在了可以说普通话汉语好极🎼🕺🍸🥂🗽🎇🎊🆘🤠👩🖒🚪天一家⚲\u2006⚭⚆⬭⬯⏖新✀╌🇫🇷🇩🇪🇮🇬🇧😷🇨🇦ХШ🌐\x1f杀鸡给猴看ʁ𝗪𝗵𝗲𝗻𝘆𝗼𝘂𝗿𝗮𝗹𝗶𝘇𝗯𝘁𝗰𝘀𝘅𝗽𝘄𝗱📺ϖ\u2000үսᴦᎥһͺ\u2007հ\u2001ɩye൦lƽh𝐓𝐡𝐞𝐫𝐮𝐝𝐚𝐃𝐜𝐩𝐭𝐢𝐨𝐧Ƅᴨןᑯ໐ΤᏧ௦Іᴑ܁𝐬𝐰𝐲𝐛𝐦𝐯𝐑𝐙𝐣𝐇𝐂𝐘𝟎ԜТᗞ౦〔Ꭻ𝐳𝐔𝐱𝟔𝟓𝐅🐋ffi💘💓ё𝘥𝘯𝘶💐🌋🌄🌅𝙬𝙖𝙨𝙤𝙣𝙡𝙮𝙘𝙠𝙚𝙙𝙜𝙧𝙥𝙩𝙪𝙗𝙞𝙝𝙛👺🐷ℋ𝐀𝐥𝐪🚶𝙢Ἱ🤘ͦ💸ج패티W𝙇ᵻ👂👃ɜ🎫\uf0a7БУі🚢🚂ગુજરાતીῆ🏃𝓬𝓻𝓴𝓮𝓽𝓼☘﴾̯﴿₽\ue807𝑻𝒆𝒍𝒕𝒉𝒓𝒖𝒂𝒏𝒅𝒔𝒎𝒗𝒊👽😙\u200cЛ‒🎾👹⎌🏒⛸公寓养宠物吗🏄🐀🚑🤷操美𝒑𝒚𝒐𝑴🤙🐒欢迎来到阿拉斯ספ𝙫🐈𝒌𝙊𝙭𝙆𝙋𝙍𝘼𝙅ﷻ🦄巨收赢得白鬼愤怒要买额ẽ🚗🐳𝟏𝐟𝟖𝟑𝟕𝒄𝟗𝐠𝙄𝙃👇锟斤拷𝗢𝟳𝟱𝟬⦁マルハニチロ株式社⛷한국어ㄸㅓ니͜ʖ𝘿𝙔₵𝒩ℯ𝒾𝓁𝒶𝓉𝓇𝓊𝓃𝓈𝓅ℴ𝒻𝒽𝓀𝓌𝒸𝓎𝙏ζ𝙟𝘃𝗺𝟮𝟭𝟯𝟲👋🦊多伦🐽🎻🎹⛓🏹🍷🦆为和中友谊祝贺与其想象对法如直接问用自己猜本传教士没积唯认识基督徒曾经让相信耶稣复活死怪他但当们聊些政治题时候战胜因圣把全堂结婚孩恐惧且栗谓这样还♾🎸🤕🤒⛑🎁批判检讨🏝🦁🙋😶쥐스탱트뤼도석유가격인상이경제황을렵게만들지않록잘관리해야합다캐나에서대마초와화약금의품런성분갈때는반드시허된사용🔫👁凸ὰ💲🗯𝙈Ἄ𝒇𝒈𝒘𝒃𝑬𝑶𝕾𝖙𝖗𝖆𝖎𝖌𝖍𝖕𝖊𝖔𝖑𝖉𝖓𝖐𝖜𝖞𝖚𝖇𝕿𝖘𝖄𝖛𝖒𝖋𝖂𝕴𝖟𝖈𝕸👑🚿💡知彼百\uf005𝙀𝒛𝑲𝑳𝑾𝒋𝟒😦𝙒𝘾𝘽🏐𝘩𝘨ὼṑ𝑱𝑹𝑫𝑵𝑪🇰🇵👾ᓇᒧᔭᐃᐧᐦᑳᐨᓃᓂᑲᐸᑭᑎᓀᐣ🐄🎈🔨🐎🤞🐸💟🎰🌝🛳点击查版🍭𝑥𝑦𝑧NG👣\uf020っ🏉ф💭🎥Ξ🐴👨🤳🦍\x0b🍩𝑯𝒒😗𝟐🏂👳🍗🕉🐲چی𝑮𝗕𝗴🍒ꜥⲣⲏ🐑⏰鉄リ事件ї💊「」\uf203\uf09a\uf222\ue608\uf202\uf099\uf469\ue607\uf410\ue600燻製シ虚偽屁理屈Г𝑩𝑰𝒀𝑺🌤𝗳𝗜𝗙𝗦𝗧🍊ὺἈἡχῖΛ⤏🇳𝒙ψՁմեռայինրւդձ冬至ὀ𝒁🔹🤚🍎𝑷🐂💅𝘬𝘱𝘸𝘷𝘐𝘭𝘓𝘖𝘹𝘲𝘫کΒώ💢ΜΟΝΑΕ🇱♲𝝈↴💒⊘Ȼ🚴🖕🖤🥘📍👈➕🚫🎨🌑🐻𝐎𝐍𝐊𝑭🤖🎎😼🕷grntidufbk𝟰🇴🇭🇻🇲𝗞𝗭𝗘𝗤👼📉🍟🍦🌈🔭《🐊🐍\uf10aლڡ🐦\U0001f92f\U0001f92a🐡💳ἱ🙇𝗸𝗟𝗠𝗷🥜さようなら🔼'
ISOLATE_DICT = {ord(c):f' {c} ' for c in SYMBOLS_TO_ISOLATE}
REMOVE_DICT = {ord(c):f'' for c in SYMBOLS_TO_REMOVE}
CHARS_TO_REMOVE = '!"#$%&()*+,-./:;<=>?@[\\]^_`{|}~\t\n“”’\'∞θ÷α•à−β∅³π‘₹´°£€\×™√²—'
CONTRACTION_MAPPING = {"ain't": "is not", "aren't": "are not","can't": "cannot", "'cause": "because", "could've": "could have", "couldn't": "could not", "didn't": "did not", "doesn't": "does not", "don't": "do not", "hadn't": "had not", "hasn't": "has not", "haven't": "have not", "he'd": "he would","he'll": "he will", "he's": "he is", "how'd": "how did", "how'd'y": "how do you", "how'll": "how will", "how's": "how is", "I'd": "I would", "I'd've": "I would have", "I'll": "I will", "I'll've": "I will have","I'm": "I am", "I've": "I have", "i'd": "i would", "i'd've": "i would have", "i'll": "i will", "i'll've": "i will have","i'm": "i am", "i've": "i have", "isn't": "is not", "it'd": "it would", "it'd've": "it would have", "it'll": "it will", "it'll've": "it will have","it's": "it is", "let's": "let us", "ma'am": "madam", "mayn't": "may not", "might've": "might have","mightn't": "might not","mightn't've": "might not have", "must've": "must have", "mustn't": "must not", "mustn't've": "must not have", "needn't": "need not", "needn't've": "need not have","o'clock": "of the clock", "oughtn't": "ought not", "oughtn't've": "ought not have", "shan't": "shall not", "sha'n't": "shall not", "shan't've": "shall not have", "she'd": "she would", "she'd've": "she would have", "she'll": "she will", "she'll've": "she will have", "she's": "she is", "should've": "should have", "shouldn't": "should not", "shouldn't've": "should not have", "so've": "so have","so's": "so as", "this's": "this is","that'd": "that would", "that'd've": "that would have", "that's": "that is", "there'd": "there would", "there'd've": "there would have", "there's": "there is", "here's": "here is","they'd": "they would", "they'd've": "they would have", "they'll": "they will", "they'll've": "they will have", "they're": "they are", "they've": "they have", "to've": "to have", "wasn't": "was not", "we'd": "we would", "we'd've": "we would have", "we'll": "we will", "we'll've": "we will have", "we're": "we are", "we've": "we have", "weren't": "were not", "what'll": "what will", "what'll've": "what will have", "what're": "what are", "what's": "what is", "what've": "what have", "when's": "when is", "when've": "when have", "where'd": "where did", "where's": "where is", "where've": "where have", "who'll": "who will", "who'll've": "who will have", "who's": "who is", "who've": "who have", "why's": "why is", "why've": "why have", "will've": "will have", "won't": "will not", "won't've": "will not have", "would've": "would have", "wouldn't": "would not", "wouldn't've": "would not have", "y'all": "you all", "y'all'd": "you all would","y'all'd've": "you all would have","y'all're": "you all are","y'all've": "you all have","you'd": "you would", "you'd've": "you would have", "you'll": "you will", "you'll've": "you will have", "you're": "you are", "you've": "you have" }
def handle_punctuation(text):
text = text.translate(REMOVE_DICT)
text = text.translate(ISOLATE_DICT)
return text
def clean_contractions(text, mapping=CONTRACTION_MAPPING):
'''
Expand contractions
'''
specials = ["’", "‘", "´", "`"]
for s in specials:
text = text.replace(s, "'")
text = ' '.join([mapping[t] if t in mapping else t for t in text.split(" ")])
return text
def preprocess(x):
x = handle_punctuation(x)
# x = clean_contractions(x)
return x
#----------------------------------- Embedding -------------------------------------#
def get_coefs(word, *arr):
"""
Get word, word_embedding from a pretrained embedding file
"""
return word, np.asarray(arr,dtype='float32')
def load_embeddings(path):
if path.split('.')[-1] in ['txt','vec']: # for original pretrained embedding files (extension .text, .vec)
with open(path,'rb') as f:
return dict(get_coefs(*line.strip().split(' ')) for line in f)
if path.split('.')[-1] =='pkl': # for pickled pretrained embedding files (extention pkl). Loading pickeled embeddings is faster than texts
with open(path,'rb') as f:
return pickle.load(f)
def build_matrix(word_index, path):
"""
Here we take each word we've tokenized in our text corpus
for each word we look up in the pre-trained embedding.
Each row in this matrix is a corpus word's embedding.
"""
embedding_index = load_embeddings(path)
embedding_matrix = np.zeros((len(word_index)+1, 300))
for word, i in word_index.items():
try:
embedding_matrix[i] = embedding_index[word]
except KeyError:
pass
return embedding_matrix<jupyter_output><empty_output><jupyter_text>## Define LSTM model<jupyter_code>class Attention(Layer):
def __init__(self, step_dim,
W_regularizer=None, b_regularizer=None,
W_constraint=None, b_constraint=None,
bias=True, **kwargs):
self.supports_masking = True
self.init = initializers.get('glorot_uniform')
self.W_regularizer = regularizers.get(W_regularizer)
self.b_regularizer = regularizers.get(b_regularizer)
self.W_constraint = constraints.get(W_constraint)
self.b_constraint = constraints.get(b_constraint)
self.bias = bias
self.step_dim = step_dim
self.features_dim = 0
super(Attention, self).__init__(**kwargs)
def build(self, input_shape):
assert len(input_shape) == 3
self.W = self.add_weight((input_shape[-1],),
initializer=self.init,
name='{}_W'.format(self.name),
regularizer=self.W_regularizer,
constraint=self.W_constraint)
self.features_dim = input_shape[-1]
if self.bias:
self.b = self.add_weight((input_shape[1],),
initializer='zero',
name='{}_b'.format(self.name),
regularizer=self.b_regularizer,
constraint=self.b_constraint)
else:
self.b = None
self.built = True
def compute_mask(self, input, input_mask=None):
return None
def call(self, x, mask=None):
features_dim = self.features_dim
step_dim = self.step_dim
eij = K.reshape(K.dot(K.reshape(x, (-1, features_dim)),
K.reshape(self.W, (features_dim, 1))), (-1, step_dim))
if self.bias:
eij += self.b
eij = K.tanh(eij)
a = K.exp(eij)
if mask is not None:
a *= K.cast(mask, K.floatx())
a /= K.cast(K.sum(a, axis=1, keepdims=True) + K.epsilon(), K.floatx())
a = K.expand_dims(a)
weighted_input = x * a
return K.sum(weighted_input, axis=1)
def compute_output_shape(self, input_shape):
return input_shape[0], self.features_dim
def build_model(embedding_matrix, num_aux_targets):#, loss_weight):
"""
embedding layer
    dropout layer
2 * bidirectional LSTM layers
2 * pooling layers
2 dense layers
    1 sigmoid output layer
"""
words = Input(shape=(MAX_LEN,))
#Embedding layer takes variable size input
x = Embedding(*embedding_matrix.shape, weights = [embedding_matrix], trainable=False)(words)
x = SpatialDropout1D(0.2)(x)
x = Bidirectional(CuDNNLSTM(LSTM_UNITS, return_sequences=True))(x)
x = Bidirectional(CuDNNLSTM(LSTM_UNITS, return_sequences=True))(x)
#att = Attention(MAX_LEN)(x)
hidden = concatenate([
GlobalMaxPooling1D()(x),
GlobalAveragePooling1D()(x)
])
hidden = add([hidden, Dense(DENSE_HIDDEN_UNITS, activation='relu')(hidden)])
hidden = add([hidden, Dense(DENSE_HIDDEN_UNITS, activation='relu')(hidden)])
result = Dense(1, activation='sigmoid')(hidden)
aux_result =Dense(num_aux_targets, activation='sigmoid')(hidden)
model = Model(inputs =words, outputs =[result, aux_result])
model.compile(loss='binary_crossentropy', optimizer='adam')
return model<jupyter_output><empty_output><jupyter_text># Data preprocessing<jupyter_code># Preprocess comment texts
train_df['comment_text'] = train_df['comment_text'].progress_apply(lambda x:preprocess(x))
test_df['comment_text'] = test_df['comment_text'].progress_apply(lambda x:preprocess(x))
gc.collect()
x_train = train_df[TEXT_COLUMN].astype(str)
y_train = train_df[TARGET_COLUMN].values
y_aux_train = train_df[AUX_COLUMNS].values
x_test = test_df[TEXT_COLUMN].astype(str)
# Convert target probability to 1 or 0 so they can be used for classification
for column in IDENTITY_COLUMNS + [TARGET_COLUMN]:
train_df[column] = np.where(train_df[column] >=0.5, True, False)<jupyter_output><empty_output><jupyter_text># Tokenize comment texts<jupyter_code># Return a Keras tokenizer class
tokenizer = text.Tokenizer(filters = CHARS_TO_REMOVE)
tokenizer.fit_on_texts(list(x_train)+ list(x_test))
# Turn text to sequences of tokens
x_train = tokenizer.texts_to_sequences(x_train)
x_test = tokenizer.texts_to_sequences(x_test)
#Pad sequences to the same length
x_train = sequence.pad_sequences(x_train,maxlen=MAX_LEN)
x_test= sequence.pad_sequences(x_test, maxlen=MAX_LEN)
# Initialize weights
sample_weights = np.ones(len(x_train), dtype=np.float32)
# Add all the values of the identities along rows
sample_weights += train_df[IDENTITY_COLUMNS].sum(axis=1)
#Add all values of targets*~identity
sample_weights += train_df[TARGET_COLUMN]*(~train_df[IDENTITY_COLUMNS]).sum(axis=1)
#Add all values ~targets*identity
sample_weights += (~train_df[TARGET_COLUMN])*train_df[IDENTITY_COLUMNS].sum(axis=1)
#Normalize them
sample_weights/=sample_weights.mean()<jupyter_output><empty_output><jupyter_text>## Create embedding matrix<jupyter_code>embedding_matrix = np.concatenate([build_matrix(tokenizer.word_index,f) for f in EMBEDDING_PATHS], axis =-1)
print("Embedding matrix shape:", embedding_matrix.shape)
del train_df, tokenizer
gc.collect()<jupyter_output><empty_output><jupyter_text># Model training
* 2 models will be trained (NUM_MODELS=2)
* Make predictions at the end of each epoch
* Weighted averaging epoch predictions
* Weights = 2 ** epoch<jupyter_code>checkpoint_predictions = []
weights = []
NUM_MODELS = 1
for model_idx in range(NUM_MODELS):
#Passes embedding matrix and aux outputs shape
model = build_model(embedding_matrix, y_aux_train.shape[-1]) #1/sample_weights.mean())
for global_epoch in range(EPOCHS):
model.fit(
x_train,
[y_train, y_aux_train],
batch_size=BATCH_SIZE,
epochs=1,
verbose=1,
sample_weight=[sample_weights.values, np.ones_like(sample_weights)],
callbacks = [
LearningRateScheduler(lambda _: 1e-3*(0.55**global_epoch)) # Decayed learning rate
]
)
# model.save_weights("model_%d_%d.h5" % (model_idx, global_epoch)) # Save model weights
checkpoint_predictions.append(model.predict(x_test, batch_size=2048)[0].flatten())
weights.append(2 ** global_epoch)
del model # If a model didn't get deleted Keras will continue training it even though build_model() was used to initialize a model
gc.collect() # It's a good practice to use gc.collect() once the training is done to free up RAM
print (weights)
predictions = np.average(checkpoint_predictions, weights=weights, axis=0)
lstm_submission = pd.DataFrame.from_dict({
'id': test_df.id,
'prediction': predictions
})
lstm_submission.to_csv('submission.csv', index=False)
submission = pd.DataFrame.from_dict({
'id': test_df['id'],
'prediction': lstm_submission['prediction'].rank(pct=True)*0.4 + bert_submission['prediction'].rank(pct=True)*0.6})
submission.to_csv('submission.csv', index=False)
<jupyter_output><empty_output>
|
no_license
|
/jigsaw-starter-blend.ipynb
|
OmerBgu/jigsaw-unintended-bias-in-toxicity-classification
| 9 |
<jupyter_start><jupyter_text>Table of Contents
1  Code Reuse1.1  Packages and Modules1.2  A bit about packages1.3  Locating Modules2  Custom modules2.1  The vector.py module2.1.1  Vector Add2.1.2  Vector Subtraction2.1.3  Vector Multiplication2.1.4  Vector Division2.1.5  Vector Magnitude2.1.6  Vector Normalization2.2  The Vector Class2.3  Method Definitions2.3.1  The __init__ Method:2.3.2  The __str__ Method:2.3.3  Other special methods:2.3.4  Regular methods of a class2.4  Exercises# Code ReuseIf we had to write every program from scratch, we would not be able to accomplish much. Part of the fun of programming is using something someone else wrote to solve a problem faster. Another fun aspect of programming is writing code that others can reuse in their programs. Whereas functions let us "compose" pieces of code so they can be reused throughout a program, modules provide a means of collecting sets of functions (and, as we will see in this lesson, custom data types) together so they can be used by any number of programs. ## Packages and ModulesA Python module, in a nutshell, is a .py file. A module can contain any Python code we like. All the programs we have written so far were contained in a single .py file, and therefore they are both modules and programs. The main difference is that programs are designed to be run, whereas modules are designed to be
importados e usados por programas. Um pacote é simplesmente um diretório que contém um conjunto de módulos que podem estar inter-relacionados ou não. Os pacotes funcionam como coleções para organizar módulos de forma hierárquica. Quando queremos usar uma função de um módulo ou pacote, precisamos importá-lo. Existem muitas formas diferentes de importar módulos e pacotes. A sintaxe mais básica é:<jupyter_code>import numpy<jupyter_output><empty_output><jupyter_text>após o qual qualquer função em `numpy` pode ser chamada como `numpy.function()`. Você pode mudar internamente o nome do pacote, por exemplo porque ele é longo ou porque este nome conflita com o nome de um objeto seu. No exemplo abaixo nos alteramos o nome de numpy para np, o que é bastante padrão na computação científica:<jupyter_code>import numpy as np<jupyter_output><empty_output><jupyter_text>Todas as funções em `numpy` podem ser chamadas internamente como `np.function()`.
Os pacotes também podem ter subpacotes. Por exemplo, o pacote `numpy` tem um subpacote chamado `random`, que tem um conjunto de funções para lidar com variáveis aleatórias. Se o pacote numpy for importado com `import numpy as np`, as funções no sub-pacote `random` podem ser chamadas como `np.random.function()`.## Um pouco sobre pacotesPython é uma ótima linguagem de programação de propósito geral por conta própria, mas com a ajuda de algums pacotes populares (numpy, scipy, matplotlib) torna-se um ambiente poderoso para a computação científica.
`Numpy`, que significa Numerical Python, é a principal biblioteca usada na computação científica em Python. Ela fornece um objeto de vetores multidimensionais (matrizes) de alto desempenho e ferramentas para trabalhar com esses vetores.Se você precisar apenas de uma função específica, não é necessário importar todo o pacote. Por exemplo, se você quiser usar apenas a função coseno do pacote `numpy`, você pode importá-lo a partir de `numpy` assim:<jupyter_code>from numpy import cos<jupyter_output><empty_output><jupyter_text>após o qual você pode simplesmente chamar a função coseno como `cos()`. Você pode mesmo renomear as funções quando as importa. Por exemplo:<jupyter_code>from numpy import cos as coseno<jupyter_output><empty_output><jupyter_text>após o qual você pode chamar a função `coseno()` para calcular o coseno (eu sei, muito bobo, mas isso pode se tornar útil).Onde devemos inserir as declarações de importação? É prática comum colocar todas as instruções de importação no início dos arquivos .py, após a documentação do módulo. Além disso, recomendamos importar os módulos de biblioteca padrão do Python primeiro, depois módulos de biblioteca de terceiros e, finalmente, nossos próprios módulos.## Localizando MódulosQuando importamos pela primeira vez um módulo, se ele não for uma \textit{builtin}, o Python procura o módulo em cada diretório listado na variável de ambiente PYTHONPATH (`sys.path`) que normalmente inclui a pasta corrente em primeiro lugar. Uma conseqüência disso é que se criarmos um módulo ou programa com o mesmo nome de um dos módulos de biblioteca do Python, o nosso será encontrado primeiro, causando inevitavelmente problemas. Para evitar isso, nunca crie um programa ou módulo com o mesmo nome de um dos diretórios (pacotes) ou módulos de nível superior da biblioteca Python (i.e., não inclui os sub-módulos), a menos que você esteja fornecendo sua própria implementação desse módulo e que o substitua deliberadamente.Nós podemos incluir o diretório ou diretórios de nossos módulos na variável de ambiente através da biblioteca `sys.path` assim: <jupyter_code>import sys
sys.path.append('my_path')
import my_lib<jupyter_output><empty_output><jupyter_text># Custom Modules
So far we have covered several software tools for solving computational problems. The most important of these are the abstraction mechanisms used to simplify designs and control the complexity of solutions. Abstraction mechanisms include functions, modules, objects, and classes. At several points we started from an external view of a resource, showing what it does and how it can be used. For example, to use the `sqrt` function from the *builtin* `math` module, you import it: ``` from math import sqrt```, run ``` help(sqrt) ``` to learn how to use the function correctly, and then include it appropriately in your code. The same procedure is followed for *builtin* data structures such as strings and lists. From a user's point of view, you never had to worry about how a resource is implemented; the beauty and usefulness of an abstraction is that it frees you from worrying about such details. Unfortunately, not every abstraction you need comes as a builtin. Sometimes you have to customize an abstraction to meet the needs of a specialized application, or set of applications, that you are developing. When designing your own abstraction you must take a view different from that of its users and concern yourself with how the resource works internally. The programmer who defines a new function, or builds a new module of resources, is using resources provided by others to create new software components. In this lesson we will show how to design, implement, and test another abstraction mechanism: a class.
Programming languages that allow the programmer to define new classes of objects are called object-oriented languages. These languages also support a programming style called object-oriented programming. Unlike object-based programming, which simply uses ready-made objects and classes within a structure of functions and algorithmic code, object-oriented programming offers mechanisms for conceiving and building entire software systems out of cooperating classes. Like functions, objects are abstractions. A function bundles an algorithm into a single operation that can be called by name. An object bundles a set of data values (its state) and a set of operations (its methods) into a single entity that can be referenced by a name. That makes an object a more complex abstraction than a function. A class definition is like a blueprint for every object of that class. This blueprint contains:
- definitions of all the methods its objects recognize;
- descriptions of the data structures used to hold an object's state, that is, its attributes.
To illustrate these ideas, we will walk through the construction of a custom abstract data type.## The vector.py module
What we want to build is a module representing an abstract data type called a Euclidean vector (also known as a geometric vector), together with operations on it. A vector is defined as an entity that has both magnitude and direction. A vector is typically drawn as an arrow: the direction is indicated by where the arrow points, and the magnitude by the length of the arrow itself. For simplicity we will work with two-dimensional vectors (represented in a plane), although the examples are easily extended to three dimensions. Another way to think of a vector is as the difference between two points. Consider how you might give walking directions from one point to another. Here are some vectors and possible translations of them. We can therefore represent vectors like this: <jupyter_code>vector(3, 4)
vector(2, -1)
vector(-15, 3)<jupyter_output><empty_output><jupyter_text>Before going on to look at the `vector` type and its methods for addition, subtraction, and so on, let us recall the main operations on vectors using the notation found in mathematics and physics books.### Vector Add
Say we have two vectors $\vec{u}$ and $\vec{v}$. Each vector has two components, $x$ and $y$. To add the two vectors, we simply add the two $x$ components and the two $y$ components: $\vec{u} + \vec{v} = (u_x + v_x,\ u_y + v_y)$.### Vector Subtraction
Subtraction works component-wise in the same way: $\vec{u} - \vec{v} = (u_x - v_x,\ u_y - v_y)$.### Vector Multiplication
Multiplying a vector by a scalar $\alpha$ scales each component: $\alpha\,\vec{v} = (\alpha v_x,\ \alpha v_y)$.### Vector Division
Division works like multiplication: we simply replace the multiplication sign ($*$) by the division sign ($/$).### Vector Magnitude
Multiplication and division, as we have just seen, are ways of changing the length of a vector without affecting its direction. Perhaps you are wondering: "OK, so how do I know how long a vector is? I know its components (x and y), but how long (in pixels) is the actual arrow?" Understanding how to compute the length (also known as the magnitude) of a vector is incredibly useful and important. Notice how a vector drawn as an arrow with its two components (x and y) forms a right triangle: the sides are the components and the hypotenuse is the arrow itself. Armed with the Pythagorean theorem, we can compute the magnitude of $v$ as follows: $|v| = \sqrt{x^2 + y^2}$### Vector Normalization
To normalize a vector, we simply divide each component by its magnitude. This is quite intuitive: say a vector has length 5; 5 divided by 5 is 1, so looking at our right triangle we scale the hypotenuse down by dividing it by 5, and in the process both sides shrink, divided by 5 as well. There are many more mathematical operations commonly used with vectors, but for our example we will stop here. Here is our code:<jupyter_code>#!/usr/bin/env python3
"""vector.py: A simple little Vector class. Enabling basic vector math. """
__author__ = "Marcio Pereira"
__license__ = "GPL"
__version__ = "1.0"
class Vector:
"""Represents a Vector."""
def __init__(self, x, y):
"""Constructor creates a Vector with x-axes and y-axes values."""
self.x = x
self.y = y
def __str__(self):
""" Return a string representation of self."""
return "Vector(" + str(self.x) + ", " + str(self.y) + ")"
# for testing: create and use some Vector objects
if __name__ == '__main__':
v1 = Vector(3, 4)
v2 = Vector(2, -1)
v3 = Vector(-15, 3)
print(v1)
print(v2.x)
print(v3.y)<jupyter_output><empty_output><jupyter_text>The structure of this module (and of most others) differs little from that of a program. The first line is the `shebang` line; after it, it is common to have a triple-quoted string giving an overview of the module's contents, often with some usage examples. This is the module's `docstring`. Next come a few comments (typically copyright and license information), followed by the body of the module and, at the end, inside an `if` statement, a part reserved for testing its functionality. Whenever a module is imported, Python creates a variable for it called `__name__` and stores the module's name in it. A module's name is simply the name of its .py file without the extension. So in this example, when the module is imported, `__name__` will hold the value `"vector"`, the `if` condition will not be satisfied, and the test lines at the end of the module will not run. This means those last lines cost practically nothing when the module is imported.
Whenever a .py file is run directly, Python creates the `__name__` variable for the program and sets it to the string `"__main__"`. So if you run `vector.py` as a program, Python sets `__name__` to `"__main__"`, the `if` condition evaluates to `True`, and the module's test lines are executed.## The Vector Class
The syntax of a class definition has two parts: a class header and a set of method definitions that follow it. The class header consists of the class name and, optionally, the name of the "parent" class from which the object "inherits" its characteristics; when none is given, the parent is Python's `object` class. This mechanism is known as class inheritance, but it is outside the scope of this course.
The class name is a Python identifier. Although the names of *builtin* types are not capitalized, Python programmers typically capitalize their class names to distinguish them from variable names.## Method Definitions
All the method definitions are indented below the class header. Since methods resemble functions, the syntax of their definitions is similar. Note, however, that every method definition must include a first parameter named `self`, even if the method takes no arguments when called. When a method is called on an object, the interpreter binds the parameter `self` to that object so that the method's code can refer to the object by name.### The `__init__` Method:
Most classes include a special method called `__init__`. Note that `__init__` must begin and end with two consecutive underscores. This method is also called the class constructor, because it runs automatically when a user instantiates the class. So, when the code fragment:
```
v1 = Vector(3, 4)
```
is executed, Python automatically runs the constructor, that is, the `__init__` method of the Vector class. The constructor's job is to initialize the attributes of an individual object. Besides `self`, the Vector constructor expects two arguments providing the initial values of those attributes. From now on, when we refer to a class's constructor we mean its `__init__` method. An object's attributes are represented as "instance" variables. Each individual object has its own set of instance variables, and these variables serve as the storage for its state. The scope of an instance variable (including `self`) is the entire class definition, so every method of the class can refer to the instance variables. The lifetime of an instance variable is the lifetime of the object itself.
Inside the class definition, the names of instance variables must begin with `self`. Variables declared at class level (i.e., without `self`) are shared by all objects of that type.### The `__str__` Method:
Classes usually include a `__str__` method. This method builds and returns a string representation of the object's state. When the `str` function is called on an object, that object's `__str__` method is invoked automatically to obtain the string describing it. For example, the function call `str(v1)` is equivalent to the method call `v1.__str__()`, and is simpler to write. Note also that Python's `print` function calls the `__str__` method of the class of the object being printed.<jupyter_code>print(v2)<jupyter_output>Vector(2, -1)
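To make the instance-versus-class-variable distinction above concrete, here is a minimal sketch; the `Counter` class is purely illustrative and is not part of vector.py:
```
class Counter:
    total = 0               # class variable: one copy shared by every Counter object

    def __init__(self):
        self.count = 0      # instance variable: each object gets its own copy

    def tick(self):
        self.count += 1     # updates only this object's state
        Counter.total += 1  # updates the shared, class-level counter

a, b = Counter(), Counter()
a.tick(); a.tick(); b.tick()
print(a.count, b.count, Counter.total)  # prints: 2 1 3
```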
<jupyter_text>### Other special methods:We can also define the basic operations of addition, subtraction, multiplication, and division using Python's special (*dunder*) methods: <jupyter_code>class Vector:
"""Represents a Vector."""
def __init__(self, x, y):
"""Constructor creates a Vector with x-axes and y-axes values."""
self.x = x
self.y = y
def __str__(self):
""" Return a string representation of self."""
return "Vector(" + str(self.x) + ", " + str(self.y) + ")"
def __add__(self, other):
""" Return the sum of self and Vector object other."""
return Vector(self.x + other.x, self.y + other.y)
def __sub__(self, other):
""" Return the difference of self and Vector object other."""
return Vector(self.x - other.x, self.y - other.y)
def __mul__(self, alpha):
""" Return the product of self and numeric object alpha."""
return Vector(self.x * alpha, self.y * alpha)
def __rmul__(self, alpha):
""" Return the reverse product of self and numeric object alpha."""
return Vector(self.x * alpha, self.y * alpha)
def __truediv__(self, alpha):
""" Return the division of self and numeric object alpha."""
return Vector(self.x / alpha, self.y / alpha)
# for testing: create and use some Vector objects
if __name__ == '__main__':
v1 = Vector(3, 4)
v2 = Vector(2, -1)
v3 = Vector(-15, 3)
print(v1)
print(v2 + v1)
print(v1 - v3)
print(v3 * 2)
print(4 * v1)
print(v2 / 2)<jupyter_output><empty_output><jupyter_text>### Normal methods of a classThe "normal" methods of a class are called by name. Let us define the vector Magnitude and Normalization operations as ordinary methods, that is, without resorting to the special methods:<jupyter_code>from math import sqrt
class Vector:
def __init__(self, x, y):
"""Constructor creates a Vector with x-axes and y-axes values."""
self.x = x
self.y = y
def __str__(self):
""" Return a string representation of self."""
return "Vector(" + str(self.x) + ", " + str(self.y) + ")"
def __add__(self, other):
""" Return the sum of self and Vector object other."""
return Vector(self.x + other.x, self.y + other.y)
def __sub__(self, other):
""" Return the difference of self and Vector object other."""
return Vector(self.x - other.x, self.y - other.y)
def __mul__(self, alpha):
""" Return the product of self and numeric object alpha."""
return Vector(self.x * alpha, self.y * alpha)
def __rmul__(self, alpha):
""" Return the reverse product of self and numeric object alpha."""
return Vector(self.x * alpha, self.y * alpha)
def __truediv__(self, alpha):
""" Return the division of self and numeric object alpha."""
return Vector(self.x / alpha, self.y / alpha)
def magnitude(self):
""" Return the magnitude of self object."""
return sqrt(self.x ** 2 + self.y ** 2)
def norm(self):
""" Return the self object normalized."""
return self / self.magnitude()
# for testing: create and use some Vector objects
if __name__ == '__main__':
v1 = Vector(3, 4)
v2 = Vector(2, -1)
v3 = Vector(-15, 3)
print(v1.magnitude())
print(v1.norm())<jupyter_output><empty_output>
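A quick worked check of the two methods above: for `v1 = Vector(3, 4)` the magnitude is $\sqrt{3^2 + 4^2} = 5$, so `v1.magnitude()` returns `5.0` and `v1.norm()` returns `Vector(0.6, 0.8)` (each component divided by 5), which is exactly what the two `print` calls display.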
<jupyter_start><jupyter_text># create new fields
adm_df["year_term"] = (
adm_df["ACADEMIC_YEAR"] + "." + adm_df["ACADEMIC_TERM"].str.title()
)
# week_number = (
# lambda r: (r["create_date"].date().isocalendar()[1])
# if (r["create_date"].date() >= date((int(r["ACADEMIC_YEAR"]) - 1), 9, 1))
# else (date((int(r["ACADEMIC_YEAR"]) - 1), 9, 1).isocalendar()[1])
# )
# adm_df["Week_Number"] = adm_df.apply(week_number, axis=1)
print(adm_df.shape)
adm_df.head(3)<jupyter_code># create new fields
adm_df["year_term"] = (
adm_df["ACADEMIC_YEAR"] + "." + adm_df["ACADEMIC_TERM"].str.title()
)
week_number = (
lambda r: (r["create_date"].date().isocalendar()[1])
if (r["create_date"].date() >= date((int(r["ACADEMIC_YEAR"]) - 1), 9, 1))
else (date((int(r["ACADEMIC_YEAR"]) - 1), 9, 1).isocalendar()[1])
)
adm_df["Week_Number"] = adm_df.apply(week_number, axis=1)
print(adm_df.shape)
adm_df.head(3)
# convert ACADEMIC_YEAR to numeric keep numeric-valued records
adm_df["ACADEMIC_YEAR"] = pd.to_numeric(
adm_df["ACADEMIC_YEAR"], errors="coerce", downcast="integer"
)
adm_df = adm_df.loc[adm_df["ACADEMIC_YEAR"].notnull()]
print(adm_df.shape)
adm_df.head(3)
adm_df.loc[(adm_df['PEOPLE_CODE_ID']=='P000024505') & (adm_df['year_term']=='2012.Fall')]<jupyter_output><empty_output><jupyter_text>adm_week_number = (
lambda r: (
int(
(pd.to_datetime(r["create_date"])
- pd.to_datetime(date((int(r["ACADEMIC_YEAR"]) - 1), 9, 1)))
/ np.timedelta64(1,'W')
)
)
if (
pd.to_datetime(r["create_date"]) >= pd.to_datetime(date((int(r["ACADEMIC_YEAR"]) - 1), 9, 1))
)
else 0
)
adm_df["Admissions_Week"] = adm_df.apply(adm_week_number, axis=1)
print(adm_df.shape)
adm_df.head(3)<jupyter_code>adm_week_number = (
lambda r: (
r["Week_Number"]
- (date((int(r["ACADEMIC_YEAR"]) - 1), 9, 1).isocalendar()[1])
+ 1
)
if (
r["Week_Number"] >= (date((int(r["ACADEMIC_YEAR"]) - 1), 9, 1).isocalendar()[1])
)
else (
53
+ r["Week_Number"]
- (date((int(r["ACADEMIC_YEAR"]) - 1), 9, 1).isocalendar()[1])
)
)
adm_df["Admissions_Week"] = adm_df.apply(adm_week_number, axis=1)
print(adm_df.shape)
adm_df.head(3)
adm_df.loc[(adm_df['PEOPLE_CODE_ID']=='P000024505') & (adm_df['year_term']=='2012.Fall')]
adm_keep_values = [
"300",
"ACC",
"ACXL",
"CANC",
"DEF",
"DEFR",
"DENY",
"DPAC",
"TRDP",
"TRPD",
"TRNS",
"WAIT",
"500",
"PEND",
"COMP",
]
adm_keep_cols = ["PEOPLE_CODE_ID", "year_term", "Admissions_Week", "field_value"]
adm_df = adm_df.loc[(adm_df["field_value"].isin(adm_keep_values)), adm_keep_cols]
# admissions status table
admission_status = {
"300": "Applied",
"ACC": "Accepted",
"ACXL": "Canceled",
"CANC": "Canceled",
"DEF": "Canceled",
"DEFR": "Canceled",
"DENY": "Canceled",
"DPAC": "Deposited",
"TRDP": "Deposited",
"TRPD": "Deposited",
"TRNS": "Accepted",
"WAIT": "Accepted",
"500": "Deposited",
"PEND": "Applied",
"COMP": "Applied",
}
adm_stat = pd.DataFrame(
list(admission_status.items()), columns=["field_value", "admission_status"]
)
adm_df1 = (
pd.merge(adm_df, adm_stat, on=["field_value"], how="left")
.drop(["field_value"], axis=1)
.drop_duplicates(
["PEOPLE_CODE_ID", "year_term", "Admissions_Week", "admission_status"]
)
)
print(adm_df1.shape)
adm_df1 = adm_df1.sort_values(
["year_term", "PEOPLE_CODE_ID", "admission_status", "Admissions_Week"]
).drop_duplicates(["year_term", "PEOPLE_CODE_ID", "admission_status"], keep="first")
print(adm_df1.shape)
adm_df1.loc[(adm_df1['PEOPLE_CODE_ID']=='P000024505')]
e = adm_df1.pivot_table(
index=["year_term", "PEOPLE_CODE_ID"],
columns=["admission_status"],
values=["Admissions_Week"],
)
e = e.fillna(np.int(54))
print(e.shape)
e.head()
e.info()
e.loc[('2012.Fall', 'P000024505')]
# function returns status for week
def f_status(field, data_frame, n):
f_week = (
lambda df: 1
if (
(df[("Admissions_Week", field)] <= n)
& (df[("Admissions_Week", "Canceled")] > n)
)
else 0
)
return data_frame.apply(f_week, axis=1)
# function returns DataFrame of 53 week status values
def fill_weeks(field, data_frame):
weeks = range(0, 54)
r = pd.DataFrame(
np.zeros((data_frame.shape[0], 54)),
index=data_frame.index,
columns=[f"{w:02d}" for w in weeks],
)
for w in weeks:
f = f"{w:02d}"
r.loc[:, f] = f_status(field, data_frame, w)
r.loc[:, "stage"] = field
r = r.reset_index().set_index(["year_term", "stage", "PEOPLE_CODE_ID"])
return r
stage_list = ["Applied", "Accepted", "Deposited"]
w = pd.DataFrame()
for stg in stage_list:
w = pd.concat([w, fill_weeks(stg, e)])
print(w.shape)
w.head()
w.info()
w.loc[('2018.Fall', 'Accepted',)]
# add CURRICULUM field
sql_str = (
"SELECT PEOPLE_CODE_ID, ACADEMIC_YEAR, ACADEMIC_TERM, "
+ "ACADEMIC_SESSION, CURRICULUM, PRIMARY_FLAG "
+ "FROM ACADEMIC WHERE "
+ "PRIMARY_FLAG = 'Y' AND "
+ f"ACADEMIC_YEAR >= '{begin_year}' "
)
curriculum_df = pd.read_sql_query(sql_str, connection)
curriculum_df["year_term"] = (
curriculum_df["ACADEMIC_YEAR"] + "." + curriculum_df["ACADEMIC_TERM"].str.title()
)
curriculum_df = curriculum_df.rename(columns={"CURRICULUM": "curriculum"})
curr_flds = ["PEOPLE_CODE_ID", "year_term", "curriculum"]
curriculum_df = curriculum_df[curr_flds]
curriculum_df = curriculum_df.drop_duplicates(curr_flds)
y = pd.merge(
w.reset_index(), curriculum_df, on=["year_term", "PEOPLE_CODE_ID"], how="left"
)
print(y.shape)
y.head()
y.to_hdf(data_store, key="weekly", mode="w", data_columns=True, complevel=0)
<jupyter_output>(45015, 58)
<jupyter_start><jupyter_text>### A Fredkin gate can be obtained with very good fidelity, $\simeq 99.999\%$, starting with all interactions on, and only $\sigma_z$ operators as self-interactions.<jupyter_code>net = QubitNetwork(
num_qubits=4,
# interactions=('all', ['xy', 'xx', 'yy', 'zz']),
interactions='all',
self_interactions=('all', ['z']),
system_qubits=[0, 1, 2]
# J=new_Jvalues
)
sgd_optimization(
net=net,
learning_rate=1,
n_epochs=1000,
batch_size=10,
target_gate=qutip.fredkin(),
training_dataset_size=100,
test_dataset_size=1000,
decay_rate=.01,
saveafter_file='nets/fredkin_best.pickle'
)<jupyter_output>Generating training data...
Building the model...
Let's roll!
<jupyter_text>## Failing to implement 3-qubit QFT with 1 ancilla<jupyter_code>net = QubitNetwork(
num_qubits=4,
interactions='all',
system_qubits=[0, 1, 2]
)
sgd_optimization(
net=net,
learning_rate=1,
n_epochs=400,
batch_size=2,
target_gate=qutip.qip.algorithms.qft.qft(3),
training_dataset_size=100,
test_dataset_size=1000,
decay_rate=.01,
saveafter_file='nets/qft.pickle'
)<jupyter_output>Generating training data...
Building the model...
Let's roll!
<jupyter_text>## Success in implementing Hadamard gate with 3 qubits + 1 ancilla<jupyter_code>net = QubitNetwork(
num_qubits=4,
interactions='all',
system_qubits=3
)
sgd_optimization(
net=net,
learning_rate=1,
n_epochs=1000,
batch_size=2,
target_gate=qutip.hadamard_transform(3),
training_dataset_size=100,
test_dataset_size=1000,
decay_rate=.01,
saveafter_file='nets/hadamard.pickle'
)
Jvalues = net.J.get_value()
vanishing_indices = np.where(np.abs(Jvalues) < 1e-4)[0]
[net.J_index_to_interaction(v) for v in vanishing_indices]<jupyter_output><empty_output><jupyter_text>### Unitaries that are tensor product of 1-qubit gates are reachable (of course)<jupyter_code>rand_U = qutip.tensor([qutip.rand_unitary_haar(2) for _ in range(3)])
net = QubitNetwork(
num_qubits=4,
interactions='all',
system_qubits=3
)
sgd_optimization(
net=net,
learning_rate=1,
n_epochs=1000,
batch_size=2,
target_gate=rand_U,
training_dataset_size=100,
test_dataset_size=1000,
decay_rate=.01
# saveafter_file='nets/hadamard.pickle'
)
p00 = qutip.ket2dm(qutip.basis(2, 0))
p11 = qutip.ket2dm(qutip.basis(2, 1))
u1 = qutip.rand_unitary_haar(4)
u2 = qutip.rand_unitary_haar(4)
u1.dims = u2.dims = [[2, 2]] * 2
U = qutip.tensor(p00, u1) + qutip.tensor(p11, u2)
U
net = QubitNetwork(
num_qubits=4,
interactions='all',
system_qubits=3
)
sgd_optimization(
net=net,
learning_rate=1,
n_epochs=1000,
batch_size=2,
target_gate=U,
training_dataset_size=100,
test_dataset_size=1000,
decay_rate=.01,
saveafter_file='nets/controlled_stuff.pickle'
)
U = qutip.tensor(qutip.sigmax(), qutip.sigmax(), qutip.qeye(2))
net = QubitNetwork(
num_qubits=3,
system_qubits=3,
interactions=('all', ['xx', 'zz', 'z']),
)
sgd_optimization(
net=net,
learning_rate=1,
n_epochs=1000,
batch_size=2,
target_gate=U,
training_dataset_size=100,
test_dataset_size=1000,
decay_rate=.01
)
gate = qutip.Qobj(net.get_current_gate())
v = net.J.get_value()
display(v)
[net.J_index_to_interaction(i) for i in np.where(np.abs(v) > 1)[0]]
U = qutip.tensor(qutip.sigmax(), qutip.sigmax())
net = QubitNetwork(
num_qubits=2,
system_qubits=2,
interactions=('all', ['xx', 'yy', 'zz', 'z']),
)
sgd_optimization(
net=net,
learning_rate=1,
n_epochs=1000,
batch_size=2,
target_gate=U,
training_dataset_size=100,
test_dataset_size=1000,
decay_rate=.01
)
net.J.get_value()
p0 = qutip.Qobj([[1, 0], [0, 0]])
p1 = qutip.Qobj([[0, 0], [0, 1]])
i2 = qutip.qeye(2)
u2 = qutip.rand_unitary_haar(2)
u2p = qutip.rand_unitary_haar(2)
U = (qutip.tensor(p0, p0, i2) +
qutip.tensor(p0, p1, i2) +
qutip.tensor(p1, p0, i2) +
qutip.tensor(p1, p1, u2))
net = QubitNetwork(
num_qubits=4,
system_qubits=3,
interactions='all',
ancillae_state=qutip.basis(2, 0)
)
sgd_optimization(
net=net,
learning_rate=1,
n_epochs=1000,
batch_size=2,
target_gate=U,
training_dataset_size=100,
test_dataset_size=1000,
decay_rate=.01
)
p0 = qutip.Qobj([[1, 0], [0, 0]])
p1 = qutip.Qobj([[0, 0], [0, 1]])
i2 = qutip.qeye(2)
u2 = qutip.rand_unitary_haar(2)
u2p = qutip.rand_unitary_haar(2)
U = (qutip.tensor(p0, p0, u2p) +
qutip.tensor(p0, p1, i2) +
qutip.tensor(p1, p0, i2) +
qutip.tensor(p1, p1, u2))
net = QubitNetwork(
num_qubits=4,
system_qubits=3,
interactions='all',
ancillae_state=qutip.basis(2, 0)
)
sgd_optimization(
net=net,
learning_rate=1,
n_epochs=1000,
batch_size=2,
target_gate=U,
training_dataset_size=100,
test_dataset_size=1000,
decay_rate=.01
)
p0 = qutip.Qobj([[1, 0], [0, 0]])
p1 = qutip.Qobj([[0, 0], [0, 1]])
i2 = qutip.qeye(2)
u2 = qutip.rand_unitary_haar(2)
u2p = qutip.rand_unitary_haar(2)
u2pp = qutip.rand_unitary_haar(2)
u2ppp = qutip.rand_unitary_haar(2)
U = (qutip.tensor(p0, p0, u2p) +
qutip.tensor(p0, p1, u2pp) +
qutip.tensor(p1, p0, u2ppp) +
qutip.tensor(p1, p1, u2))
net = QubitNetwork(
num_qubits=4,
system_qubits=3,
interactions='all',
ancillae_state=qutip.basis(2, 0)
)
sgd_optimization(
net=net,
learning_rate=1,
n_epochs=1000,
batch_size=2,
target_gate=U,
training_dataset_size=100,
test_dataset_size=1000,
decay_rate=.01
)
p0 = qutip.Qobj([[1, 0], [0, 0]])
p1 = qutip.Qobj([[0, 0], [0, 1]])
i2 = qutip.qeye(2)
u2 = qutip.rand_unitary_haar(2)
u2p = qutip.rand_unitary_haar(2)
u2pp = qutip.rand_unitary_haar(2)
u2ppp = qutip.rand_unitary_haar(2)
U = (qutip.tensor(p0, p0, u2p) +
qutip.tensor(p0, p1, i2) +
qutip.tensor(p1, p0, i2) +
qutip.tensor(p1, p1, u2))
net = QubitNetwork(
num_qubits=4,
system_qubits=3,
interactions='all',
ancillae_state=qutip.basis(2, 0)
)
sgd_optimization(
net=net,
learning_rate=1,
n_epochs=1000,
batch_size=2,
target_gate=U,
training_dataset_size=100,
test_dataset_size=1000,
decay_rate=.01
)
np.savetxt('matrices/CCrandomrandom_4q1a_all_all.txt', net.get_current_gate(), delimiter=',')
interactions = [((0, 1), 'zz'), ((0, 2), 'zz'), ((1, 2), 'zz'), ((1, 3), 'zz'), ((2, 3), 'zz'),
((0, 3), 'xx'),
(0, 'z'), (1, 'z'), (2, 'z'), (3, 'z'),
(0, 'x'), (1, 'x'), (2, 'x'), (3, 'x'),
(0, 'y'), (1, 'y'), (2, 'y'), (3, 'y')]
eta = 0.8182
xi = 0.0587
initial_ancilla = qutip.Qobj([[np.cos(eta)], [np.sin(eta)]])
net = QubitNetwork(
num_qubits=4,
system_qubits=3,
interactions=interactions,
ancillae_state=initial_ancilla
)
net = sgd_optimization(
net=net,
learning_rate=1,
n_epochs=1000,
batch_size=5,
target_gate=qutip.toffoli(),
training_dataset_size=100,
test_dataset_size=1000,
decay_rate=.01
)
interactions = [((0, 1), 'zz'), ((0, 2), 'zz'), ((1, 2), 'zz'), ((1, 3), 'zz'), ((2, 3), 'zz'),
((0, 3), 'xx'),
(0, 'z'), (1, 'z'), (2, 'z'), (3, 'z'),
(0, 'x'), (1, 'x'), (2, 'x'), (3, 'x')]
eta = 0.8182
xi = 0.0587
initial_ancilla = qutip.Qobj([[np.cos(eta)], [np.sin(eta)]])
net1 = QubitNetwork(
num_qubits=4,
system_qubits=3,
interactions=interactions,
ancillae_state=initial_ancilla
)
net1 = sgd_optimization(
net=net1,
learning_rate=1,
n_epochs=1000,
batch_size=5,
target_gate=qutip.toffoli(),
training_dataset_size=100,
test_dataset_size=1000,
decay_rate=.01
)
interactions = [((0, 1), 'zz'), ((0, 2), 'zz'), ((1, 2), 'zz'), ((1, 3), 'zz'), ((2, 3), 'zz'),
((0, 3), 'xx'),
(0, 'z'), (1, 'z'), (2, 'z'), (3, 'z'),
(0, 'x'), (1, 'x'), (2, 'x'), (3, 'x')]
eta = 0.8182
xi = 0.0587
initial_ancilla = qutip.Qobj([[np.cos(eta)], [np.sin(eta)]])
net2 = QubitNetwork(
num_qubits=4,
system_qubits=3,
interactions=interactions,
ancillae_state=initial_ancilla
)
net2 = sgd_optimization(
net=net2,
learning_rate=1,
n_epochs=1000,
batch_size=5,
target_gate=qutip.toffoli(),
training_dataset_size=100,
test_dataset_size=1000,
decay_rate=.01
)
interactions = [((0, 1), 'zz'), ((0, 2), 'zz'), ((1, 2), 'zz'), ((1, 3), 'zz'), ((2, 3), 'zz'),
((0, 3), 'xx'),
(0, 'z'), (1, 'z'), (2, 'z'), (3, 'z'),
(0, 'x'), (1, 'x'), (2, 'x'), (3, 'x')]
eta = 0.8182
xi = 0.0587
initial_ancilla = qutip.Qobj([[np.cos(eta)], [np.sin(eta)]])
net3 = QubitNetwork(
num_qubits=4,
system_qubits=3,
interactions=interactions,
ancillae_state=initial_ancilla
)
net3 = sgd_optimization(
net=net3,
learning_rate=1,
n_epochs=1000,
batch_size=5,
target_gate=qutip.toffoli(),
training_dataset_size=100,
test_dataset_size=1000,
decay_rate=.01
)
import scipy.sparse
import scipy.linalg as la
import sympy
import theano
import theano.tensor.slinalg
import theano.tensor as T
sympy.init_printing()
n = 4
A = np.zeros(shape=(n, n), dtype=np.float)
A += scipy.sparse.rand(4, 4, density=.8)
A = np.asarray(A)
# numerical differentiation
x = 2.
h = 0.001
numerical_diff = (la.expm((x + h) * A) - la.expm(x * A)) / h
# automatic differentiation with theano
x = T.dscalar('x')
expA = T.slinalg.expm(x * A)
expA_flat = T.flatten(expA)
def compute_element_grad(idx, flattened_matrix):
return T.grad(flattened_matrix[idx], wrt=x)
g_x_flat, _ = theano.scan(
fn=compute_element_grad,
sequences=T.arange(expA_flat.shape[0]),
non_sequences=[expA_flat]
)
# deflatten result
g_x = T.reshape(g_x_flat, newshape=expA.shape)
# here is where the computational graph is actually compiled
gradient = theano.function(inputs=[x], outputs=g_x)
# compute and display gradient computed with AD
display(gradient(2.))
# display gradient computed with numerical differentiation
display(numerical_diff)<jupyter_output><empty_output>
<jupyter_start><jupyter_text># Assignment
Q1. Write the NumPy program to create an array of ones and an array
of zeros?
Expected Output-
Create an array of zeros
Default type is float
[[ 0. 0.]]
Type changes to int
[[0 0]]
Create an array of ones
Default type is float
[[ 1. 1.]]
Type changes to int
[[1 1]]<jupyter_code>import numpy as np
print("Create an array of zeros")
zeros_array = np.zeros((1,2))
print("Default type is float")
print(zeros_array)
print("Type changes to int")
zeros_array = np.zeros((1,2), dtype = np.int)
print(zeros_array)
print("\nCreate an array of ones")
ones_array= np.ones((1,2))
print("Default type is float")
print(ones_array)
print("Type changes to int")
ones_array = np.ones((1,2), dtype = np.int)
print(ones_array)<jupyter_output>Create an array of zeros
Default type is float
[[0. 0.]]
Type changes to int
[[0 0]]
Create an array of ones
Default type is float
[[1. 1.]]
Type changes to int
[[1 1]]
<jupyter_text>Q2. Write the NumPy program to change the dimension of an array?
Expected Output-
6 rows and 0 columns
(6,)
(3, 3) -> 3 rows and 3 columns
[[1 2 3]
[4 5 6]
[7 8 9]]
Change array shape to (3, 3) -> 3 rows and 3 columns
[[1 2 3]
[4 5 6]
[7 8 9]]<jupyter_code>import numpy as np
array = np.array([1, 2, 3, 4, 5, 6])
print("6 rows and 0 columns")
print(array.shape)
ndarray = np.array([[1, 2, 3],[4, 5, 6],[7,8,9]])
print("\n(3, 3) -> 3 rows and 3 columns ")
print(ndarray)
array = np.array([1,2,3,4,5,6,7,8,9])
print("\nChange array shape from (6,) to (3, 3) -> 3 rows and 3 columns ")
array.shape = (3, 3)
print(array)<jupyter_output>6 rows and 0 columns
(6,)
(3, 3) -> 3 rows and 3 columns
[[1 2 3]
[4 5 6]
[7 8 9]]
Change array shape from (6,) to (3, 3) -> 3 rows and 3 columns
[[1 2 3]
[4 5 6]
[7 8 9]]
<jupyter_text>Q3. Write the NumPy program to create a new shape to an array
without changing its data ?
Reshape 3x2-
[[1 2]
[3 4]
[5 6]]
Reshape 2x3-
[[1 2 3]
[4 5 6]]<jupyter_code>import numpy as np
x = np.array([1, 2, 3, 4, 5, 6])
y = np.reshape(x,(3,2))
print("Reshape 3x2 -")
print(y)
z = np.reshape(x,(2,3))
print("\nReshape 2x3 -")
print(z)<jupyter_output>Reshape 3x2 -
[[1 2]
[3 4]
[5 6]]
Reshape 2x3 -
[[1 2 3]
[4 5 6]]
<jupyter_text>Q4. Write the NumPy program to create a new array of 3*5, filled with
2?
Expected Output-
[[2 2 2 2 2]
[2 2 2 2 2]
[2 2 2 2 2]]
[[2 2 2 2 2]
[2 2 2 2 2]
[2 2 2 2 2]]<jupyter_code>import numpy as np
#Method-1 using np.full
print("Method-1 using np.full")
full_array = np.full((3, 5), 2, dtype=np.uint)
print(full_array)
#Method-2 using np.ones
print("\nMethod-2 using np.ones")
ones_array = np.ones([3, 5], dtype=np.uint) *2
print(ones_array)<jupyter_output>Method-1 using np.full
[[2 2 2 2 2]
[2 2 2 2 2]
[2 2 2 2 2]]
Method-2 using np.ones
[[2 2 2 2 2]
[2 2 2 2 2]
[2 2 2 2 2]]
<jupyter_text>Q5. Write the NumPy program to create a 3-D array with ones on a
diagonal and zeros elsewhere?
Expected Output-
[[ 1. 0. 0.]
[ 0. 1. 0.]
[ 0. 0. 1.]]<jupyter_code>import numpy as np
nd_array = np.eye(3)
print(nd_array)<jupyter_output>[[1. 0. 0.]
[0. 1. 0.]
[0. 0. 1.]]
<jupyter_text>Q6. Write the NumPy program to split an array of 14 elements into the
3 arrays and each of which has 2, 4, and 8 elements in original
order?
Expected Output-
Original array- [ 1 2 3 4 5 6 7 8 9 10 11 12 13 14]
After splitting-
[array([1, 2]), array([3, 4, 5, 6]), array([ 7, 8, 9, 10, 11, 12, 13, 14])]<jupyter_code>import numpy as np
array = np.arange(1, 15)
print("Original array- ", array)
print("\nAfter splitting- ")
print(np.split(array, [2, 6]))<jupyter_output>Original array- [ 1 2 3 4 5 6 7 8 9 10 11 12 13 14]
After splitting-
[array([1, 2]), array([3, 4, 5, 6]), array([ 7, 8, 9, 10, 11, 12, 13, 14])]
<jupyter_text>Q7. Write the NumPy program to split of an array of shape 4x4 it into
two arrays along the second axis ?
Sample array -
[[ 0 1 2 3]
[ 4 5 6 7]
[ 8 9 10 11]
[12 13 14 15]]
Expected Output-
[array([[ 0, 1],
[ 4, 5],
[ 8, 9],
[12, 13]]), array([[ 2, 3],
[ 6, 7],
[10, 11],
[14, 15]]), array([], shape=(4, 0), dtype=int64)]<jupyter_code>import numpy as np
array = np.arange(16).reshape((4, 4))
print("Original array:\n", array)
''' Default dtype allotted is 32 bit '''
print("\nOutput after splitting horizontally:")
print(np.hsplit(array, [2, 6]))<jupyter_output>Original array:
[[ 0 1 2 3]
[ 4 5 6 7]
[ 8 9 10 11]
[12 13 14 15]]
Output after splitting horizontally:
[array([[ 0, 1],
[ 4, 5],
[ 8, 9],
[12, 13]]), array([[ 2, 3],
[ 6, 7],
[10, 11],
[14, 15]]), array([], shape=(4, 0), dtype=int32)]
<jupyter_text>Q8. Write the NumPy program to create a 5x5 matrix with row values
ranging from 0 to 4?
Original array-
[[ 0. 0. 0. 0. 0.]
[ 0. 0. 0. 0. 0.]
[ 0. 0. 0. 0. 0.]
[ 0. 0. 0. 0. 0.]
[ 0. 0. 0. 0. 0.]]
Row values ranging from 0 to 4.
[[ 0. 1. 2. 3. 4.]
[ 0. 1. 2. 3. 4.]
[ 0. 1. 2. 3. 4.]
[ 0. 1. 2. 3. 4.]
[ 0. 1. 2. 3. 4.]]<jupyter_code>import numpy as np
array = np.zeros((5,5))
print("Original array -")
print(array)
print("\nRow values ranging from 0 to 4.")
array += np.arange(5)
print(array)<jupyter_output>Original array -
[[0. 0. 0. 0. 0.]
[0. 0. 0. 0. 0.]
[0. 0. 0. 0. 0.]
[0. 0. 0. 0. 0.]
[0. 0. 0. 0. 0.]]
Row values ranging from 0 to 4.
[[0. 1. 2. 3. 4.]
[0. 1. 2. 3. 4.]
[0. 1. 2. 3. 4.]
[0. 1. 2. 3. 4.]
[0. 1. 2. 3. 4.]]
<jupyter_text>Q9. Write the NumPy program to create an array of zeros and three
column types (integer, float, character)?
Expected Output-
[(1, 2., b'Albert Einstein') (2, 2., b'Edmond Halley')
(3, 3., b'Gertrude B. Elion')]<jupyter_code>import numpy as np
# dtype 'a20' chosen because every name has fewer than 20 characters
data = np.zeros((3,), dtype=('i4,f4,a20'))
new_data = [(1, 2., "Albert Einstein"), (2, 2., "Edmond Halley"), (3, 3., "Gertrude B. Elion")]
data[:] = new_data
print("Output - \n", data)<jupyter_output>Output -
[(1, 2., b'Albert Einstein') (2, 2., b'Edmond Halley')
(3, 3., b'Gertrude B. Elion')]
<jupyter_text>Q10. Write the NumPy program to remove the negative values in the
numpy array with 0?
Expected OutputOriginal array:
[-1 -4 0 2 3 4 5 -6]
Replace the negative values of the said array with 0-
[0 0 0 2 3 4 5 0]<jupyter_code>import numpy as np
array = np.array([-1, -4, 0, 2, 3, 4, 5, -6])
print("Original array:")
print(array)
print("\nReplace the negative values of the said array with 0 -")
array[array < 0] = 0
print(array)<jupyter_output>Original array:
[-1 -4 0 2 3 4 5 -6]
Replace the negative values of the said array with 0 -
[0 0 0 2 3 4 5 0]
<jupyter_text>Q11. Write the NumPy program to compute the histogram of a set of
data?<jupyter_code>import numpy as np
import matplotlib.pyplot as plt
plt.hist([0, 1, 1, 2, 2, 2, 3, 3, 4], bins= range(6))
plt.show()<jupyter_output><empty_output><jupyter_text>Q12. Write the NumPy program to compute the line graph of a set of
data?<jupyter_code>import sys
import numpy as np
import matplotlib.pyplot as plt
arr = np.random.randint(20, 80, 30)
y, x = np.histogram(arr, bins=np.arange(101))
fig, ax = plt.subplots()
ax.plot(x[:-1], y)
fig.savefig('D:/Basha/New folder/ax.jpg')<jupyter_output><empty_output><jupyter_text>Q13. Write the NumPy program to extracts all the elements from second
row from given (4x4) array?
Sample OutputOriginal array-
[[ 0 1 2 3]
[ 4 5 6 7]
[ 8 9 10 11]
[12 13 14 15]]
Extracted data- Second row
[4 5 6 7]<jupyter_code>import numpy as np
arra_data = np.arange(0,16).reshape((4, 4))
print("Original array -")
print(arra_data)
print("\nExtracted data - Second row")
print(arra_data[1,:])<jupyter_output>Original array -
[[ 0 1 2 3]
[ 4 5 6 7]
[ 8 9 10 11]
[12 13 14 15]]
Extracted data - Second row
[4 5 6 7]
<jupyter_text>Q14. Write the NumPy program to extract first element of the second
row and fourth element of fourth row from a given (4x4) array?
Sample Output-
Original array-
[[ 0 1 2 3]
[ 4 5 6 7]
[ 8 9 10 11]
[12 13 14 15]]
Extracted data- First element of the second row and fourth element of fourth row
[ 4 15]
<jupyter_code>import numpy as np
arra_data = np.arange(0,16).reshape((4, 4))
print("Original array -")
print(arra_data)
print("\nExtracted data - First element of the second row and fourth element of fourth row ")
print(arra_data[[1,3],[0,3]])<jupyter_output>Original array -
[[ 0 1 2 3]
[ 4 5 6 7]
[ 8 9 10 11]
[12 13 14 15]]
Extracted data - First element of the second row and fourth element of fourth row
[ 4 15]
<jupyter_text>Q15. Write the NumPy program to add two arrays A and B of sizes (3,3)
and (,3)?
Sample Output-
Original array-
Array-1
[[1 1 1]
[1 1 1]
[1 1 1]]
Array-2
[0 1 2]
A + B:
[[1 2 3]
[1 2 3]
[1 2 3]]<jupyter_code>import numpy as np
A = np.ones((3,3),dtype=np.int)
B = np.arange(3)
print("Original array:")
print("\nArray-1")
print(A)
print("\nArray-2")
print(B)
print("\nAddition of two arrays A + B:")
new_array = A + B
print(new_array)<jupyter_output>Original array:
Array-1
[[1 1 1]
[1 1 1]
[1 1 1]]
Array-2
[0 1 2]
Addition of two arrays A + B:
[[1 2 3]
[1 2 3]
[1 2 3]]
<jupyter_text>Q16. Write the NumPy program to copy data from a given array to
another array?
Sample Output-
Original array-
[24 27 30 29 18 14]
Copy of the said array-
[24 27 30 29 18 14]<jupyter_code>import numpy as np
array1 = np.array([24, 27, 30, 29, 18, 14])
print("Original array:")
print(array1)
copy_array1 = np.empty_like (array1)
copy_array1[:] = array1
print("\nCopy of the said array:")
print(copy_array1)<jupyter_output>Original array:
[24 27 30 29 18 14]
Copy of the said array:
[24 27 30 29 18 14]
<jupyter_text>Q17. Write the NumPy program to calculate the sum of all columns of
the 2D numpy array?
Sample Output-
Original array-
[[ 0 1 2 3 4 5 6 7 8]
[ 9 10 11 12 13 14 15 16 17]
[18 19 20 21 22 23 24 25 26]
[27 28 29 30 31 32 33 34 35]]
Sum of all columns-
[54 58 62 66 70 74 78 82 86]<jupyter_code>import numpy as np
num = np.arange(36)
array1 = np.reshape(num, [4, 9])
print("Original array -")
print(array1)
result = array1.sum(axis=0)
print("\nSum of all columns -")
print(result)<jupyter_output>Original array -
[[ 0 1 2 3 4 5 6 7 8]
[ 9 10 11 12 13 14 15 16 17]
[18 19 20 21 22 23 24 25 26]
[27 28 29 30 31 32 33 34 35]]
Sum of all columns -
[54 58 62 66 70 74 78 82 86]
<jupyter_text>Q18. Write the NumPy program to calculate averages without NaNs
along the given array?
Sample Output-
Original array-
[[10. 20. 30.]
[40. 50. nan]
[nan 6. nan]
[nan nan nan]]
Averages without NaNs along the said array-
[20. 45. 6. nan]<jupyter_code>import numpy as np
array1 = np.array([[10, 20 ,30], [40, 50, np.nan], [np.nan, 6, np.nan], [np.nan, np.nan, np.nan]])
print("Original array -")
print(array1)
temp = np.ma.masked_array(array1,np.isnan(array1))
#masked nan values to compute an average of the remaining numbers
result = np.mean(temp, axis=1)
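# Equivalent shortcut: np.nanmean(array1, axis=1) ignores NaNs directly, but it
# emits a RuntimeWarning and returns nan for the all-NaN last row.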
print("\nAverages without NaNs along the said array -")
print(result.filled(np.nan))<jupyter_output>Original array -
[[10. 20. 30.]
[40. 50. nan]
[nan 6. nan]
[nan nan nan]]
Averages without NaNs along the said array -
[20. 45. 6. nan]
<jupyter_start><jupyter_text>### Creating a database with SQLite
First we are going to create a database<jupyter_code>import sqlite3
conn = sqlite3.connect('todo.db')
conn.execute("CREATE TABLE todo (id INTEGER PRIMARY KEY, task char(100) NOT NULL, status boll NOT NULL)")
conn.execute("INSERT INTO todo (task, status)VALUES('Crear hola mundo en bottlepy', 1)")
conn.execute("INSERT INTO todo (task, status)VALUES('Documentar su creacion en jupyter',1)")
conn.execute("INSERT INTO todo (task, status)VALUES('Crar un proyecto nuevo de una lista ToDO',1)")
conn.execute("INSERT INTO todo (task, status)VALUES('Documentar creacion de ToDo en jupyter',0)")
conn.commit()
import sqlite3
from bottle import Bottle, run
app= Bottle()
@app.route('/todo')
@app.route('/')
def todo_list():
connection = sqlite3.connect('todo.db')
cursor = connection.cursor()
cursor.execute("SELECT id, task FROM todo WHERE status LIKE '1'" )
result = cursor.fetchall()
return str(result)
run(app, host='0.0.0.0', port=5000, debug=True)
import sqlite3
from bottle import Bottle, run, template
app= Bottle()
@app.route('/todo')
@app.route('/')
def todo_list():
connection = sqlite3.connect('todo.db')
cursor = connection.cursor()
cursor.execute("SELECT id, task FROM todo WHERE status LIKE '1'" )
result = cursor.fetchall()
output = template('make_table', rows=result)
return output
run(app, host='0.0.0.0', port=5000, debug=True)
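# --- Hypothetical helper cell (not in the original notebook) -------------------
# The routes above render the templates 'make_table' and 'new_task.tpl', but those
# files are never shown in this notebook. The minimal versions below are an
# assumption about their contents, written out so the app can actually run; the
# form follows the final GET-based '/new' handler that appears further down.
make_table_tpl = """<p>The open items are as follows:</p>
<table border="1">
%for row in rows:
  <tr>
  %for col in row:
    <td>{{col}}</td>
  %end
  </tr>
%end
</table>"""

new_task_tpl = """<p>Add a new task to the ToDo list:</p>
<form action="/new" method="GET">
  <input type="text" size="100" maxlength="100" name="task">
  <input type="submit" name="save" value="save">
</form>"""

with open('make_table.tpl', 'w') as f:
    f.write(make_table_tpl)
with open('new_task.tpl', 'w') as f:
    f.write(new_task_tpl)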
import sqlite3
from bottle import Bottle, run, template, request
app= Bottle()
@app.route('/todo')
@app.route('/')
def todo_list():
connection = sqlite3.connect('todo.db')
cursor = connection.cursor()
cursor.execute("SELECT id, task FROM todo WHERE status LIKE '0'" )
result = cursor.fetchall()
output = template('make_table', rows=result)
return output
@app.route('/new', method='GET')
def new_item():
return template('new_task.tpl')
@app.route('/new', method='POST')
def new_item_do():
new_item = request.POST.task.strip()
connection = sqlite3.connect('todo.db')
cursor = connection.cursor()
cursor.execute("INSERT INTO todo (task,status)VALUES (?,?)", (new_item, 0))
new_id = cursor.lastrowid
connection.commit()
cursor.close()
return f"""
<meta HTTP-EQUIV="REFRESH" content="5; url={request.urlparts[0]+'://'+request.urlparts[1]}/">
<p>The new task was insert into the database, the ID is {new_id}</p>"""
run(app, host='0.0.0.0', port=5000, debug=True)
import sqlite3
from bottle import Bottle, run, template, request
app= Bottle()
@app.route('/todo')
@app.route('/')
def todo_list():
connection = sqlite3.connect('todo.db')
cursor = connection.cursor()
cursor.execute("SELECT id, task FROM todo WHERE status LIKE '0'" )
result = cursor.fetchall()
output = template('make_table', rows=result)
return output
@app.route('/new', method='GET')
def new_item_do():
if request.GET.save:
new_item = request.GET.task.strip()
connection = sqlite3.connect('todo.db')
cursor = connection.cursor()
cursor.execute("INSERT INTO todo (task,status)VALUES (?,?)", (new_item, 0))
new_id = cursor.lastrowid
connection.commit()
cursor.close()
return f"""
<meta HTTP-EQUIV="REFRESH" content="5; url={request.urlparts[0]+'://'+request.urlparts[1]}/">
<p>The new task was insert into the database, the ID is {new_id}</p>"""
else:
return template('new_task.tpl')
run(app, host='0.0.0.0', port=5000, debug=True)<jupyter_output><empty_output><jupyter_text>## Adding a route to edit tasks<jupyter_code>import sqlite3
from bottle import Bottle, run, template, request
app= Bottle()
@app.route('/todo')
@app.route('/')
def todo_list():
connection = sqlite3.connect('todo.db')
cursor = connection.cursor()
cursor.execute("SELECT id, task FROM todo WHERE status LIKE '0'" )
result = cursor.fetchall()
output = template('make_table', rows=result)
return output
@app.route('/new', method='GET')
def new_item_do():
if request.GET.save:
new_item = request.GET.task.strip()
connection = sqlite3.connect('todo.db')
cursor = connection.cursor()
cursor.execute("INSERT INTO todo (task,status)VALUES (?,?)", (new_item, 0))
new_id = cursor.lastrowid
connection.commit()
cursor.close()
return f"""
<meta HTTP-EQUIV="REFRESH" content="5; url={request.urlparts[0]+'://'+request.urlparts[1]}/">
<p>The new task was insert into the database, the ID is {new_id}</p>"""
else:
return template('new_task.tpl')
@app.route('/edit/<num:int>', method='GET')
def edit_item(num):
if request.GET.save:
edit = request.GET.task.strip()
status = request.GET.status.strip()
if status == 'open':
status = 0
else:
status = 1
connection = sqlite3.connect('todo.db')
cursor = connection.cursor()
cursor.execute("UPDATE todo SET task = ?, status = ? WHERE id LIKE ?",(edit, status, num))
connection.commit()
return f'<p>The item number {num} was successfully update</p>'
else:
connection=sqlite3.connect('todo.db')
cursor=connection.cursor()
cursor.execute("SELECT task FROM todo WHERE id LIKE ?",(str(num),))
cur_data=cursor.fetchone()
return template('edit_task',old=cur_data,num=num)
run(app, host='0.0.0.0', port=5000, debug=True)<jupyter_output><empty_output>
<jupyter_start><jupyter_text># Electronic structure through (quantum) annealing
In this project we map the electronic structure Hamiltonian to an Ising Hamiltonian and find the ground state energy. Refer to the following references:
[1] https://arxiv.org/abs/1706.00271
[2] https://arxiv.org/abs/1811.05256
[3] https://arxiv.org/abs/1208.5986
We use molecular hydrogen $H_2$ as an example. Assuming the atomic nuclei do not move, due to their much larger mass, the Hamiltonian which governs the electronic state can be transformed to a qubit representation appropriate for simulation on a quantum computer [3]. See Ref. [2], Eq. (6) for the $n$-qubit Hamiltonian which encodes the electronic structure problem. Following Ref. [1], we then encode this problem in a classical Ising model, appropriate for annealing. This requires $r$ ancillary bits for each of the $n$ qubits.
The qubit Hamiltonian for molecular hydrogen $H_2$ is given by Eq. (37) in Ref. [1]. After the mapping described above, the problem eventually maps to the 2-local Ising-type Hamiltonian of Eq. (41). The goal then becomes calculating the ground state energy of this Hamiltonian.
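For reference, the code below works with the generic 2-local Ising form (the specific $H_2$ coefficients of Eq. (41) are not reproduced in this excerpt):

$$E(\{s_i\}) = \frac{1}{2}\sum_{i,j} J_{ij}\, s_i s_j + \sum_i h_i s_i, \qquad s_i = \pm 1,$$

which is exactly what the `energy(spins, J, h)` function below evaluates; the factor $\tfrac{1}{2}$ compensates for the double counting of the symmetric couplings $J_{ij}$.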
<jupyter_code>import numpy as np
from scipy.special import logsumexp
import pandas as pd
def energy(spins, J, h):
# J - 2D np.array
# h - 1D np.array
# spins - 1D np.array (entries +/- 1)
interaction = 0.5 * np.einsum("...i,ij,...j->...", spins, J, spins)
field = np.einsum("...i,i->...", spins, h)
return interaction + field
def energy_diff(i, spins, J, h):
return -2 * np.einsum("j,...j->...", J[i, :], spins) * spins[..., i] - 2 * h[i] * spins[..., i]
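# energy_diff is the change in the total energy when spin i is flipped:
#     dE = -2 * s_i * (sum_j J_ij * s_j) - 2 * h_i * s_i
# (J is symmetric with zero diagonal, so the two interaction terms containing s_i
#  cancel the 1/2 factor used in energy() above).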
num_spins = 10
# random interaction+field ising model
J = np.random.randn(num_spins, num_spins)
J = (J + J.T) / 2
for i in range(J.shape[0]):
J[i, i] = 0
h = np.random.randn(num_spins)
spins = (2*np.random.randint(2, size=(num_spins,)) - 1)
# standard classical ising with no field
J = np.zeros((num_spins, num_spins))
for i in range(J.shape[0]):
J[i, (i+1) % num_spins] = -1
J = (J + J.T)
h = np.zeros(num_spins)
spins = (2*np.random.randint(2, size=(num_spins,)) - 1)
def mc_step(spins, J, h, T):
current_energy = energy(spins, J, h)
for _ in range(spins.shape[0]):
i = np.random.randint(spins.shape[0])
dE = energy_diff(i, spins, J, h)
if (dE < 0) or (np.random.rand() < np.exp(-dE / T)):
current_energy += dE
spins[i] *= -1
return spins, current_energy
T = 1.0
burn_in = 100
num_samples = 1000
for t in range(burn_in):
mc_step(spins, J, h, T)
E = np.zeros(num_samples)
M = np.zeros(num_samples)
for t in range(num_samples):
_, e = mc_step(spins, J, h, T)
E[t] = e
M[t] = np.abs(np.mean(spins))
(np.mean(E), np.std(E)/np.sqrt(num_samples)), (np.mean(M), np.std(M)/np.sqrt(num_samples))
size = num_spins
dim = np.arange(2 ** size)
space = ((dim[:, None] & (1 << np.arange(size))) > 0)[:, ::-1]
space = 2*space.astype(int) - 1
E = energy(space, J, h)
M = np.abs(np.mean(space, axis=-1))
logZ = logsumexp(-E / T)
probs = np.exp(-E / T - logZ)
np.dot(E, probs), np.dot(M, probs)
params = pd.read_csv("coefficients.csv", index_col=0)<jupyter_output><empty_output>
<jupyter_start><jupyter_text># Loading mixed data (GTZAN dataset + YouTube data)<jupyter_code># The ten GTZAN genres; the original cell listed 'reggae' twice, which silently dropped index 0, so 'blues' is restored here
classes = {'blues': 0, 'classical': 1, 'country': 2, 'disco': 3, 'hiphop': 4, 'jazz': 5, 'metal': 6, 'pop': 7, 'reggae': 8, 'rock': 9}
!unzip /content/drive/MyDrive/mel_spec_mix.zip
!unzip /content/drive/MyDrive/mel_spec_test.zip
genre = os.listdir('/content/mel_spectrogram')
os.mkdir('/content/mel_spectrogram/train')
os.mkdir('/content/mel_spectrogram/val')
random.seed(101)
for g in genre:
total = list(os.listdir('/content/mel_spectrogram/' + g))
os.mkdir('/content/mel_spectrogram/train/' + g)
os.mkdir('/content/mel_spectrogram/val/' + g)
random.shuffle(total)
train = total[: int(len(total) * 0.85)]
val = total[int(len(total) * 0.85) :]
for file in train:
shutil.copyfile('/content/mel_spectrogram/' + g + '/' + file, '/content/mel_spectrogram/train/' + g + '/' +file)
for file in val:
shutil.copyfile('/content/mel_spectrogram/' + g + '/' + file, '/content/mel_spectrogram/val/' + g + '/' +file)
%rm -rf /content/mel_spectrogram/train/.ipynb_checkpoints
%rm -rf /content/mel_spectrogram/val/.ipynb_checkpoints
%rm -rf /content/mel_spectrogram_test/.ipynb_checkpoints
train_ds = tf.keras.preprocessing.image_dataset_from_directory('/content/mel_spectrogram/train/',
batch_size= 32,
image_size=(108, 128),
)
val_ds = tf.keras.preprocessing.image_dataset_from_directory('/content/mel_spectrogram/val',
batch_size= 32,
image_size=(108, 128),
)
test_ds = tf.keras.preprocessing.image_dataset_from_directory('/content/mel_spectrogram_test',
batch_size= 32,
image_size=(108, 128),
)
plt.figure(figsize=(10, 10))
for images, labels in train_ds.take(1):
for i in range(9):
ax = plt.subplot(3, 3, i + 1)
plt.imshow(images[i].numpy().astype("uint8"))
plt.title(genre[labels[i]])
plt.axis("off")<jupyter_output><empty_output><jupyter_text># Convolutional Neural Network Architecture<jupyter_code>model = tf.keras.Sequential([
layers.Conv2D(16, 3, padding='same', activation='relu', input_shape=(108, 128, 3)),
layers.MaxPooling2D(),
layers.Conv2D(32, 3, padding='same', activation='relu'),
layers.MaxPooling2D(),
layers.Conv2D(64, 3, padding='same', activation='relu'),
layers.MaxPooling2D(),
layers.Conv2D(128, 3, padding='same', activation='relu'),
layers.MaxPooling2D(),
layers.Flatten(),
layers.Dropout(0.5),
layers.Dense(1024, activation='relu'),
layers.Dropout(0.5),
layers.Dense(256, activation='relu'), layers.Dense(128, activation='relu'),
layers.Dropout(0.5),
layers.Dense(32, activation='relu'),
layers.Dropout(0.2), layers.Dense(10)
])
model.summary()
model.compile(optimizer='adam', loss= tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True), metrics=['accuracy'])
history = model.fit(train_ds, epochs=40, validation_data= val_ds)<jupyter_output>Epoch 1/40
706/706 [==============================] - 53s 31ms/step - loss: 2.2921 - accuracy: 0.1777 - val_loss: 2.0267 - val_accuracy: 0.2156
Epoch 2/40
706/706 [==============================] - 22s 30ms/step - loss: 1.9776 - accuracy: 0.2778 - val_loss: 1.8370 - val_accuracy: 0.3318
Epoch 3/40
706/706 [==============================] - 22s 32ms/step - loss: 1.8173 - accuracy: 0.3472 - val_loss: 1.6663 - val_accuracy: 0.4115
Epoch 4/40
706/706 [==============================] - 22s 31ms/step - loss: 1.6697 - accuracy: 0.4076 - val_loss: 1.4759 - val_accuracy: 0.4743
Epoch 5/40
706/706 [==============================] - 22s 31ms/step - loss: 1.5765 - accuracy: 0.4514 - val_loss: 1.4054 - val_accuracy: 0.5192
Epoch 6/40
706/706 [==============================] - 22s 31ms/step - loss: 1.4976 - accuracy: 0.4816 - val_loss: 1.3537 - val_accuracy: 0.5294
Epoch 7/40
706/706 [==============================] - 22s 31ms/step - loss: 1.4055 - accuracy: 0.5159 - val_loss: 1.2943 - val_accuracy:[...]<jupyter_text># Accuracy & Loss<jupyter_code>plt.plot(history.history['accuracy'])
plt.plot(history.history['val_accuracy'])
plt.title('model accuracy')
plt.ylabel('accuracy')
plt.xlabel('epoch')
plt.legend(['train', 'val'], loc='upper left')
plt.show()
plt.plot(history.history['loss'])
plt.plot(history.history['val_loss'])
plt.title('model loss')
plt.ylabel('loss')
plt.xlabel('epoch')
plt.legend(['train', 'val'], loc='upper left')
plt.show()<jupyter_output><empty_output><jupyter_text># Test on Youtube data<jupyter_code>model.evaluate(test_ds)<jupyter_output>31/31 [==============================] - 1s 17ms/step - loss: 2.2807 - accuracy: 0.4688
<jupyter_text># Saving model<jupyter_code>model.save('/content/drive/MyDrive/ML_Project/models/CNN_Mixed')
print('done')<jupyter_output>done
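To reuse the saved model later (for example in a fresh Colab session), it can be reloaded and applied to the test set. A minimal sketch, assuming the same Drive path as in the cell above:
```
import tensorflow as tf

# Reload the SavedModel from Drive and repeat the evaluation
reloaded = tf.keras.models.load_model('/content/drive/MyDrive/ML_Project/models/CNN_Mixed')
reloaded.evaluate(test_ds)

# Class probabilities for one batch of test spectrograms
# (softmax is applied here because the final Dense(10) layer outputs logits)
probs = tf.nn.softmax(reloaded.predict(test_ds.take(1)), axis=-1)
```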
<jupyter_start><jupyter_text># Scatter Plot<jupyter_code># import the packages
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
import pandas as pd
# create the data
N = 1000
x = np.random.randn(N)
y = np.random.randn(N)
# plot with matplotlib
plt.scatter(x, y, marker='o')
plt.show()
plt.scatter(x, y, marker='>')
plt.show()
plt.scatter(x, y, marker='o')
plt.show()
# plot with seaborn
df = pd.DataFrame({'x':x, 'y':y})
sns.jointplot(x='x', y='y', data= df, kind='scatter')
plt.show()<jupyter_output><empty_output>
<jupyter_start><jupyter_text># Tutorial for Small FOV Instruments
In this tutorial we combine our skymaps with galaxy catalogs to get a list of galaxies for individual pointings. Note that this is only possible with 3D skymaps, which are provided for compact binary merger candidate events. We begin by importing the necessary packages as done previously. We will also download the 2MASS Redshift Survey galaxy catalog using VizieR.<jupyter_code># -- For Google Colab
#! pip install -q "astropy==3.2.3" "astroquery==0.3.10" "healpy==1.12.9" "matplotlib==3.1.2" "scipy==1.4.1"
import healpy as hp # for working with HEALPix files
import numpy as np # needed for vector operations
from matplotlib import pyplot as plt # plotting skymaps
from scipy.stats import norm # probability functions
from astropy.utils.data import download_file
url = ('https://dcc.ligo.org/public/0146/G1701985/001/LALInference_v2.fits.gz')
# This is the publication LALInference localization
filename = download_file(url, cache=True)<jupyter_output><empty_output><jupyter_text>We read in the probability, distmu, distsigma, and distnorm.<jupyter_code>prob, distmu, distsigma, distnorm = hp.read_map(filename, field=range(4))
npix = len(prob)
nside = hp.npix2nside(npix)
npix, nside
# Area per pixel in steradians
pixarea = hp.nside2pixarea(nside)
from astroquery.vizier import Vizier
Vizier.ROW_LIMIT = -1 # This gets the complete catalog
cat1, = Vizier.get_catalogs('J/ApJS/199/26/table3') # Downloading the 2MRS Galaxy Catalog<jupyter_output><empty_output><jupyter_text>According to Tully(2015), the 2MRS luminosity function is well fit by a Schechter function with a cutoff absolute magnitude of $M_k^* = -23.55$ and a power-law index of $\alpha_K = -1$. We find the maximum absolute magnitude $M_k^{\text{max}}$ for a completeness fraction of 0.5.<jupyter_code>from scipy.special import gammaincinv
completeness = 0.5
alpha = -1.0
MK_star = -23.55
MK_max = MK_star + 2.5*np.log10(gammaincinv(alpha + 2, completeness))
MK_max<jupyter_output><empty_output><jupyter_text>Now, we select only galaxies with positive redshifts and absolute magnitudes greater than $M_k^{\text{max}}$.<jupyter_code>from astropy.cosmology import WMAP9 as cosmo
from astropy.table import Column
import astropy.units as u
import astropy.constants as c
z = (u.Quantity(cat1['cz'])/c.c).to(u.dimensionless_unscaled)
MK = cat1['Ktmag']-cosmo.distmod(z)
keep = (z > 0) & (MK < MK_max)
cat1 = cat1[keep]
z = z[keep]<jupyter_output>/cvmfs/oasis.opensciencegrid.org/ligo/sw/conda/envs/igwn-py37/lib/python3.7/site-packages/astropy/cosmology/core.py:1251: IntegrationWarning: The occurrence of roundoff error is detected, which prevents
the requested tolerance from being achieved. The error may be
underestimated.
args=self._inv_efunc_scalar_args)[0]
/cvmfs/oasis.opensciencegrid.org/ligo/sw/conda/envs/igwn-py37/lib/python3.7/site-packages/numpy/lib/function_base.py:2167: RuntimeWarning: invalid value encountered in ? (vectorized)
outputs = ufunc(*inputs)
/cvmfs/oasis.opensciencegrid.org/ligo/sw/conda/envs/igwn-py37/lib/python3.7/site-packages/astropy/cosmology/core.py:1447: RuntimeWarning: divide by zero encountered in log10
val = 5. * np.log10(abs(self.luminosity_distance(z).value)) + 25.0
/cvmfs/oasis.opensciencegrid.org/ligo/sw/conda/envs/igwn-py37/lib/python3.7/site-packages/astropy/table/column.py:991: RuntimeWarning: invalid value encountered in less
result = getattr(super(), op)(other)
<jupyter_text>Now, we calculate the luminosity distance and HEALPix index of each galaxy.<jupyter_code>r = cosmo.luminosity_distance(z).to('Mpc').value
theta = 0.5*np.pi - cat1['DEJ2000'].to('rad').value
phi = cat1['RAJ2000'].to('rad').value
ipix = hp.ang2pix(nside, theta, phi)<jupyter_output><empty_output><jupyter_text>We find the probability density per unit volume at the position of each galaxy.<jupyter_code>dp_dV = prob[ipix] * distnorm[ipix] * norm(distmu[ipix], distsigma[ipix]).pdf(r)/pixarea<jupyter_output><empty_output><jupyter_text>Finally, we sort the galaxies by descending probability density and take the top 50.<jupyter_code>top50 = cat1[np.flipud(np.argsort(dp_dV))][:50]
top50['RAJ2000', 'DEJ2000', 'Ktmag']<jupyter_output><empty_output><jupyter_text>The coordinates of the first galaxy above are (197.01802, -23.79687). A pointing in this direction would likely have captured the true host galaxy of GW170817 which is (197.45, -23.38).Now, we attempt a similar down-selection but with a different galaxy catalog: the Galaxy List for the Advanced Detector Era.<jupyter_code>catalog_list = Vizier.find_catalogs('GLADE')
{k:v.description for k,v in catalog_list.items()}
catalogs = Vizier.get_catalogs(catalog_list.keys())
catalogs
Vizier.ROW_LIMIT = 50000
# Note, the GLADE catalog is 1,918,147 rows long thus we will get a memory error if we set the row limit to -1
cat2, = Vizier.get_catalogs('VII/275/glade1') # Downloading the GLADE catalog (Galaxy List for the Advanced Detector Era)<jupyter_output><empty_output><jupyter_text>According to Gehrels et al(2016), the GLADE luminosity function is well fit by a Schechter function with a cutoff absolute magnitude of $M_k^* = -20.47$ and a power-law index of $\alpha_K = -1.07$. We find the maximum absolute magnitude $M_k^{\text{max}}$ for a completeness fraction of 0.5.<jupyter_code>completeness = 0.5
alpha = -1.07
MK_star = -20.47
MK_max = MK_star + 2.5*np.log10(gammaincinv(alpha + 2, completeness))
MK_max
dist = u.Quantity(cat2['Dist']) # Distance in Mpc
z = (u.Quantity(cat2['zph2MPZ'])).to(u.dimensionless_unscaled)
MK = cat2['Kmag2']-cosmo.distmod(z)
keep = (z > 0) & (MK < MK_max)
cat2 = cat2[keep]
dist = dist[keep]
r = dist.value
theta = 0.5*np.pi - cat2['DEJ2000'].to('rad').value
phi = cat2['RAJ2000'].to('rad').value
ipix = hp.ang2pix(nside, theta, phi)
dp_dV = prob[ipix] * distnorm[ipix] * norm(distmu[ipix], distsigma[ipix]).pdf(r)/pixarea
top50 = cat2[np.flipud(np.argsort(dp_dV))][:50]
top50['RAJ2000', 'DEJ2000', 'Kmag2']<jupyter_output><empty_output>
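<jupyter_text>As a quick sanity check (an added sketch, not part of the original tutorial), astropy's SkyCoord can give the angular separation between the top-ranked 2MRS pointing quoted earlier and the position of GW170817's true host galaxy.<jupyter_code>from astropy.coordinates import SkyCoord
# Coordinates quoted in the text above (ICRS, degrees)
pointing = SkyCoord(197.01802*u.deg, -23.79687*u.deg)
host = SkyCoord(197.45*u.deg, -23.38*u.deg)
pointing.separation(host).to(u.deg)<jupyter_output><empty_output>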
|
non_permissive
|
/skymaps/Tutorial_for_Small_FOV_Instruments.ipynb
|
battyone/odw-2018
| 9 |
<jupyter_start><jupyter_text># Physics 91SI: Lab 3
Part 1
------<jupyter_code># Don't edit this function
def load_sample():
"""Return the entire text of Hamlet in a string."""
with open('hamlet.txt') as f:
sample = f.read()
return sample
# Edit this function. "pass" tells Python to do nothing.
def parse_sample(sample, words_only=False, sort_list=False):
currWord = ""
currList = []
for char in sample+" ":
c = char
if c=="-" or c=="\n":
c=" "
if c==" ":
if currWord != "":
currList.append(currWord)
currWord = ''
continue
if words_only:
if c.isalpha():
currWord+=(c.lower())
elif c.isnumeric():
currWord+=(c)
else:
currWord+=(c)
if sort_list:
currList.sort()
return currList
parse_sample(load_sample(), True)<jupyter_output><empty_output><jupyter_text>Part 2
------<jupyter_code>import numpy as np
def count_freq(lst):
"""Return a dict of word frequencies given a list of words.
Arguments:
-- lst: a list of strings
Returns:
-- freq_dict: a dictionary of word: frequency pairs. The keys are strings
containing the words, and the values are integers indicating how many
times each word appears.
Example:
>>> count_freq(['time', 'after', 'time'])
{'time': 2, 'after': 1}
"""
freq_dict = {}
for word in lst:
if word not in freq_dict:
freq_dict[word]=1
else:
freq_dict[word]+=1
return freq_dict
def mean(lst):
return np.mean(lst)
def stdev(lst):
return np.std(lst)
def median(lst):
return np.median(lst)<jupyter_output><empty_output><jupyter_text>Part 3
------<jupyter_code>def print_stats(lst): ##list of words
print("Number of words: ", len(lst))
freq_dict = count_freq(lst)
numbers = list(freq_dict.values())
print("Mean word frequency: ", mean(numbers))
print("Standard deviation of word frequencies: ", stdev(numbers))
print("Median word frequency: ", median(numbers))
print_stats(parse_sample(load_sample(), True))<jupyter_output>Number of words: 32195
Mean word frequency: 6.879273504273504
Standard deviation of word frequencies: 38.36281137909605
Median word frequency: 1.0
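<jupyter_text>A short follow-up (an added sketch reusing the functions defined above): the frequency dictionary also makes it easy to list the most common words.<jupyter_code># Ten most frequent words in the sample text
freqs = count_freq(parse_sample(load_sample(), True))
top10 = sorted(freqs.items(), key=lambda kv: kv[1], reverse=True)[:10]
print(top10)<jupyter_output><empty_output>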
|
no_license
|
/lab3.ipynb
|
physics91si/lab03-Matias-A
| 3 |
<jupyter_start><jupyter_text># INF702 - Text Mining in Social Media
/xlsx ( folder)
Ontology_PreSNA.xlsx
ML_Super.xlsx
ML_Unsuper.xlsx
Emerging Technologies.xlsx<jupyter_code>import matplotlib.pyplot as plt
import numpy as np
import seaborn as sns
import pandas as pd # Importing Pandas
import os
outfolder = 'xlsx'
if(os.path.exists(outfolder)==False):
os.makedirs(outfolder)
sns.set() # default set
sns.set_style("white")
target_label= 'Applications'
new_label = 'Application'
import matplotlib as mpl
mpl.rcParams['font.family'] = 'Calibri'
sns.set(font_scale=1.5)
sns.set_palette(sns.color_palette("Greens"))
fname1 = 'datasets/References1.xlsx'
pdf1 = pd.read_excel(fname1)
pdf1.head(3)
#print(pdf1.info())
#pdf1['Year'] = pdf1['Year'].astype(int)
fname2 = 'datasets/References2.xlsx'
pdf2 = pd.read_excel(fname2)
pdf2.head(3)
#pdf2['Year'] = pdf2['Year'].astype(int)
#print(pdf2.info())
frames = [pdf1,pdf2]
allref=pd.concat(frames)
print(len(allref))
allref=allref.reset_index(drop=True)
print(len(allref))
#allref.index.droplevel(-1)
#print(allref.info())
allref.head(3)
#pdf = pdf.replace(r'\\n',' ', regex=True)
print(len(allref))
allref = allref.replace('\n',' ', regex=True)
allref.head(3)
print(len(allref))
print(len(allref))
allref= allref.dropna(axis=0, subset=['Sources'])
allref['Sources'].unique()
#allref.head(2)
print(len(allref))
allref['Year'] = allref['Year'].astype(object)
def trim_fraction(text):
if '.0' in text:
return text[:text.rfind('.0')]
return text
allref['Year']= allref['Year'].astype(str).apply(trim_fraction)
#stat_pd=allref.join(s)
#stat_pd.info()
#application_df = pd.DataFrame()
def getTable1(query, alldf, QueryName, withFile):
subdf=alldf.query(query)
#print('getTable2 length', len(subdf))
outdf = pd.DataFrame()
outdf['Authors'] = subdf['Authors']
outdf['keywords'] = subdf['keywords']
outdf['Applications'] = subdf['Applications']
outdf['Emerging Technologies'] = subdf['Emerging Technologies']
outdf['ML'] = subdf['ML']
outdf['Challenges'] = subdf['Challenges']
outdf['Future Works'] = subdf['Future Works']
outdf['Year'] = subdf['Year']
#adf= application_df.drop_duplicates(keep='first')
print('getTable2 length outdf', len(outdf))
writer = pd.ExcelWriter(outfolder+'/'+QueryName+'.xlsx')
outdf.to_excel(writer,'Sheet1')
writer.save()
return outdf
#application_df.head(5)
def getTable2(query, alldf, QueryName, withFile):
subdf=alldf.query(query)
#print('getTable2 length', len(subdf))
outdf = pd.DataFrame()
outdf['Authors'] = subdf['Authors']
outdf['keywords'] = subdf['keywords']
outdf['Applications'] = subdf['Applications']
outdf['Emerging Technologies'] = subdf['Emerging Technologies']
outdf['ML'] = subdf['ML']
outdf['Challenges'] = subdf['Challenges']
outdf['Future Works'] = subdf['Future Works']
outdf['Year'] = subdf['Year']
#adf= application_df.drop_duplicates(keep='first')
print('getTable2 length outdf', len(outdf))
writer = pd.ExcelWriter(outfolder+'/'+ QueryName+'.xlsx')
outdf.to_excel(writer,'Sheet1')
writer.save()
return outdf
#application_df.head(5)
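# Added sketch (not part of the original workflow): the repeated query/export blocks
# below could equivalently be driven from one list of (query, output-name) pairs, e.g.
# for q, name in [("ML=='Supervised learning'", 'ML_Super'),
#                 ("Ontology=='Social Network Analysis'", 'Ontology_SNA')]:
#     getTable2(q, allref, name, True)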
query = "ML=='Supervised learning'"
QueryName = "ML_Super"
retd = getTable2(query, allref, QueryName,True)
query = "ML=='Unsupervised learning'"
QueryName = "ML_Unsuper"
retd = getTable2(query, allref, QueryName,True)
query = "ML=='Semi-supervised learning'"
QueryName = "ML_Semi-super"
retd = getTable2(query, allref, QueryName,True)
query = "Ontology=='Text Analysis'"
QueryName = "OnTology_TA"
retd = getTable2(query, allref, QueryName,True)
query = "Ontology=='Text Analysis based on SA'"
QueryName = "Ontology_TA_SA"
retd = getTable2(query, allref, QueryName,True)
query = "Ontology=='Prediction based on TA'"
QueryName = "Ontology_PreTA"
retd = getTable2(query, allref, QueryName,True)
query = "Ontology=='Prediction based on SNA'"
QueryName = "Ontology_PreSNA"
retd = getTable2(query, allref, QueryName,True)
query = "Ontology=='Social Network Analysis'"
QueryName = "Ontology_SNA"
retd = getTable2(query, allref, QueryName,True)
df =allref
df['index'] = df['Title'].str.find('swarm') #, na = False)
#df.head()
query = "index !=-1"
QueryName = "01Title_swarm"
retd = getTable2(query, df, QueryName,True)
df =allref
df['index'] = df['Title'].str.find('Swarm') #, na = False)
#df.head()
query = "index !=-1"
QueryName = "02Title_Swarm"
retd = getTable2(query, df, QueryName,True)
df =allref
df['index'] = df['Title'].str.find('swarm') #, na = False)
#df.head()
query = "index !=-1"
QueryName = "03Keywords_swarm"
retd = getTable2(query, df, QueryName,True)
df =allref
df['index'] = df['Title'].str.find('Swarm') #, na = False)
#df.head()
query = "index !=-1"
QueryName = "04Keywords_Swarm"
retd = getTable2(query, df, QueryName,True)
df =allref
print(len(df))
#query = "Ontology=='Prediction based on SNA'"
#subdf=allref.query.substring(query)
df['index'] = df['Title'].str.find('swarm') #, na = False)
#df.head()
query = "index !=-1"
QueryName = "Title_swarm"
retd = getTable2(query, allref, QueryName,True)
#df['Index'] = lambda x: x.find('Super'), df['ML']
#print(len(df))
#print(df['Index'])
def getTable3(alldf):
#subdf=alldf.query(query)
subdf=alldf
#print('getTable2 length', len(subdf))
outdf = pd.DataFrame()
outdf['Authors'] = subdf['Authors']
outdf['keywords'] = subdf['keywords']
outdf['Applications'] = subdf['Applications']
outdf['Emerging Technologies'] = subdf['Emerging Technologies']
outdf['ML'] = subdf['ML']
outdf['Challenges'] = subdf['Challenges']
outdf['Future Works'] = subdf['Future Works']
outdf['Year'] = subdf['Year']
#adf= application_df.drop_duplicates(keep='first')
#print('getTable2 length outdf', len(outdf))
#writer = pd.ExcelWriter(QueryName+'.xlsx')
#outdf.to_excel(writer,'Sheet1')
#writer.save()
return outdf
#application_df.head(5)
df =allref
retd = getTable3(df)
df['index'] = df['Emerging Technologies'].isna()
#df['index']
#print(df.head(15))
query = "index == False"
QueryName = "Emerging Technologies"
print(len(df))
retd = getTable2(query, df, QueryName,True)
print(len(retd))
#print(retd.head(15))
#print(df['Emerging Technologies'][1])
#df['index'] = df['Emerging Technologies'].isna()
#df.head()
#query = "index !=-1"
#QueryName = "EmergingTechnology"
#retd = getTable1(query, allref, QueryName,True)
from wordcloud import WordCloud, STOPWORDS, ImageColorGenerator
type(retd)
#retd = retd.fillna('Text Analysis', regex=True)
# Start with one review:
text = retd.to_string()
#text
#text.to_string()
#print(type(text))t
# Create and generate a word cloud image:
wordcloud = WordCloud(background_color='white').generate(text)
# Display the generated image:
plt.imshow(wordcloud, interpolation='bilinear')
plt.axis("off")
plt.show()<jupyter_output><empty_output>
|
no_license
|
/[Survey]SummaryTable.ipynb
|
fairhs77/WeeklyTasks
| 1 |
<jupyter_start><jupyter_text># Capstone Project
This notebook will mainly be used for the Capstone Project in the IBM Data Science Specialization Course on Coursera.<jupyter_code>import pandas as pd
import numpy as np
print('Hello Capstone Project Course!')<jupyter_output>Hello Capstone Project Course!
|
no_license
|
/capstone.ipynb
|
lmarien/Coursera_Capstone
| 1 |
<jupyter_start><jupyter_text># Read in USV data for all 3 Saildrone
- calculate density and wind speed
- calculate distance between successive obs
- calculate total cumulative distance
- switch from time to cumulative distance as index
- interpolate data onto grid
<jupyter_code>
ds=[]
for iusv in range(3):
fname=saildrone_filenames[iusv]
ds_usv=xr.open_dataset(fname).isel(trajectory=0).swap_dims({'obs':'time'})
ds_usv.close()
ds_usv['wspd']=np.sqrt(ds_usv.UWND_MEAN**2+ds_usv.VWND_MEAN**2)
tem=sw.dens0(ds_usv.SAL_SBE37_MEAN,ds_usv.TEMP_SBE37_MEAN)
ds_usv['density_mean']=xr.DataArray(tem,dims=('time'),coords={'time':ds_usv.time})
tem=sw.alpha(ds_usv.SAL_SBE37_MEAN,ds_usv.TEMP_SBE37_MEAN,ds_usv.BARO_PRES_MEAN*0) #pressure =0 at surface
ds_usv['alpha_ME']=xr.DataArray(tem,dims=('time'),coords={'time':ds_usv.time})
tem=sw.beta(ds_usv.SAL_SBE37_MEAN,ds_usv.TEMP_SBE37_MEAN,ds_usv.BARO_PRES_MEAN*0) #pressure =0 at surface
ds_usv['beta_MEAN']=xr.DataArray(tem,dims=('time'),coords={'time':ds_usv.time})
ds_usv['latitude']=ds_usv.latitude.interpolate_na(dim='time')
ds_usv['longitude']=ds_usv.longitude.interpolate_na(dim='time')
xlat=ds_usv.latitude
xlon=ds_usv.longitude
dkm2 = abs(np.abs((((xlon[1:].data-xlon[0:-1].data)**2+(xlat[1:].data-xlat[0:-1].data)**2)**.5)*110.567*np.cos(np.pi*xlat[1:].data/180)))
    dkm2=np.append(dkm2,dkm2[-1]) # repeat the last value so the array matches the length of the time dimension
dkm3 = dkm2.cumsum()
ds_usv['dist_total']=xr.DataArray(dkm3,dims=('time'),coords={'time':ds_usv.time})
ds_usv['dist_between']=xr.DataArray(dkm2,dims=('time'),coords={'time':ds_usv.time})
if iusv==0:
ds = ds_usv
else:
ds = xr.concat([ds,ds_usv],dim='trajectory')<jupyter_output><empty_output><jupyter_text># Check what the min/max/mean distance travelled between 1 min obs<jupyter_code>for iusv in range(3):
print(ds.dist_between[iusv,:].min().data,ds.dist_between[iusv,:].max().data,ds.dist_between[iusv,:].mean().data)
#ave distance is 0.08 km = 80 m <jupyter_output>0.0002628831597359855 0.20042159708332527 0.08381687316789584
0.0002626788582655633 0.19503147634695842 0.08436334324573692
8.762569944303782e-05 0.2074201536740167 0.08631113513100724
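<jupyter_text>As a cross-check on the flat-earth distance formula above (an added sketch, not part of the original processing), a haversine great-circle distance can be computed for one trajectory and compared against dist_between.<jupyter_code>def haversine_km(lat1, lon1, lat2, lon2):
    # great-circle distance in km between paired points given in degrees
    R = 6371.0
    p1, p2 = np.deg2rad(lat1), np.deg2rad(lat2)
    dphi = p2 - p1
    dlmb = np.deg2rad(lon2 - lon1)
    a = np.sin(dphi/2)**2 + np.cos(p1)*np.cos(p2)*np.sin(dlmb/2)**2
    return 2*R*np.arcsin(np.sqrt(a))

ds0 = ds.isel(trajectory=0)
d_hav = haversine_km(ds0.latitude[:-1].data, ds0.longitude[:-1].data,
                     ds0.latitude[1:].data, ds0.longitude[1:].data)
print(np.nanmean(d_hav), ds0.dist_between.mean().data)<jupyter_output><empty_output>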
<jupyter_text># Make an evenly sampled timeseries
- Swap the coordinates from time to distance_total
- interp along evenly sampled distance total, 80m (0.08km)<jupyter_code>ds_usv = ds.isel(trajectory=0)
ds2 = ds_usv.assign_coords(dist_total = ds_usv.dist_total)
ds3 = ds2.swap_dims({'time':'dist_total'})
dist_interp = np.arange(ds2.dist_total[0],ds2.dist_total[-1],0.08)
ds4 = ds3.interp(dist_total=dist_interp)
plt.plot(ds2.time,ds3.density_mean)
plt.plot(ds_usv.time,ds_usv.density_mean)
<jupyter_output><empty_output><jupyter_text># detrend<jupyter_code>from scipy import signal
den = ds4.density_mean.interpolate_na(dim='dist_total')
ds4_detrend = signal.detrend(den)
#plt.plot(ds4.density_mean)
#plt.plot(den)
plt.plot(ds4_detrend)<jupyter_output><empty_output><jupyter_text># , smooth using 2km gaussian filter then power density<jupyter_code>import scipy.ndimage
ds4_detrend_smooth = scipy.ndimage.filters.gaussian_filter1d(ds4_detrend, sigma=25)
plt.plot(ds4_detrend_smooth[5000:7000])
plt.plot(ds4_detrend[5000:7000])
f, Pxx_den = signal.periodogram(ds4_detrend_smooth,1/.080) #fs = sampled at .08km or 80m
plt.loglog(f[2:5000], Pxx_den[2:5000])
plt.loglog(f[2:5000], f[2:5000]**(-2.4)/100000)
#plt.semilogy(f[2:200], Pxx_den[2:200])
plt.xlabel('frequency [km]')
plt.ylabel('PSD [kg/m^3 /km]')
length_scale = np.arange(.1,20,.1)
xx_in = np.arange(0,.04,.001)
xx_in2 = np.arange(0,.04-.001,.001)
data = np.ones((len(length_scale),len(xx_in2)))
dd=xr.DataArray(data,dims=('length_scale','gradx'),coords={'length_scale':length_scale,'gradx':xx_in2})
ddn=xr.DataArray(data,dims=('length_scale','gradx'),coords={'length_scale':length_scale,'gradx':xx_in2})
ds_usv = ds.isel(trajectory=0)
ds2 = ds_usv.assign_coords(dist_total = ds_usv.dist_total)
ds3 = ds2.swap_dims({'time':'dist_total'})
for ilen2,len2 in enumerate(length_scale):
dist_interp = np.arange(ds2.dist_total[0],ds2.dist_total[-1],len2)
ds4 = ds3.interp(dist_total=dist_interp)
den_grad = np.abs(np.gradient(ds4.density_mean))
result,xx = np.histogram(den_grad,bins=xx_in)
dd[ilen2,:]=result
dd[ilen2,:]=result/sum(result)
print(len(length_scale),len(xx_in))
fig = plt.figure(figsize=(10,10))
tem=ddn
tem = tem.where(tem>.04)
plt.pcolor(length_scale,xx_in2,tem.T,vmin=0,vmax=.2,cmap='hot')
plt.colorbar()<jupyter_output>199 40
|
permissive
|
/atomic/Calculate_density_plot.ipynb
|
agonmer/Saildrone
| 5 |
<jupyter_start><jupyter_text># Growing Decision Trees via ID3/C4.5 & Greedy Learning
A from-scratch implementation of a growing decision trees ML algorithm for quick classification. We use entropy / 'information gain' as the metric for generating the decision rules. This notebook is a step-by-step walkthrough, with motivating graphs.
### General Structure of notebook:
* Introduce entropy function associated with the information gain metric that will be used to implement growing decision trees.
* Preliminary decision tree implementation
* Final decision tree implementation
* *Note:* Right now the decision tree implementation assumes a dataframe structure with:
* the (desired) class label column having index 0 (first column)
* an additional final (last indexed) column of aggregate counts
There is nothing special about these requirements, and the code may eventually be rewritten to be a little more flexible (allow you to indicate the class label column at your discretion and let the counts column be located anywhere). As is discussed later, the counts column allows the dataset to be compressed and makes computations more efficient. Furthermore, I include a walkthrough of the relatively straightforward process of transforming a raw dataset into a compressed form with aggregate counts, and wrap it in a function which makes it straightforward to apply. Also note this implementation is not optimized to run efficiently on large data.
### Example
An application of growing decision trees on a sample dataset. I've included some sample datasets to test the implementation on.<jupyter_code>### Example Bank Dataset
import pandas as pd
### Entering data as columns - pandas reads them as tuples though so they will have to be transformed
credit_ranking_data = ["Excellent","Excellent","Excellent","Excellent","Good","Good","Good","Good","Fair","Fair","Fair","Fair"]
age_data = ["< 30","< 30","> 60","> 60","30 - 60","30 - 60","< 30","< 30","> 60","30 - 60","< 30","< 30"]
gender_data = ["M","F","M","F","M","F","M","F","M","F","M","F"]
year_income_data = ["60k - 100k","> 100k","> 100k","60k - 100k","< 60k","60k - 100k","> 100k","60k - 100k","> 100k","60k - 100k","60k - 100k","< 60k"]
counts_data = [16,4,4,16,5,15,5,15,2,2,18,18]
ex_df = pd.DataFrame.from_records(data = [ credit_ranking_data, age_data, gender_data, year_income_data, counts_data])
### Transpose to put in proper form
ex_df = ex_df.T
### Add column labels
cols = ["credit-ranking","age","gender","year-income","count"]
ex_df.columns = cols
ex_df # just checking<jupyter_output><empty_output><jupyter_text>The bank-dataset table above is a useful illustration of the kind of formatting the algorithm I will write expects:
* The desired class-label column is the first column (this can be adjusted to be more flexible in the future)
* The last column is a count column (this can be constructed without too much difficulty by using some aggregation functions on arbitrary unstructured datasets, and compresses the dataset to make the algorithm run more efficiently)
In the following, I walk through construction of the information purity heuristic which we use to grow the decision tree.
## Information Purity Measures<jupyter_code>import math
# term from info function I
def p_logp(p): # scope --- p is proportion from 0 to 1
if (p < 10**(-10)) | (p > 1 - 10**(-10)) :
return 0
else: return (-1) * p * math.log2(p)
<jupyter_output><empty_output><jupyter_text>We can use the "p_logp" function to compute the statistical information entropy (physical measure) that occurs from observing a proportion $p$ of observations:<jupyter_code>## EXAMPLE BASIC ENTROPY FUNCTION
granularity = 100
domain = [x/granularity for x in range(granularity+1)]
entropy_range = [ p_logp(p) + p_logp(1-p) for p in domain ]
# print(entropy_range)
import matplotlib.pyplot as plt
plt.scatter(domain,entropy_range)
plt.ylabel('entropy = -[plog(p) + (1-p)log(1-p)]')
plt.show()
# information measure
def info_measure(vector): ### NOTE: vector should be of counts, so all terms in it ≥ 0!! ADD CODE TO CHECK THIS LATER
# if vector == [0]*len(vector): return 0
if sum(vector) == 0: return 0
return sum( [ p_logp( s/sum(vector) ) for s in vector ] )
<jupyter_output><empty_output><jupyter_text>## The Information measure $I(\mathbf{x}) = - \sum \left(\frac{x_i}{ \sum x_i } \right) \log_2 \left(\frac{x_i}{ \sum x_i } \right)$
generalizes this entropy measure to proportions drawn from higher-dimension source - ie, more than the two states corresponding to $p$ and $1-p$ (the interpretation of $\mathbf{x}$ is as a list of counts for the different states $i$). Thinking back to the basic entropy function we introduced, which came from our '$p \log_2 p$' function, if $p$ is the proportion $k/n$ then the basic entropy function was $I(\mathbf{x})$ where $\mathbf{x} = (k,n-k)$.
In general, this measures the amount of information required to classify a tuple $\mathbf{x}$ to $m$ classes (if the input $\mathbf{x}$ is an $m$-vector).
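Before the plots, a quick numerical spot-check (an added sketch using the info_measure function defined above): evenly split counts give the largest information value, while a pure split gives zero.<jupyter_code>## Spot-check info_measure on a few count vectors
print(info_measure([8, 8]))     # two equal classes -> 1.0 bit
print(info_measure([16, 0]))    # pure split -> 0
print(info_measure([5, 5, 5]))  # three equal classes -> log2(3), about 1.585<jupyter_output><empty_output><jupyter_text>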
### An Example:
Consider: plotting the information measure over an input of datapoints $(x,y)$ uniformly sampled from the part of a 2D circle with $x,y \geq 0$ (since $x,y$ both represent counts (and the point $(x,y)$ then a configuration of counts in the system / data), then it makes sense although it is not necessary mathematically, to restrict our attention to a specific 'positive domain' subset of the circle.
<jupyter_code>## EXAMPLE of the INFORMATION Measure on the input of a 2D circle
from mpl_toolkits.mplot3d import Axes3D
import matplotlib.pyplot as plt
import random
def circleplot(granularity,color=[random.random() for i in range(3)]):
# granularity = 10000
domain = [[x/granularity, (1-(x/granularity)**2)**0.5] for x in range(granularity+1)]
entropy_range = [ info_measure(x) for x in domain ]
# print(entropy_range)
fig = plt.figure()
ax = fig.add_subplot(1,1,1, projection='3d')
circ_points = [ x + [z] for x,z in zip(domain,entropy_range) ]
xs = [x[0] for x in circ_points]
ys = [x[1] for x in circ_points]
zs = [x[2] for x in circ_points]
### different rectangular parametrizations of a 2d-circle
### we plot entropy associated with a particular configuration of points
ax.scatter( xs,ys,zs , c=color)
### Can plot along all points on a circle - but don't have interpretations as negative counts
# ax.scatter( xs,[-y for y in ys],zs , c=color )
# ax.scatter( [-x for x in xs],ys,zs , c=color)
# ax.scatter( [-x for x in xs],[-y for y in ys],zs , c=color)
ax.set_xlabel('X')
ax.set_ylabel('Y')
ax.set_zlabel('Z')
# plt.scatter([x[0] for x in domain],entropy_range)
# plt.scatter([x[1] for x in domain],entropy_range)
# plt.ylabel('some numbers')
plt.show()
circleplot(10000, 'goldenrod') # shortcut code<jupyter_output><empty_output><jupyter_text>## Another example:
Plot the information metric for classifying a mesh of 2D points $(x,y)$ over a rectangular region $x,y \in [0,1]$ (aka $[0,1] \times [0,1]$).<jupyter_code>from mpl_toolkits.mplot3d import Axes3D
import matplotlib.pyplot as plt
granularity = 25
domain = [[x/granularity, y/granularity] for x in range(granularity+1) for y in range(granularity+1)]
entropy_range = [ info_measure(x) for x in domain ]
# print(entropy_range)
fig = plt.figure()
ax = fig.add_subplot(1,1,1, projection='3d')
rect_points = [ x + [z] for x,z in zip(domain,entropy_range) ]
xs = [x[0] for x in rect_points]
ys = [x[1] for x in rect_points]
zs = [x[2] for x in rect_points]
### we plot entropy associated with a particular configuration of points
ax.scatter( xs,ys,zs , c="goldenrod")
# ### plot circle for comparison
# #####
# granularity = 100
# domain = [[x/granularity, (1-(x/granularity)**2)**0.5] for x in range(granularity+1)]
# entropy_range = [ info_measure(x) for x in domain ]
# circ_points = [ x + [z] for x,z in zip(domain,entropy_range) ]
# xs = [x[0] for x in circ_points]
# ys = [x[1] for x in circ_points]
# zs = [x[2] for x in circ_points]
# ### different rectangular parametrizations of a 2d-circle
# ### we plot entropy associated with a particular configuration of points
# ax.scatter( xs,ys,zs , c='blue')
# # ax.scatter( xs,[-y for y in ys],zs , c='blue' )
# # ax.scatter( [-x for x in xs],ys,zs , c='blue')
# # ax.scatter( [-x for x in xs],[-y for y in ys],zs , c='blue')
# #####
ax.set_xlabel('X')
ax.set_ylabel('Y')
ax.set_zlabel('Z')
# plt.scatter([x[0] for x in domain],entropy_range)
# plt.scatter([x[1] for x in domain],entropy_range)
# plt.ylabel('some numbers')
plt.show()<jupyter_output><empty_output><jupyter_text>Comparison of Plots: [TBD]## Entropy Function
Next, we want to use the information measure to formally define an entropy function for attributes of a dataset, based on a weighted sum of the aggregate information measures for a given configuration of data: if the attribute $A$ with values $\{ a_1 , \dots , a_\nu \}$ partitions the dataset into subsets $\{S_1, \dots , S_\nu\}$, and $S_j$ contains $s_{ij}$ samples of a given class $C_i$, then the entropy associated with $A$ is
$$
E ( A ) = \sum_{j=1} ^\nu \frac{ s_{1j} + \cdots + s_{mj} }{\sum_{ij} s_{ij}} I(s_{1j} , \dots , s_{mj})
$$
Some observations: from intuition (and by studying its defining equation) we can see that $I ( \mathbf{x} )$ tends to be maximized along the line $x_1 = x_2 = \cdots = x_m$, and hence so does the entropy function. Since the $x_i$ are interpreted as counts for the different states $i$, this means, roughly speaking, that entropy is maximized when the counts are similar and minimized when they are very different. The metric we will use to grow the decision tree is information purity, or information gain, which increases as entropy decreases. So we maximize information gain when we split on branches where the counts tend to be most dissimilar. This makes sense: if the counts are very dissimilar, then intuitively it is easier to predict a class label from them, while if they are similar, it is harder to decide what the appropriate class label should be.<jupyter_code># entropy cost associated with an attribute
# ASSUME: last column of input dataset is 'count' associated with particular outcome
# ASSUME: first column is class label
def entropy(dataset, attribute, debug_flag=False):
label = dataset.iloc[:,0] # 1st column class label
total_counts = sum(dataset['count']) # REPLACE 'count' eventually
total_entropy = 0
for value in set(dataset[attribute]):
if debug_flag: print(attribute, value) #debugging
sub_data = dataset.loc[dataset[attribute] == value]
if debug_flag: print(sub_data) #debugging
label_counts = []
for label_value in set(label):
# pick out rows from sub data with desired label value
sub_sub_data = sub_data.loc[sub_data.iloc[:,0] == label_value]
label_counts.append(sum(sub_sub_data['count'])) # REPLACE 'count' eventually
# if debug_flag: print(label_counts) #debugging
if debug_flag: print(str(sum(label_counts)) + '/' + str(total_counts) + ' * I(' + str(label_counts) + ')') #debugging
total_entropy += (sum(label_counts)/total_counts) * info_measure(label_counts)
if debug_flag: print('*'*50)
return total_entropy
### ENTROPY on example dataset
print(ex_df)
print()
print('*' * 50)
print()
## Possible attributes to compute entropy on:
## Remove class label and count columns from consideration
possible_attributes = list(ex_df.columns)
possible_attributes = [possible_attributes[i] for i in range(1,len(possible_attributes)-1)]
print((possible_attributes[0], 'entropy: '
+ str(entropy(ex_df, possible_attributes[0], debug_flag = False))))
# Define Information Gain Criteria
# equivalent to minimizing entropy but has a nice interpretation
def info_gain(dataset, attribute, debug_flag = False):
#get label counts
label = dataset.iloc[:,0] # assume it's first column for now
label_counts = []
for label_value in set(label):
# pick out rows from dataset with desired label value
sub_data = dataset.loc[dataset.iloc[:,0] == label_value]
label_counts.append(sum(sub_data['count'])) # REPLACE 'count' eventually
if debug_flag:
print('I(...):', label_counts) #debugging
print('*'*50) #debugging
I = info_measure(label_counts)
return I - entropy(dataset, attribute, debug_flag)
info_gain(ex_df,possible_attributes[2],debug_flag=True) # debug flag displays info_measure coefficients
for attribute in possible_attributes:
print(( attribute, 'entropy: ' + str(entropy(ex_df,attribute)), 'gain: ' + str(info_gain(ex_df,attribute)) ))
# information gained by branching on attribute A<jupyter_output>('age', 'entropy: 1.1265494549903445', 'gain: 0.45841304573081154')
('gender', 'entropy: 1.5421864522230475', 'gain: 0.042776048498108565')
('year-income', 'entropy: 1.3836778130106993', 'gain: 0.20128468771045682')
<jupyter_text>## Growing Decision Trees:
Now that we have a properly defined information gain metric for a given attribute of a dataset, we're prepared to use this to implement growing a decision tree such that it maximizes local information gain on each attribute, at each step.
Note: We should be careful to avoid overgrowing the tree (various methods not yet implemented) as well as to look into methods to post/pre prune a tree in order to avoid over-fitting the decision tree model on the data. For example, we could implement a $k$-fold cross validation method to grow $k$ different trees on train/test folds and compare the overall accuracy obtained, and try boosting to improve on the accuracy of the model.
### Printing decision trees
Here is a first pass at a growing-decision-tree algorithm, that prints generated decision rules as well as debugging info. It will automatically grow decisions to full length (use every attribute) even when not necessary. This overgrowing will be optimized for the proper algorithm.<jupyter_code>#### GROWING DECISION TREES:
def print_decision_tree(dataset, condition_text='', debug_flag = False): # condition_text used to pass conditions down the desicion tree (for printing)
if debug_flag: print(dataset) # for debugging
columns = list(dataset.columns)
# label = columns[0]
# counts = columns[len(columns)]
# get possible attributes for branching
possible_attributes = columns
possible_attributes = [possible_attributes[i] for i in range(1,len(possible_attributes)-1)] # remove first entry (class label) and last (count)
if len(possible_attributes) > 0: ### can still keep branching
# maximize info gain on possible attributes
info_gains = [info_gain(dataset, attribute) for attribute in possible_attributes]
# as first index for label wasn't included in info_gains, we do +1 to get the correct index
branching_index = info_gains.index(max(info_gains)) + 1
new_condition = 'branch on ' + str(columns[branching_index]) # the new attribute to branch on
# print( new_condition ) # this info is printed as 'rule' below
branching_attribute = dataset.iloc[:,branching_index]
for attribute_value in set(branching_attribute):
sub_dataset = dataset.loc[ dataset.iloc[:,branching_index] == attribute_value]
sub_dataset = sub_dataset.drop(columns[branching_index], axis=1)
rule = condition_text + ' ' + new_condition + ': ' + str(attribute_value)
print(rule)
# print(attribute_value)
# iterate
print_decision_tree( sub_dataset, rule, debug_flag )
else: # no more attributes to split on, so take majority vote to decide what the correct label is
votes = []
for outcome in set(dataset.iloc[:,0]):
sub_dataset = dataset.loc[dataset.iloc[:,0] == outcome]
votes.append( [outcome, sum(sub_dataset['count'])] ) # REPLACE 'count' eventually
temp = [x[1] for x in votes]
decision_index = temp.index(max(temp))
decision = votes[decision_index][0]
print( 'final decision: ' + str(decision))
print('*'*50)
return # returns null-type
# Example
ex_df
print_decision_tree(ex_df)
<jupyter_output> branch on age: 30 - 60
branch on age: 30 - 60 branch on gender: F
branch on age: 30 - 60 branch on gender: F branch on year-income: 60k - 100k
final decision: Good
**************************************************
branch on age: 30 - 60 branch on gender: M
branch on age: 30 - 60 branch on gender: M branch on year-income: < 60k
final decision: Good
**************************************************
branch on age: < 30
branch on age: < 30 branch on year-income: < 60k
branch on age: < 30 branch on year-income: < 60k branch on gender: F
final decision: Fair
**************************************************
branch on age: < 30 branch on year-income: > 100k
branch on age: < 30 branch on year-income: > 100k branch on gender: F
final decision: Excellent
**************************************************
branch on age: < 30 branch on year-income: > 100k branch on gender: M
final decision: Good
**************************************************
branch on age: < 30 branch on year-in[...]<jupyter_text>### Evolving the function slightly
After the first pass, we're ready to amend some flaws. Right now I have set up a decision tree builder using recursion. It outputs a two item list –– the first is a list of 'verbose' decision branches (as strings) and the second is a condensed form of the decision tree as lists. Probably eventually I will rewrite the second form to either remove it entirely, or make its output more amenable for direct analysis. <jupyter_code>### Initial Pass
# still need to alter algorithm to prevent 'overgrowing' the tree when unnecessary
def decision_tree(dataset, condition_text='', decision_rules_human_readable=[], decision_rules_as_list=[]):
columns = list(dataset.columns)
possible_attributes = columns
possible_attributes = [possible_attributes[i] for i in range(1,len(possible_attributes)-1)] # remove first entry (class label) and last (count)
## len(list(set(dataset.iloc[:,0]))) == '#' of out omes in class label
boolean = not ( len(set(dataset.iloc[:,0])) == 1 ) # ie there are more than 1 class labels in the data still
if (len(possible_attributes) > 0 & boolean): ### can keep branching
# maximize info gain on possible attributes
info_gains = [info_gain(dataset, attribute) for attribute in possible_attributes]
branching_index = info_gains.index(max(info_gains)) + 1 # as first index for label wasn't included in info_gains
new_condition = str(columns[branching_index])
branching_attribute = dataset.iloc[:,branching_index]
for attribute_value in set(branching_attribute):
sub_dataset = dataset.loc[ dataset.iloc[:,branching_index] == attribute_value]
sub_dataset = sub_dataset.drop(columns[branching_index], axis=1)
rule = condition_text + '%.%' + new_condition + '%.%' + str(attribute_value)
decision_rules_as_list += [[ columns[branching_index], attribute_value ]]
# If it's a 'complete decision rule' add it to the list
if len(possible_attributes) == 1: decision_rules_human_readable.append( [rule], )
decision_tree( sub_dataset, rule, decision_rules_human_readable, decision_rules_as_list )
else: # no more attributes to split on, assign label via majority vote
votes = []
for outcome in set(dataset.iloc[:,0]):
sub_dataset = dataset.loc[dataset.iloc[:,0] == outcome]
votes.append( [outcome, sum(sub_dataset['count'])] ) # REPLACE 'count' eventually
temp = [x[1] for x in votes]
decision_index = temp.index(max(temp))
decision = votes[decision_index][0]
decision_rules_human_readable += [' final decision:' + str(decision)]
decision_rules_as_list += [[columns[0], decision]]
return [decision_rules_human_readable, decision_rules_as_list]<jupyter_output><empty_output><jupyter_text>Let's test building decision trees on our example dataset:<jupyter_code>trees = decision_tree(ex_df)
trees[0]<jupyter_output><empty_output><jupyter_text>### Breakdown and Accuracy
I wrote a quick tool that uses the 'verbose' stem of the decision tree builder to compute the accuracy of the decision rule numerically, as well as in fraction form (so that we can easily skim through effect size). I may rewrite it slightly in the future in order to either:
* perhaps use the list form output of the decision tree
* directly leave in effect size so that output can be sorted for highest accuracy *and* effect size simultaneously<jupyter_code>def accuracy(dataset, label):
correct_counts = sum( dataset.loc[ dataset.iloc[:, 0] == label ]['count'] )
total_counts = sum( dataset['count'] )
# if total_counts==0: print(dataset)
# print(correct_counts, total_counts)
acc = correct_counts / total_counts
return (acc, str(correct_counts) + '/' + str(total_counts))
def analyze(dataset, decision_rules_list, importance_filter=False):
result = []
columns = list(dataset.columns)
# print('length of decision list is ' + str(len(decision_rules_list)))
for i in range( int(len(decision_rules_list) / 2) ):
parsed = [x.split('%.%') for x in decision_rules_list[2*i]]
# get attributes from decision rule branch
# get subdataset satisfying parsed rules
conditions = parsed[0][1:] #throw away 0th term = a space
subdataset = dataset
for j in range( int(len(conditions)/2) ): # -1 to prevent inex to go 1 iteration too many (and pass in empty arg)
index = columns.index(conditions[2*j])
# print(i,j)
# print(conditions)
# print(columns[index], conditions[2*j+1])
subdataset = subdataset.loc[ subdataset.iloc[:,index] == conditions[2*j + 1]]
# print('8'*20)
decision = decision_rules_list[2*i+1].split(':')[1]
# print(decision)
# print(subdataset)
result.append(accuracy(subdataset, decision))
i = i + 2 # new rule starts every two entries
trees_and_accuracy = [ [decision_rules_list[2*i] + [decision_rules_list[2*i+1]] , result[i]] for i in range(len(result)) ]
# sorts by accuracy, decreasing. set importance threshold at say .1% of database size?
if importance_filter:
temp = sorted(trees_and_accuracy, key=lambda x: x[1], reverse = True)
final_info = [x for x in temp if (int(x[1][1][0]) > 15) ]
else: final_info = sorted(trees_and_accuracy, key=lambda x: x[1], reverse = True)
return final_info
### Outputs list of decision branches + accuracy
analyze(ex_df, trees[0])<jupyter_output><empty_output><jupyter_text># A Few More Examples
Let's see the algorithm in practice on a few more datasets and extend it beyond the 'toy' dataset ex_df we've used so far.
## 2. Working with a raw dataset from scratch
I first generate a (skewed) dataset using the label-pools from ex_df, and put it into the standard form my decision tree algorithm utilizes before building decision trees and analyzing the results:<jupyter_code>### Entering data as columns - pandas reads them as tuples though so they will have to be transformed
credit_ranking_data = ["Excellent","Good","Fair"]
age_data = ["> 60","30 - 60","< 30"]
gender_data = ["M","F"]
year_income_data = ["> 100k","60k - 100k","< 60k"]
# counts_data # I will build the counts
aggregate_attributes = [credit_ranking_data, age_data, gender_data, year_income_data]
### simulate_data:
import random
n = 10000
data = []
for i in range(n):
# to make it more interesting / realistic, I am going to skew it slightly
if i % 15 == 0:
data.append(["Excellent",
random.choice(age_data[0:2]),
"F",
random.choice(year_income_data[1:2])])
else: data.append( [random.choice(x) for x in aggregate_attributes] )
df = pd.DataFrame.from_records(data)
# ### Add column labels
cols = ["credit-ranking","age","gender","year-income"]#,"count"]
df.columns = cols
df.head() # just checking
<jupyter_output><empty_output><jupyter_text>### Compressing the raw dataset
I'm going to build the counts column and shrink the dataset in size. This helps make the algorithm scale since we don't have to access the entire dataset.<jupyter_code>compressed_df = df[ [not x for x in df.duplicated()] ]
# following is good for sorting but runs into a copy-setting warning, so I've just left it out for the moment
# compressed_df["credit-ranking"] = pd.Categorical(compressed_df['credit-ranking'], ["Excellent","Good","Fair"])
# sorts by variable, in order
compressed_df = compressed_df.sort_values(["credit-ranking", 'age', 'gender'])
print('length:', len(compressed_df))
compressed_df.head()
## Add counts:
# reminder about cols = columns
counts = []
for index, row in compressed_df.iterrows():
#count
sub_df = df
for i in range(len(cols)):
sub_df = sub_df.loc[ df[cols[i]] == row[i] ]
count = sum( sub_df.duplicated() )
counts.append(count + 1) # +1 since undercounting duplicates by 1 (removes the original comparison)
compressed_df['count'] = counts
print('total counts:' , sum(compressed_df['count'])) # had better sum to the original 10000
compressed_df[0:15] # first 15 terms
tree = decision_tree(compressed_df)
# tree[0] # stores the verbose decision trees
analyze(compressed_df, tree[0])[0:10]
<jupyter_output><empty_output><jupyter_text>Some interesting (and relatively weak) results --- unsurprising considering how much of the data was randomly generated. Lets wrap the compression procedure above in a function we can reuse:<jupyter_code>def structure_data(dataframe):
###
# Assumes a pandas dataframe
### Compress dataframe
compressed_df = dataframe[ [not x for x in dataframe.duplicated()] ]
columns = list(dataframe.columns)
# sorts by variable, in order
compressed_df = compressed_df.sort_values(columns)
# # debug
# print('length:', len(compressed_df))
# compressed_df.head()
### Add counts:
counts = []
for index, row in compressed_df.iterrows():
#count
sub_dataframe = dataframe
for i in range(len(columns)):
sub_dataframe = sub_dataframe.loc[ dataframe[columns[i]] == row[i] ]
count = sum( sub_dataframe.duplicated() )
counts.append(count + 1) # +1 since undercounting duplicates by 1 (removes the original comparison)
compressed_df['count'] = counts
# debug
print('total counts:' , sum(compressed_df['count']), 'dataset length:' , len(dataframe))
return compressed_df
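# Added sketch (same aggregation, not in the original notebook): pandas' groupby
# builds the same compressed table much faster than the row-by-row loop above.
def structure_data_groupby(dataframe):
    columns = list(dataframe.columns)
    out = dataframe.groupby(columns).size().reset_index(name='count')
    out = out.sort_values(columns)
    print('total counts:', sum(out['count']), 'dataset length:', len(dataframe))
    return out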
<jupyter_output><empty_output><jupyter_text>## 3. Working with a (slightly) larger dataset
The Food Mart dataset contains approximately 2000 transactions containing subsets of 100 different items - a simplified version of a small market transaction database TDB. Each row is a transaction, and a 1 in the tuple indicates the item was purchased, a 0 indicates it wasn't. One feature to notice in this dataset is the sparseness of the TDB --- most transactions will only contain a few of the 100 various products.<jupyter_code># Example 3
Food_Mart = pd.read_excel('FoodMart.xls')
print(Food_Mart.columns)
Food_Mart.head()<jupyter_output>Index(['Acetominifen', 'Anchovies', 'Aspirin', 'Auto Magazines', 'Bagels',
'Batteries', 'Beer', 'Bologna', 'Candles', 'Canned Fruit',
'Canned Vegetables', 'Cereal', 'Cheese', 'Chips', 'Chocolate',
'Chocolate Candy', 'Clams', 'Cleaners', 'Coffee', 'Cold Remedies',
'Computer Magazines', 'Conditioner', 'Cookies', 'Cooking Oil',
'Cottage Cheese', 'Crackers', 'Deli Meats', 'Deli Salads',
'Deodorizers', 'Dips', 'Donuts', 'Dried Fruit', 'Dried Meat', 'Eggs',
'Fashion Magazines', 'Flavored Drinks', 'French Fries', 'Fresh Chicken',
'Fresh Fish', 'Frozen Chicken', 'Frozen Vegetables', 'Gum', 'Hamburger',
'Hard Candy', 'Home Magazines', 'Hot Dogs', 'Ibuprofen', 'Ice Cream',
'Jam', 'Jelly', 'Juice', 'Lightbulbs', 'Maps', 'Milk', 'Mouthwash',
'Muffins', 'Nasal Sprays', 'Nuts', 'Oysters', 'Pancake Mix', 'Pancakes',
'Paper Dishes', 'Paper Wipes', 'Pasta', 'Peanut Butter',
'Personal Hygiene', 'Pizza', 'Plastic U[...]<jupyter_text>It is difficult to run the unoptimized compressor on the full dataset, so instead let us restrict our attention to a subset:
<jupyter_code>columns_subset = list(Food_Mart.columns)[35:65]
small_df = pd.DataFrame(data = [ Food_Mart[x] for x in columns_subset ]).T
# [
# Food_Mart['Cooking Oil'], Food_Mart['Eggs'],
# Food_Mart['Frozen Chicken'], Food_Mart['Frozen Vegetables'],
# Food_Mart['Pancake Mix'], Food_Mart['Sauces'],
# Food_Mart['Pot Scrubbers'], Food_Mart['Soup'],
# Food_Mart['Sour Cream'], Food_Mart['Spices'],
# Food_Mart['Sugar']]).T # T for transpose
df = structure_data(small_df)
df.head()<jupyter_output>total counts: 2127 dataset length: 2127
<jupyter_text>Let's suppose we also want to predict whether French Fries are purchased –– we can reorder the columns as well:<jupyter_code>### Reorder
old_cols = df.columns.tolist()
index_salads = old_cols.index('French Fries')
cols = [old_cols[index_salads]] + [ old_cols[i] for i in range(len(old_cols)) if i != index_salads ]
fm_df = df[cols]
fm_df.head()
fm_tree = decision_tree(fm_df,'',[],[]) # have to overwrite stored values from previous use
fm_tree[0][0:5]
analyze(fm_df, fm_tree[0])[0:5]
<jupyter_output>/anaconda3/lib/python3.6/site-packages/pandas/core/ops.py:798: FutureWarning: elementwise comparison failed; returning scalar instead, but in the future will perform elementwise comparison
result = getattr(x, name)(y)
|
no_license
|
/Growing Decision Trees Implementation.ipynb
|
Kodyak/Growing-Decision-Trees
| 16 |
<jupyter_start><jupyter_text>## INF1608 - Numerical Analysis - 2017.1
## Department of Informatics - PUC-Rio
## Prof. Hélio Lopes - [email protected]
## http://www.inf.puc-rio.br/~lopes
# Solved exercises
R1) Write a function that checks whether an nxn matrix A is strictly diagonally dominant.
Definition: A matrix A (nxn) is strictly diagonally dominant if for every row i:
$$|A_{i,i}| > \sum_{j=1,\ j\ne i}^n |A_{i,j}|$$
R2) Implement the Jacobi method for solving a linear system of equations. Define a stopping criterion for your method and explain it.
https://pt.wikipedia.org/wiki/M%C3%A9todo_de_Jacobi
R3) Implement the Gauss-Seidel method for solving a linear system of equations. Define a stopping criterion for your method and explain it.
https://pt.wikipedia.org/wiki/M%C3%A9todo_de_Gauss-Seidel
## Problem Set 4
1) Implement the Conjugate Gradient method for solving a linear system of equations. Define a stopping criterion for your method and explain it.
https://en.wikipedia.org/wiki/Conjugate_gradient_method
2) Test all of these algorithms thoroughly!## SolutionR1) Write a function that checks whether an nxn matrix A is strictly diagonally dominant.
Definition: A matrix A (nxn) is strictly diagonally dominant if for every row i:
$$|A_{i,i}| > \sum_{j=1,\ j\ne i}^n |A_{i,j}|$$
<jupyter_code>import numpy as np
import math
def isdiagonaldominat(A):
n = len(A)
for i in range(n):
soma = 0.0
for j in range(n):
if (i != j):
a = A[i,j]
soma += math.fabs(a)
if (math.fabs(A[i,i]) <= soma):
return False
return True
A = np.matrix([[-2.0,1.0,0.0],[1.0,-2.0,1.0],[0.0,1.0,-2.0]])
print(A)
print(isdiagonaldominat(A))
<jupyter_output>[[-2. 1. 0.]
[ 1. -2. 1.]
[ 0. 1. -2.]]
False
<jupyter_text>R2) Implement the Jacobi method for solving a linear system of equations. Define a stopping criterion for your method and explain it. https://pt.wikipedia.org/wiki/M%C3%A9todo_de_Jacobi<jupyter_code>import numpy as np
def jacobi(A,b,x0,N,TOL):
    nlin = len(A); # number of rows
    x = np.zeros(nlin); # create x
    # iterations
k = 1;
while (k <= N):
        # Jacobi iteration
for i in range(nlin):
x[i] = 0.0;
for j in range(i):
x[i] = x[i] - A[i,j]*x0[j]
for j in range(i+1,nlin):
x[i] = x[i] - A[i,j]*x0[j]
x[i] = (x[i] + b[i])/A[i,i]
        # stopping criterion
if (np.linalg.norm(x-x0) < TOL):
return x,k
        # prepare the next iteration
k = k+1;
x0 = np.copy(x)
return x0,k
A = np.matrix([[2.0,1.0],[5.0,7.0]])
b = np.array([11.0,13.0])
guess = np.array([1.0,1.0])
sol,n = jacobi(A,b,guess,100,0.000001)
print "A:"
print(A)
print "b:"
print(b)
print "x:"
print(sol)
print "||Ax-b||:"
print(np.linalg.norm(A.dot(sol)-b))
print "Numero de iteracoes:"
print(n)
<jupyter_output>A:
[[ 2. 1.]
[ 5. 7.]]
b:
[ 11. 13.]
x:
[ 7.1111107 -3.22222137]
||Ax-b||:
3.92347242466e-06
Number of iterations:
31
<jupyter_text>R3) Implement the Gauss-Seidel method for solving a linear system of equations. Define a stopping criterion for your method and explain it. https://pt.wikipedia.org/wiki/M%C3%A9todo_de_Gauss-Seidel<jupyter_code>import numpy as np
def gaussseidel(A,b,x0,N,TOL):
    nlin = len(A); # number of rows
    x = np.zeros(nlin); # create x
    # iterations
k = 1;
while (k <= N):
        # Gauss-Seidel iteration
for i in range(nlin):
s = 0.0;
for j in range(i):
s = s + A[i,j]*x[j]
for j in range(i+1,nlin):
s = s + A[i,j]*x0[j]
x[i] = (b[i] - s)/A[i,i]
        # stopping criterion
if (np.linalg.norm(x-x0) < TOL):
return x,k
        # prepare the next iteration
k = k+1;
x0 = np.copy(x)
return x0,k
A = np.matrix([[2.0,1.0],[5.0,7.0]])
b = np.array([11.0,13.0])
guess = np.array([1.0,1.0])
sol,n = gaussseidel(A,b,guess,100,0.000001)
print "A:"
print(A)
print "b:"
print(b)
print "x:"
print(sol)
print "||Ax-b||:"
print(np.linalg.norm(A.dot(sol)-b))
print "Numero de iteracoes:"
print(n)
<jupyter_output>A:
[[ 2. 1.]
[ 5. 7.]]
b:
[ 11. 13.]
x:
[ 7.1111107 -3.22222193]
||Ax-b||:
5.3245767262e-07
Number of iterations:
16
<jupyter_text>1) Implement the Conjugate Gradient method for solving a linear system of equations. Define a stopping criterion for your method and explain it. https://en.wikipedia.org/wiki/Conjugate_gradient_method
2) Test all of these algorithms thoroughly!<jupyter_code>import numpy as np
def conjugateGradient(A,b,x0,N,TOL):
n = len(A)
x = np.zeros(n)
k = 0
r0 = np.copy(b - A.dot(x0))
p = np.copy(r0)
while(k < N):
alpha = r0.dot(r0)/p.dot(A.dot(p))
x = np.copy(x0 + alpha * p)
r = np.copy(r0 - alpha * A.dot(p))
if (np.linalg.norm(r) < TOL):
return x, k
p = np.copy(r + p * r.dot(r) / r0.dot(r0))
        # next iteration
k = k + 1
x0 = np.copy(x)
r0 = np.copy(r)
return x0, k
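# Added sketch: for a small SPD system, the CG solution should match numpy's direct solver.
# A_chk = np.array([[4.0, 1.0], [1.0, 3.0]]); b_chk = np.array([1.0, 2.0])
# x_cg, _ = conjugateGradient(A_chk, b_chk, np.zeros(2), 100, 1e-10)
# print(np.allclose(x_cg, np.linalg.solve(A_chk, b_chk)))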
# GENERATE A SYMMETRIC, POSITIVE-DEFINITE, DIAGONALLY DOMINANT MATRIX FOR TESTING
D = np.array([[13.,0.,0.,0.],[0.,2.,0.,0.],[0.,0.,5.,0.],[0.,0.,0.,11.]]) # Matriz diagal de autovalores positivos
P = np.array([[1.,1.,1.,1.],[1.,1.,-1.,-1.],[-1.,1.,-1.,1.],[1.,-1.,-1.,1.]]) # Matriz de colunas ortogonais
P = P/np.sqrt(4.) # Normalize the columns to make P orthogonal
Pt = P.transpose()
Pinv = np.linalg.inv(P)
print ("Pt - Pinv: (Deve ser 0 para P ser ortogonal)\n" + str(Pt-Pinv)) # Teste se P é ortogonal
A = Pt.dot(D.dot(P))# A = P^(-1)DP
#A = np.array([[1577.0,3460.0,753.0],[3460.0,37052.0,4278.0],[753.0,4278.0,594.0]])
b = np.array([21.0,13.0,12.0,11.0])
chute = np.array([1.0,1.0,1.0,1.0])
#np.linalg.cholesky(A) # <- Just to confirm that the matrix is positive definite
sol,n = conjugateGradient(A,b,chute,100,0.000001)
print "A: (Matriz gerada em bloco de código acima simérica, positiva definida)"
print(A)
print "b:"
print(b)
print "x:"
print(sol)
print "||Ax-b||:"
print(np.linalg.norm(A.dot(sol)-b))
print "Numero de iteracoes:"
print(n)<jupyter_output>Pt - Pinv: (Deve ser 0 para P ser ortogonal)
[[ 0. 0. 0. 0.]
[ 0. 0. 0. 0.]
[ 0. 0. 0. 0.]
[ 0. 0. 0. 0.]]
A: (matrix generated in the code block above; symmetric, positive definite)
[[ 7.75 -0.25 1.25 4.25]
[-0.25 7.75 4.25 1.25]
[ 1.25 4.25 7.75 -0.25]
[ 4.25 1.25 -0.25 7.75]]
b:
[ 21. 13. 12. 11.]
x:
[ 3.08024476 1.86206294 0.01206294 -0.56975524]
||Ax-b||:
0.0
Number of iterations:
3
|
no_license
|
/Exercicios lista metodos iterativos- Rafael.ipynb
|
rafarubim/analise-numerica
| 4 |
<jupyter_start><jupyter_text># Sample Jupyter Notebook to call COVID data provider
<jupyter_code>func_url = 'http://localhost:7071/api/covidata'
import pandas as pd
from ipywidgets import Image
import requests
df = pd.read_csv(func_url+"?country=US")
df.head()
Image(value=requests.get(func_url+"?country=US&output=plot").content)<jupyter_output><empty_output>
|
no_license
|
/notebooks/sample.ipynb
|
shwars/e2e_covid
| 1 |
<jupyter_start><jupyter_text>## Image pyramids
Take a look at how downsampling with image pyramids works.
First, we'll read in an image then construct and display a few layers of an image pyramid.<jupyter_code>import numpy as np
import matplotlib.pyplot as plt
import cv2
%matplotlib inline
# Read in the image
image = cv2.imread('images/rainbow_flag.jpg')
# Change color to RGB (from BGR)
image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
plt.imshow(image)
level_1 = cv2.pyrDown(image)
level_2 = cv2.pyrDown(level_1)
level_3 = cv2.pyrDown(level_2)
# Display the images
f, (ax1,ax2,ax3,ax4) = plt.subplots(1, 4, figsize=(20,10))
ax1.set_title('original')
ax1.imshow(image)
ax2.imshow(level_1)
ax2.set_xlim([0, image.shape[1]])
ax2.set_ylim([0, image.shape[0]])
ax3.imshow(level_2)
ax3.set_xlim([0, image.shape[1]])
ax3.set_ylim([0, image.shape[0]])
ax4.imshow(level_3)
ax4.set_xlim([0, image.shape[1]])
ax4.set_ylim([0, image.shape[0]])
<jupyter_output><empty_output>
|
permissive
|
/1_4_Feature_Vectors/1. Image Pyramids.ipynb
|
svedagiriml/CVND_Exercises
| 1 |
<jupyter_start><jupyter_text># Lesson 1: Introduction to Deep Learning with PyTorch
- [@AlfredoCanziani](https://twitter.com/alfredocanziani)
- [@GokuMohandas](https://twitter.com/GokuMohandas)### Create the data<jupyter_code>import numpy as np
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers
from sklearn.model_selection import train_test_split
import matplotlib.pyplot as plt
import random
import math
seed=12345
random.seed(seed)
# torch.manual_seed(seed)
tf.random.set_seed(seed)
N = 1000 # num_samples_per_class
D = 2 # dimensions
C = 3 # num_classes
H = 100 # num_hidden_units
X = []
y = []
for i in range(C):
index = 0
r = np.linspace(0, 1, N)
t = np.linspace(
i * 2 * math.pi / C,
(i + 2) * 2 * math.pi / C,
N
    ) + np.random.randn(N) * 0.1  # per-point angular noise (vector of length N)
for ix in range(N * i, N * (i + 1)):
X.append(r[index] * np.array([math.sin(t[index]), math.cos(t[index])]))
y.append(i)
index += 1
X = tf.stack(X)
y = tf.stack(y)
print("SHAPES:")
print("-------------------")
print("X:", tuple(X.shape))
print("y:", tuple(y.shape))
def plot_data(X, y, d=.0, auto=False):
"""
Plot the data.
"""
plt.clf()
plt.scatter(X[:, 0], X[:, 1], c=y, s=20, cmap=plt.cm.Spectral)
plt.axis('square')
plt.axis((-1.1, 1.1, -1.1, 1.1))
if auto is True: plt.axis('equal')
# plt.savefig('spiral{:.2f}.png'.format(d))
# Create the data
plot_data(X.numpy(), y.numpy())
def plot_model(X, y, model, e=.0, auto=False):
"""
Plot the model from torch weights.
"""
X = X.numpy()
    y = y.numpy()  # no trailing comma, otherwise y becomes a 1-tuple and breaks plt.scatter
w1 = model.layers[0].get_weights()[0]
b1 = model.layers[0].get_weights()[1]
w2 = model.layers[1].get_weights()[0]
b2 = model.layers[1].get_weights()[1]
h = 0.01
x_min, x_max = (-1.1, 1.1)
y_min, y_max = (-1.1, 1.1)
if auto is True:
x_min, x_max = X[:, 0].min() - 1, X[:, 0].max() + 1
y_min, y_max = X[:, 1].min() - 1, X[:, 1].max() + 1
xx, yy = np.meshgrid(np.arange(x_min, x_max, h),
np.arange(y_min, y_max, h))
Z = np.dot(np.maximum(0, np.dot(np.c_[xx.ravel(), yy.ravel()], w1) + b1), w2) + b2
Z = np.argmax(Z, axis=1)
Z = Z.reshape(xx.shape)
fig = plt.figure()
plt.contourf(xx, yy, Z, cmap=plt.cm.Spectral, alpha=0.3)
plt.scatter(X[:, 0], X[:, 1], c=y, s=40, cmap=plt.cm.Spectral)
plt.axis((-1.1, 1.1, -1.1, 1.1))
plt.axis('square')
if auto is True:
plt.axis((xx.min(), xx.max(), yy.min(), yy.max()))
# plt.savefig('train{:03.2f}.png'.format(e))<jupyter_output><empty_output><jupyter_text>### Linear model<jupyter_code>learning_rate = 1e-3
lambda_l2 = 1e-5
def linear_model(D_in, H, D_out):
model = keras.Sequential([
layers.Dense(H, activation='linear', input_shape=D_in),
layers.Dense(D_out, activation='linear'),
])
optimizer = tf.keras.optimizers.SGD(learning_rate=learning_rate)
model.compile(loss='mse',
optimizer=optimizer)
return model
model_1 = linear_model(X.shape[1:], H, C)  # pass the per-sample feature shape, not the full (N, D) shape
model_1.fit(X, y, epochs=100)
model_1.summary()
# Plot trained model
plot_model(X, y, model_1)<jupyter_output><empty_output><jupyter_text>### Two-layered network<jupyter_code>learning_rate = 1e-3
lambda_l2 = 1e-5
def two_layer_network(D_in, H, D_out):
model = keras.Sequential([
layers.Dense(H, activation='relu', input_shape=D_in),
layers.Dense(D_out),
])
optimizer = tf.keras.optimizers.Adam(learning_rate=learning_rate)
    model.compile(loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),  # 3 classes, integer labels, logit outputs
optimizer=optimizer)
return model
model_2 = two_layer_network(X.shape[1:], H, C)
model_2.fit(X,y, epochs = 100)
model_2.summary()
# Plot trained model
plot_model(X, y, model_2)<jupyter_output><empty_output><jupyter_text>### Proper training procedure#### Create datasets<jupyter_code>split_ratio = 0.8 # train-test split
num_epochs = 100
batch_size = 64
log_every = 25
training = random.sample(range(X.shape[0]), int(X.shape[0] * split_ratio))
testing = [i for i in range(X.shape[0]) if not i in training]
X_train = tf.convert_to_tensor(X.numpy()[training,:])
X_test = tf.convert_to_tensor(X.numpy()[testing,:])
y_train = tf.convert_to_tensor(y.numpy()[training])
y_test = tf.convert_to_tensor(y.numpy()[testing])
X_train.shape, y_train.shape<jupyter_output><empty_output><jupyter_text>#### Training<jupyter_code>learning_rate = 1e-3
lambda_l2 = 1e-5
dropout_p = 0.1
decay_rate = 0.9999
max_grad_norm = 5.0
def customized_network(D_in, H, D_out):
model = keras.Sequential([
layers.Dense(H, activation='relu', input_shape=D_in),
        layers.Dropout(dropout_p),  # dropout before the output layer, not after the softmax
        layers.Dense(D_out, activation='softmax'),
])
optimizer = tf.keras.optimizers.Adam(learning_rate=learning_rate)
    model.compile(loss='sparse_categorical_crossentropy',  # 3 classes with integer labels and softmax outputs
optimizer=optimizer)
return model
model_3 = customized_network(X_train.shape[1:], H, C)
model_3.fit(X_train,y_train, epochs = 100)
model_3.summary()
# Plot trained model
plot_model(X, y, model_3)<jupyter_output><empty_output>
|
no_license
|
/HW1/Pytorch and Tensorflow/lesson1_Tensorflow.ipynb
|
tunghoangt/DeepNeuralNet
| 5 |
<jupyter_start><jupyter_text># Preparing the data in HDFS<jupyter_code>!ls
!hdfs dfs -ls /
!hdfs dfs -mkdir /semantix
!hdfs dfs -ls /
!ls -lha
!hdfs dfs -put NASA* /semantix
!hdfs dfs -ls /semantix<jupyter_output>Found 2 items
-rw-r--r-- 1 root hdfs 167813770 2019-08-17 20:45 /semantix/NASA_access_log_Aug95
-rw-r--r-- 1 root hdfs 205242368 2019-08-17 20:45 /semantix/NASA_access_log_Jul95
<jupyter_text># Loading the data into Spark<jupyter_code>df1 = spark.read.text("/semantix/NASA_access_log_Aug95")
df1.show(10)
df2 = spark.read.text("/semantix/NASA_access_log_Jul95")
df2.show(10)<jupyter_output>+--------------------+
| value|
+--------------------+
|199.72.81.55 - - ...|
|unicomp6.unicomp....|
|199.120.110.21 - ...|
|burger.letters.co...|
|199.120.110.21 - ...|
|burger.letters.co...|
|burger.letters.co...|
|205.212.115.106 -...|
|d104.aa.net - - [...|
|129.94.144.152 - ...|
+--------------------+
only showing top 10 rows
<jupyter_text>## Verifying the load<jupyter_code>df1.count()
df2.count()<jupyter_output><empty_output><jupyter_text>## Joining the files<jupyter_code>df = df1.union(df2)
df.count()<jupyter_output><empty_output><jupyter_text># Data cleaning and dataframe organization<jupyter_code>df_table = df.select(
F.regexp_extract('value', r'^([^\s]+\s)', 1).alias('HOST'),
F.regexp_extract('value', r'^.*\[(\d\d/\w{3}/\d{4}:\d{2}:\d{2}:\d{2} -\d{4})]', 1).alias('TIMESTAMP'),
F.regexp_extract('value', r'^.*"\w+\s+([^\s]+)\s+HTTP.*"', 1).alias('REQUISITION'),
F.regexp_extract('value', r'^.*"\s+([^\s]+)', 1).alias('STATUS'),
F.regexp_extract('value', r'^.*\s+(\d+)$', 1).alias('SIZE')
)
df_table.show()
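# (Added illustration, not part of the original solution) the same regexes checked with plain
# Python re on one representative NASA access-log line (reconstructed here for clarity only):
import re
sample = '199.72.81.55 - - [01/Jul/1995:00:00:01 -0400] "GET /history/apollo/ HTTP/1.0" 200 6245'
print(re.search(r'^([^\s]+\s)', sample).group(1))                                        # HOST
print(re.search(r'^.*\[(\d\d/\w{3}/\d{4}:\d{2}:\d{2}:\d{2} -\d{4})]', sample).group(1))  # TIMESTAMP
print(re.search(r'^.*"\w+\s+([^\s]+)\s+HTTP.*"', sample).group(1))                       # REQUISITION
print(re.search(r'^.*"\s+([^\s]+)', sample).group(1))                                    # STATUS
print(re.search(r'^.*\s+(\d+)$', sample).group(1))                                       # SIZE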
df_table.count()<jupyter_output><empty_output><jupyter_text>## Date handling and type casting<jupyter_code>df_table = df_table\
.where(F.trim(df_table.TIMESTAMP) != "")\
.select(df_table.HOST,
F.when(df_table.TIMESTAMP.like("%Aug%"),\
F.from_unixtime(F.unix_timestamp(\
F.regexp_replace(df_table.TIMESTAMP, "Aug", "08"), 'dd/MM/yyyy'))
)\
.otherwise(F.from_unixtime(F.unix_timestamp(
F.regexp_replace(df_table.TIMESTAMP, "Jul", "07"), 'dd/MM/yyyy')))\
.cast("date")\
.alias("DATE"),
df_table.REQUISITION, df_table.STATUS.cast('integer'), df_table.SIZE.cast('integer')
)
df_table.show()<jupyter_output>+--------------------+----------+--------------------+------+-----+
| HOST| DATE| REQUISITION|STATUS| SIZE|
+--------------------+----------+--------------------+------+-----+
| in24.inetnebr.com |1995-08-01|/shuttle/missions...| 200| 1839|
| uplherc.upl.com |1995-08-01| /| 304| 0|
| uplherc.upl.com |1995-08-01|/images/ksclogo-m...| 304| 0|
| uplherc.upl.com |1995-08-01|/images/MOSAIC-lo...| 304| 0|
| uplherc.upl.com |1995-08-01|/images/USA-logos...| 304| 0|
|ix-esc-ca2-07.ix....|1995-08-01|/images/launch-lo...| 200| 1713|
| uplherc.upl.com |1995-08-01|/images/WORLD-log...| 304| 0|
|slppp6.intermind....|1995-08-01|/history/skylab/s...| 200| 1687|
|piweba4y.prodigy....|1995-08-01|/images/launchmed...| 200|11853|
|slppp6.intermind....|1995-08-01|/history/skylab/s...| 200| 9202|
|slppp6.intermind....|1995-08-01|/images/ksclogosm...| 200| 3635|
|ix-esc-ca2-07.ix....|1995-08-01|/history/apollo[...]<jupyter_text>## Questions### 1. Number of unique hosts.<jupyter_code>df_table.select(df_table.HOST).distinct().count()<jupyter_output><empty_output><jupyter_text>### 2. The total number of 404 errors.<jupyter_code>df_table.where(df_table.STATUS == 404).count()<jupyter_output><empty_output><jupyter_text>### 3. The 5 URLs that caused the most 404 errors.<jupyter_code>df_table.select(df_table.HOST,df_table.STATUS)\
.where(df_table.STATUS == 404)\
.groupBy(df_table.HOST)\
.count()\
.withColumnRenamed("count", "COUNT_404")\
.orderBy("COUNT_404", ascending = False)\
.select("HOST", "COUNT_404")\
.show(5)<jupyter_output>+--------------------+---------+
| HOST|COUNT_404|
+--------------------+---------+
|hoohoo.ncsa.uiuc....| 251|
|piweba3y.prodigy....| 157|
|jbiagioni.npt.nuw...| 132|
|piweba1y.prodigy....| 114|
|www-d4.proxy.aol....| 91|
+--------------------+---------+
only showing top 5 rows
<jupyter_text>### 4. Number of 404 errors per day.<jupyter_code>df_table.select(df_table.DATE, df_table.STATUS)\
.where(df_table.STATUS == 404)\
.groupBy(df_table.DATE)\
.count()\
.withColumnRenamed("count", "COUNT_404")\
.orderBy("DATE")\
.show(30)<jupyter_output>+----------+---------+
| DATE|COUNT_404|
+----------+---------+
|1995-07-01| 316|
|1995-07-02| 291|
|1995-07-03| 474|
|1995-07-04| 359|
|1995-07-05| 497|
|1995-07-06| 640|
|1995-07-07| 570|
|1995-07-08| 302|
|1995-07-09| 348|
|1995-07-10| 398|
|1995-07-11| 471|
|1995-07-12| 471|
|1995-07-13| 532|
|1995-07-14| 413|
|1995-07-15| 254|
|1995-07-16| 257|
|1995-07-17| 406|
|1995-07-18| 465|
|1995-07-19| 639|
|1995-07-20| 428|
|1995-07-21| 334|
|1995-07-22| 192|
|1995-07-23| 233|
|1995-07-24| 328|
|1995-07-25| 461|
|1995-07-26| 336|
|1995-07-27| 336|
|1995-07-28| 94|
|1995-08-01| 243|
|1995-08-03| 304|
+----------+---------+
only showing top 30 rows
<jupyter_text>### 5. The total number of bytes returned.<jupyter_code>df_table.select(F.sum(df_table.SIZE).alias("TOTAL")).show()
gb=1073741824
size_table = df_table.select(F.sum(df_table.SIZE).alias("TOTAL"))
size_table.select("TOTAL").withColumn('TOTAL_GB', (size_table.TOTAL/gb).cast('integer')).show()<jupyter_output>+-----------+--------+
| TOTAL|TOTAL_GB|
+-----------+--------+
|65524314915| 61|
+-----------+--------+
|
no_license
|
/ProcessoSemantix.ipynb
|
atworkdante/semantix
| 11 |
<jupyter_start><jupyter_text># Tensor Transformations<jupyter_code>from __future__ import print_function
import torch
import numpy as np
from datetime import date
date.today()
torch.__version__
np.__version__<jupyter_output><empty_output><jupyter_text>NOTE on notation
* _x, _y, _z, ...: NumPy 0-d or 1-d arrays
* _X, _Y, _Z, ...: NumPy 2-d or higer dimensional arrays
* x, y, z, ...: 0-d or 1-d tensors
* X, Y, Z, ...: 2-d or higher dimensional tensors## CastingQ1. Convert a torch Tensor `X` to numpy array `_X`.<jupyter_code>X = torch.randn(3,2,1)
_X = X.numpy()
print(type(_X))<jupyter_output><class 'numpy.ndarray'>
<jupyter_text>Q2. Convert a torch Tensor X to a python list Y.<jupyter_code>X = torch.randn(3,2,1)
Y = X.tolist()
print(type(Y))<jupyter_output><class 'list'>
<jupyter_text>Q3. Convert the data type of X to float32.<jupyter_code>X = torch.IntTensor(3, 2).fill_(0)
print("X=>", X.type())
Y = X.float()
print("Y=>", Y.type())<jupyter_output>X=> torch.IntTensor
Y=> torch.FloatTensor
<jupyter_text>Q4. Convert the data type of X to int32.<jupyter_code>X = torch.LongTensor(3, 2).fill_(0)
print("X=>", X.type())
Y = X.int()
print("Y=>", Y.type())<jupyter_output>X=> torch.LongTensor
Y=> torch.IntTensor
<jupyter_text>## Size and ResizeQ5. Get the size and type of X.<jupyter_code>X = torch.randn(3, 2, 1)
y = X.shape
tp = X.type()
print("size=", y)
print("type=", tp)
<jupyter_output>size= torch.Size([3, 2, 1])
type= torch.FloatTensor
<jupyter_text>Q6. Get the total number of elements in X.<jupyter_code>X = torch.randn(3, 2, 1)
y = X.numel()
print(y)
<jupyter_output>6
<jupyter_text>Q7. Get the number of dimensions of X.<jupyter_code>X = torch.randn(3, 2, 1)
y = X.ndimension()
print(y)
<jupyter_output>3
<jupyter_text>Q8. Resize X to (10, 3).<jupyter_code>X = torch.ones(5, 6)
Y = X.resize_(10, 3)
print(Y.shape)<jupyter_output>torch.Size([10, 3])
<jupyter_text>Q9. Remove all the dimensions of size 1 in X.<jupyter_code>X = torch.randn(10, 10, 1, 1)
Y = X.squeeze()
print(Y.size())
print(X.size())<jupyter_output>torch.Size([10, 10])
torch.Size([10, 10, 1, 1])
<jupyter_text>Q10. Remove only the third dimension in X.<jupyter_code>X = torch.randn(10, 10, 1, 1)
Y = X.squeeze(2)
# ??torch.squeeze
print(Y.size())<jupyter_output>torch.Size([10, 10, 1])
<jupyter_text>Q11. Add a dimension of 1 at the end of X.<jupyter_code>X = torch.ones(10, 10, 3)
Y = X.unsqueeze(-1)
print(Y.size())<jupyter_output>torch.Size([10, 10, 3, 1])
<jupyter_text>## Indexing, Slicing, Joining, Mutating OpsQ12. Return the columns of indices y.<jupyter_code>X = torch.arange(1, 22).resize_(3, 7)
print(X)
y = torch.LongTensor([0, 2, 4, 5])
print(X.index_select(1, y))
??torch.index_select<jupyter_output>
1 2 3 4 5 6 7
8 9 10 11 12 13 14
15 16 17 18 19 20 21
[torch.FloatTensor of size 3x7]
1 3 5 6
8 10 12 13
15 17 19 20
[torch.FloatTensor of size 3x4]
<jupyter_text>Q13. Apply `mask` to `x`.<jupyter_code>x = torch.arange(1, 11)
print("x=", x)
mask = x.gt(5)
print("masked=", x.masked_select(mask))
<jupyter_output>x=
1
2
3
4
5
6
7
8
9
10
[torch.FloatTensor of size 10]
masked=
6
7
8
9
10
[torch.FloatTensor of size 5]
<jupyter_text>Q14. Split X into 5 same-sized tensors along the second dimension.<jupyter_code>X = torch.arange(1, 11).resize_(2, 5)
print("X=", X)
Ys = torch.chunk(X, 5, 1)
print("Chunks=", Ys)<jupyter_output>X=
1 2 3 4 5
6 7 8 9 10
[torch.FloatTensor of size 2x5]
Chunks= (
1
6
[torch.FloatTensor of size 2x1]
,
2
7
[torch.FloatTensor of size 2x1]
,
3
8
[torch.FloatTensor of size 2x1]
,
4
9
[torch.FloatTensor of size 2x1]
,
5
10
[torch.FloatTensor of size 2x1]
)
<jupyter_text>Q16. Remove the second dimension of X and return all resulting slices.<jupyter_code>X = torch.arange(1, 11).resize_(2, 5)
print("X=", X)
ys = torch.unbind(X, 1)
print("Sliced tensors=", ys)<jupyter_output>X=
1 2 3 4 5
6 7 8 9 10
[torch.FloatTensor of size 2x5]
Sliced tensors= (
1
6
[torch.FloatTensor of size 2]
,
2
7
[torch.FloatTensor of size 2]
,
3
8
[torch.FloatTensor of size 2]
,
4
9
[torch.FloatTensor of size 2]
,
5
10
[torch.FloatTensor of size 2]
)
<jupyter_text>Q17. Stack x, y, and z vertically.<jupyter_code>x = torch.Tensor([1, 4])
y = torch.Tensor([2, 5])
z = torch.Tensor([3, 6])
O = torch.stack((x, y, z))
print(O)
<jupyter_output>
1 4
2 5
3 6
[torch.FloatTensor of size 3x2]
<jupyter_text>Q18. Repeat X 1 and 2 times along each dimension .<jupyter_code>X = torch.arange(1, 7).resize_(2, 3)
print("X=", X)
print(X.dim())
Y = X.repeat(1, 2)
print("Repeated=", Y)
<jupyter_output>X=
1 2 3
4 5 6
[torch.FloatTensor of size 2x3]
2
Repeated=
1 2 3 1 2 3
4 5 6 4 5 6
[torch.FloatTensor of size 2x6]
<jupyter_text>Q19. Concatenate X and Y along the first dimension.<jupyter_code>X = torch.Tensor([[1, 2, 3], [4, 5, 6]])
Y = torch.Tensor([[7, 8, 9], [10, 11, 12]])
Z = torch.cat((X, Y), dim=0)
print(Z)<jupyter_output>
1 2 3
4 5 6
7 8 9
10 11 12
[torch.FloatTensor of size 4x3]
<jupyter_text>Q20. Swap the first and second dimension of X.<jupyter_code>X = torch.randn(3, 2)
print("X=", X)
Y = X.t()
print("Transposed=", Y)<jupyter_output>X=
0.0214 -0.0313
-0.7030 -0.4384
-2.0496 -0.1647
[torch.FloatTensor of size 3x2]
Transposed=
0.0214 -0.7030 -2.0496
-0.0313 -0.4384 -0.1647
[torch.FloatTensor of size 2x3]
<jupyter_text>Q21. Permute X's dimensions such that the new tensor has shape (3, 1, 2).<jupyter_code>X = torch.ones(1, 2, 3)
Y = X.permute(2, 0, 1)
print(Y.size())
<jupyter_output>torch.Size([3, 1, 2])
<jupyter_text>## SortingQ22. Sort X along the second dimension.<jupyter_code>X = torch.Tensor([[1,4],[3,1]])
print("X=", X)
sorted_tensor, sorted_indices = torch.sort(X, dim=1)
print("sorted tensor=", sorted_tensor)
print("sorted indices=", sorted_indices)
<jupyter_output>X=
1 4
3 1
[torch.FloatTensor of size 2x2]
sorted tensor=
1 4
1 3
[torch.FloatTensor of size 2x2]
sorted indices=
0 1
1 0
[torch.LongTensor of size 2x2]
<jupyter_text>## CountingQ23. Get the indices of all nonzero elements in X.<jupyter_code>X = torch.Tensor([[0,1,7,0,0],[3,0,0,2,19]])
y = torch.nonzero(X)
print(y)
<jupyter_output>
0 1
0 2
1 0
1 3
1 4
[torch.LongTensor of size 5x2]
|
permissive
|
/0-libraries/pytorch-exercises/Chapter1_Tensors/1-tensor-transformations.ipynb
|
Christomesh/deep-learning-implementations
| 23 |
<jupyter_start><jupyter_text>##### Copyright 2019 The TensorFlow Authors.<jupyter_code>#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.<jupyter_output><empty_output><jupyter_text># TensorFlow 2 quickstart for beginners
This short introduction uses [Keras](https://www.tensorflow.org/guide/keras/overview) to:
1. Build a neural network that classifies images.
2. Train this neural network.
3. And, finally, evaluate the accuracy of the model.This is a [Google Colaboratory](https://colab.research.google.com/notebooks/welcome.ipynb) notebook file. Python programs are run directly in the browser—a great way to learn and use TensorFlow. To follow this tutorial, run the notebook in Google Colab by clicking the button at the top of this page.
1. In Colab, connect to a Python runtime: At the top-right of the menu bar, select *CONNECT*.
2. Run all the notebook code cells: Select *Runtime* > *Run all*.Download and install TensorFlow 2. Import TensorFlow into your program:
Note: Upgrade `pip` to install the TensorFlow 2 package. See the [install guide](https://www.tensorflow.org/install) for details.<jupyter_code>import tensorflow as tf<jupyter_output><empty_output><jupyter_text>Load and prepare the [MNIST dataset](http://yann.lecun.com/exdb/mnist/). Convert the samples from integers to floating-point numbers:<jupyter_code>mnist = tf.keras.datasets.mnist
(x_train, y_train), (x_test, y_test) = mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0<jupyter_output>Downloading data from https://storage.googleapis.com/tensorflow/tf-keras-datasets/mnist.npz
8192/11490434 [..............................] - ETA: 0s<jupyter_text>Build the `tf.keras.Sequential` model by stacking layers. Choose an optimizer and loss function for training:<jupyter_code>model = tf.keras.models.Sequential([
tf.keras.layers.Flatten(input_shape=(28, 28)),
tf.keras.layers.Dense(128, activation='relu'),
tf.keras.layers.Dropout(0.2),
tf.keras.layers.Dense(10)
])<jupyter_output><empty_output><jupyter_text>For each example the model returns a vector of "[logits](https://developers.google.com/machine-learning/glossary#logits)" or "[log-odds](https://developers.google.com/machine-learning/glossary#log-odds)" scores, one for each class.<jupyter_code>predictions = model(x_train[:1]).numpy()
predictions<jupyter_output><empty_output><jupyter_text>The `tf.nn.softmax` function converts these logits to "probabilities" for each class: <jupyter_code>tf.nn.softmax(predictions).numpy()<jupyter_output><empty_output><jupyter_text>Note: It is possible to bake this `tf.nn.softmax` in as the activation function for the last layer of the network. While this can make the model output more directly interpretable, this approach is discouraged as it's impossible to
provide an exact and numerically stable loss calculation for all models when using a softmax output. The `losses.SparseCategoricalCrossentropy` loss takes a vector of logits and a `True` index and returns a scalar loss for each example.<jupyter_code>loss_fn = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)<jupyter_output><empty_output><jupyter_text>This loss is equal to the negative log probability of the true class:
It is zero if the model is sure of the correct class.
This untrained model gives probabilities close to random (1/10 for each class), so the initial loss should be close to `-tf.log(1/10) ~= 2.3`.<jupyter_code>loss_fn(y_train[:1], predictions).numpy()
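# (Added sanity check, not part of the original tutorial) the expected initial loss for 10
# roughly equally likely classes is -log(1/10):
import numpy as np
print(-np.log(1 / 10))  # ~2.3026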
model.compile(optimizer='adam',
loss=loss_fn,
metrics=['accuracy'])<jupyter_output><empty_output><jupyter_text>The `Model.fit` method adjusts the model parameters to minimize the loss: <jupyter_code>model.fit(x_train, y_train, epochs=5)<jupyter_output>Epoch 1/5
<jupyter_text>The `Model.evaluate` method checks the models performance, usually on a "[Validation-set](https://developers.google.com/machine-learning/glossary#validation-set)" or "[Test-set](https://developers.google.com/machine-learning/glossary#test-set)".<jupyter_code>model.evaluate(x_test, y_test, verbose=2)<jupyter_output>313/313 - 0s - loss: 0.0825 - accuracy: 0.9753
<jupyter_text>The image classifier is now trained to ~98% accuracy on this dataset. To learn more, read the [TensorFlow tutorials](https://www.tensorflow.org/tutorials/).If you want your model to return a probability, you can wrap the trained model, and attach the softmax to it:<jupyter_code>probability_model = tf.keras.Sequential([
model,
tf.keras.layers.Softmax()
])
probability_model(x_test[:5])<jupyter_output><empty_output>
|
no_license
|
/TF-Iris-rasp3b/beginner.ipynb
|
Neelanjan-Goswami/AWS-Sagemaker_-_AWS-IoT-Analytics
| 11 |
<jupyter_start><jupyter_text>### BasicsThe basics of Python Programming* #### Printing Stuff<jupyter_code>print("Hello, World!")<jupyter_output>Hello, World!
<jupyter_text>* #### String substitution<jupyter_code>print("Hello, {0}, have a good {1}.".format("Saurav", "day"))
print("Hello, {name}, have a good {time}.".format(time="day", name="Saurav"))
name = "Saurav"
print("Hello ", name)<jupyter_output>Hello Saurav
<jupyter_text>* #### Taking Input<jupyter_code>name = input("Enter your name: ")
age = int(input("Enter your age: "))<jupyter_output>Enter your age: 23
<jupyter_text>* #### Operators<jupyter_code>ans = 1 + 2 - 3 * 4 / 5 // 6 % 7 ** 8
print(ans)
2 <= 3 and 4 > 5 or 6 != 7<jupyter_output><empty_output><jupyter_text>* #### Data TypesPython int data types can represent numbers as big as your virtual memory allows. However, representing floats may result in precision errors. Boolean values True and False are equal to numerical values 1 and 0.
Strings can be any sequence of characters between two quotes, single and double both included. They can be sliced, indexed, concatenated, or repeated.<jupyter_code>name = 'Saurav'
len(name)
name[2]
name[-1]
name + name
name * 3
name[0:3]
name[1:3]
name[2:]
name[:4]<jupyter_output><empty_output><jupyter_text>* #### Lists, Tuples, Dicts<jupyter_code>list = [1, 2, 3, 4]
len(list)
list[2]
list[1:2]
tuple = ("abc", "xyz")
len(tuple)
dict = {"name": "Saurav", "age": 24}
len(dict)
dict.keys()<jupyter_output><empty_output><jupyter_text>* #### Conditionals<jupyter_code>a = True
if a == True:
print("True")
elif a == False:
print("False")
else:
print("Unknown")<jupyter_output>True
<jupyter_text>* #### Loops<jupyter_code>a = 0
while a <5:
print(a)
a+=1
for i in range(5):
print(i)
name = "Saurav"
for i, j in enumerate(name):
print(i, j)<jupyter_output>0 S
1 a
2 u
3 r
4 a
5 v
<jupyter_text>### Functions<jupyter_code>def addNumbers(x, y):
return x + y
addNumbers(2, 2)<jupyter_output><empty_output><jupyter_text>A function can also take default arguments.<jupyter_code>def addNumbers(x=2, y=3):
return x +y
addNumbers()<jupyter_output><empty_output><jupyter_text>### Classes And Objects<jupyter_code>class Calculator(object):
def __init__(self, x, y):
self.x = x
self.y = y
def add(self):
return self.x + self.y
def subtract(self):
return self.x - self.y
c = Calculator(2, 3)
print(c.add())
print(c.subtract())<jupyter_output>5
-1
|
no_license
|
/CorePython/00_Basics.ipynb
|
uniquerockrz/learning-python
| 11 |
<jupyter_start><jupyter_text>## 1. Import and observe dataset
We all love watching movies! There are some movies we like, some we don't. Most people have a preference for movies of a similar genre. Some of us love watching action movies, while some of us like watching horror. Some of us like watching movies that have ninjas in them, while some of us like watching superheroes.
Movies within a genre often share common base parameters. Consider the following two movies:
Both movies, 2001: A Space Odyssey and Close Encounters of the Third Kind, are movies based on aliens coming to Earth. I've seen both, and they indeed share many similarities. We could conclude that both of these fall into the same genre of movies based on intuition, but that's no fun in a data science context. In this notebook, we will quantify the similarity of movies based on their plot summaries available on IMDb and Wikipedia, then separate them into groups, also known as clusters. We'll create a dendrogram to represent how closely the movies are related to each other.
Let's start by importing the dataset and observing the data provided.<jupyter_code># Import modules
import numpy as np
import pandas as pd
import nltk
# Set seed for reproducibility
np.random.seed(5)
# Read in IMDb and Wikipedia movie data (both in same file)
movies_df = pd.read_csv("datasets/movies.csv")
print("Number of movies loaded: %s " % (len(movies_df)))
# Display the data
movies_df.head()<jupyter_output>Number of movies loaded: 100
<jupyter_text>## 2. Combine Wikipedia and IMDb plot summaries
The dataset we imported currently contains two columns titled wiki_plot and imdb_plot. They are the plot found for the movies on Wikipedia and IMDb, respectively. The text in the two columns is similar, however, they are often written in different tones and thus provide context on a movie in a different manner of linguistic expression. Further, sometimes the text in one column may mention a feature of the plot that is not present in the other column. For example, consider the following plot extracts from The Godfather:
Wikipedia: "On the day of his only daughter's wedding, Vito Corleone"
IMDb: "In late summer 1945, guests are gathered for the wedding reception of Don Vito Corleone's daughter Connie"
While the Wikipedia plot only mentions it is the day of the daughter's wedding, the IMDb plot also mentions the year of the scene and the name of the daughter.
Let's combine both the columns to avoid the overheads in computation associated with extra columns to process.<jupyter_code># Combine wiki_plot and imdb_plot into a single column
movies_df["plot"] = movies_df["wiki_plot"].astype(str) + "\n" + \
movies_df["imdb_plot"].astype(str)
# Inspect the new DataFrame
movies_df.head()<jupyter_output><empty_output><jupyter_text>## 3. Tokenization
Tokenization is the process by which we break down articles into individual sentences or words, as needed. Besides the tokenization method provided by NLTK, we might have to perform additional filtration to remove tokens which are entirely numeric values or punctuation.
While a program may fail to build context from "While waiting at a bus stop in 1981" (Forrest Gump), because this string would not match in any dictionary, it is possible to build context from the words "while", "waiting" or "bus" because they are present in the English dictionary.
Let us perform tokenization on a small extract from The Godfather.<jupyter_code># Tokenize a paragraph into sentences and store in sent_tokenized
sent_tokenized = [sent for sent in nltk.sent_tokenize("""
Today (May 19, 2016) is his only daughter's wedding.
Vito Corleone is the Godfather.
""")]
# Word Tokenize first sentence from sent_tokenized, save as words_tokenized
words_tokenized = [word for word in nltk.word_tokenize(sent_tokenized[0])]
# Remove tokens that do not contain any letters from words_tokenized
import re
filtered = [word for word in words_tokenized if re.search('[a-zA-Z]', word)]
# Display filtered words to observe words after tokenization
filtered<jupyter_output><empty_output><jupyter_text>## 4. Stemming
Stemming is the process by which we bring down a word from its different forms to the root word. This helps us establish meaning to different forms of the same words without having to deal with each form separately. For example, the words 'fishing', 'fished', and 'fisher' all get stemmed to the word 'fish'.
Consider the following sentences:
"Young William Wallace witnesses the treachery of Longshanks" ~ Gladiator
"escapes to the city walls only to witness Cicero's death" ~ Braveheart
Instead of building separate dictionary entries for both witnesses and witness, which mean the same thing outside of quantity, stemming them reduces them to 'wit'.
There are different algorithms available for stemming such as the Porter Stemmer, Snowball Stemmer, etc. We shall use the Snowball Stemmer.<jupyter_code># Import the SnowballStemmer to perform stemming
from nltk.stem.snowball import SnowballStemmer
# Create an English language SnowballStemmer object
stemmer = SnowballStemmer("english")
# Print filtered to observe words without stemming
print("Without stemming: ", filtered)
# Stem the words from filtered and store in stemmed_words
stemmed_words = [stemmer.stem(t) for t in filtered]
# Print the stemmed_words to observe words after stemming
print("After stemming: ", stemmed_words)<jupyter_output>Without stemming: ['Today', 'May', 'is', 'his', 'only', 'daughter', "'s", 'wedding']
After stemming: ['today', 'may', 'is', 'his', 'onli', 'daughter', "'s", 'wed']
<jupyter_text>## 5. Club together Tokenize & Stem
We are now able to tokenize and stem sentences. But we may have to use the two functions repeatedly one after the other to handle a large amount of data, hence we can think of wrapping them in a function and passing the text to be tokenized and stemmed as the function argument. Then we can pass the new wrapping function, which shall perform both tokenizing and stemming instead of just tokenizing, as the tokenizer argument while creating the TF-IDF vector of the text.
What difference does it make though? Consider the sentence from the plot of The Godfather: "Today (May 19, 2016) is his only daughter's wedding." If we do a 'tokenize-only' for this sentence, we have the following result:
'today', 'may', 'is', 'his', 'only', 'daughter', "'s", 'wedding'
But when we do a 'tokenize-and-stem' operation we get:
'today', 'may', 'is', 'his', 'onli', 'daughter', "'s", 'wed'
All the words are in their root form, which will lead to a better establishment of meaning as some of the non-root forms may not be present in the NLTK training corpus.<jupyter_code># Define a function to perform both stemming and tokenization
def tokenize_and_stem(text):
# Tokenize by sentence, then by word
tokens = [word for sent in nltk.sent_tokenize(text) for word in nltk.word_tokenize(sent)]
# Filter out raw tokens to remove noise
filtered_tokens = [token for token in tokens if re.search('[a-zA-Z]', token)]
# Stem the filtered_tokens
stems = [stemmer.stem(t) for t in filtered_tokens]
return stems
words_stemmed = tokenize_and_stem("Today (May 19, 2016) is his only daughter's wedding.")
print(words_stemmed)<jupyter_output>['today', 'may', 'is', 'his', 'onli', 'daughter', "'s", 'wed']
<jupyter_text>## 6. Create TfidfVectorizer
Computers do not understand text. These are machines only capable of understanding numbers and performing numerical computation. Hence, we must convert our textual plot summaries to numbers for the computer to be able to extract meaning from them. One simple method of doing this would be to count all the occurrences of each word in the entire vocabulary and return the counts in a vector. Enter CountVectorizer.
Consider the word 'the'. It appears quite frequently in almost all movie plots and will have a high count in each case. But obviously, it isn't the theme of all the movies! Term Frequency-Inverse Document Frequency (TF-IDF) is one method which overcomes the shortcomings of CountVectorizer. The Term Frequency of a word is the measure of how often it appears in a document, while the Inverse Document Frequency is the parameter which reduces the importance of a word if it frequently appears in several documents.
For example, when we apply the TF-IDF on the first 3 sentences from the plot of The Wizard of Oz, we are told that the most important word there is 'Toto', the pet dog of the lead character. This is because the movie begins with 'Toto' biting someone due to which the journey of Oz begins!
In simplest terms, TF-IDF recognizes words which are unique and important to any given document. Let's create one for our purposes.<jupyter_code># Import TfidfVectorizer to create TF-IDF vectors
from sklearn.feature_extraction.text import TfidfVectorizer
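# (Added illustration, not part of the original notebook) a tiny, assumed example of the IDF
# idea with a separate default TfidfVectorizer (no stop words, no custom tokenizer): 'the'
# appears in every toy document, so it receives the lowest idf_ weight after fitting.
toy_docs = ["the cat sat", "the dog ran", "the bird flew"]
toy_vectorizer = TfidfVectorizer()
toy_vectorizer.fit(toy_docs)
for word, idx in sorted(toy_vectorizer.vocabulary_.items()):
    print(word, round(float(toy_vectorizer.idf_[idx]), 3))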
# Instantiate TfidfVectorizer object with stopwords and tokenizer
# parameters for efficient processing of text
tfidf_vectorizer = TfidfVectorizer(max_df=0.8, max_features=200000,
min_df=0.2, stop_words='english',
use_idf=True, tokenizer=tokenize_and_stem,
ngram_range=(1,3))<jupyter_output><empty_output><jupyter_text>## 7. Fit transform TfidfVectorizer
Once we create a TF-IDF Vectorizer, we must fit the text to it and then transform the text to produce the corresponding numeric form of the data which the computer will be able to understand and derive meaning from. To do this, we use the fit_transform() method of the TfidfVectorizer object.
If we observe the TfidfVectorizer object we created, we come across a parameter stopwords. 'stopwords' are those words in a given text which do not contribute considerably towards the meaning of the sentence and are generally grammatical filler words. For example, in the sentence 'Dorothy Gale lives with her dog Toto on the farm of her Aunt Em and Uncle Henry', we could drop the words 'her' and 'the', and still have a similar overall meaning to the sentence. Thus, 'her' and 'the' are stopwords and can be conveniently dropped from the sentence.
On setting the stopwords to 'english', we direct the vectorizer to drop all stopwords from a pre-defined list of English language stopwords present in the nltk module. Another parameter, ngram_range, defines the length of the ngrams to be formed while vectorizing the text.<jupyter_code># Fit and transform the tfidf_vectorizer with the "plot" of each movie
# to create a vector representation of the plot summaries
tfidf_matrix = tfidf_vectorizer.fit_transform([x for x in movies_df["plot"]])
print(tfidf_matrix.shape)<jupyter_output>(100, 564)
<jupyter_text>## 8. Import KMeans and create clusters
To determine how closely one movie is related to the other by the help of unsupervised learning, we can use clustering techniques. Clustering is the method of grouping together a number of items such that they exhibit similar properties. According to the measure of similarity desired, a given sample of items can have one or more clusters.
A good basis of clustering in our dataset could be the genre of the movies. Say we could have a cluster '0' which holds movies of the 'Drama' genre. We would expect movies like Chinatown or Psycho to belong to this cluster. Similarly, the cluster '1' in this project holds movies which belong to the 'Adventure' genre (Lawrence of Arabia and the Raiders of the Lost Ark, for example).
K-means is an algorithm which helps us to implement clustering in Python. The name derives from its method of implementation: the given sample is divided into K clusters where each cluster is denoted by the mean of all the items lying in that cluster.
We get the following distribution for the clusters:
<jupyter_code># Import k-means to perform clustering
from sklearn.cluster import KMeans
# Create a KMeans object with 5 clusters and save as km
km = KMeans(n_clusters=5)
# Fit the k-means object with tfidf_matrix
km.fit(tfidf_matrix)
clusters = km.labels_.tolist()
# Create a column cluster to denote the generated cluster for each movie
movies_df["cluster"] = clusters
# Display number of films per cluster (clusters from 0 to 4)
movies_df['cluster'].value_counts() <jupyter_output><empty_output><jupyter_text>## 9. Calculate similarity distance
Consider the following two sentences from the movie The Wizard of Oz:
"they find in the Emerald City"
"they finally reach the Emerald City"
If we put the above sentences in a CountVectorizer, the vocabulary produced would be "they, find, in, the, Emerald, City, finally, reach" and the vectors for each sentence would be as follows:
1, 1, 1, 1, 1, 1, 0, 0
1, 0, 0, 1, 1, 1, 1, 1
When we calculate the cosine of the angle formed between the vectors above, we get a score of 0.667. This means the above sentences are very closely related. Similarity distance is 1 - cosine similarity. This follows because if the vectors are similar, the cosine of their angle would be 1 and hence the distance between them would be 1 - 1 = 0.
Let's calculate the similarity distance for all of our movies.<jupyter_code># Import cosine_similarity to calculate similarity of movie plots
from sklearn.metrics.pairwise import cosine_similarity
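# (Added sanity check, not part of the original notebook) the two "Emerald City" count vectors
# from the explanation above should give a cosine similarity of about 0.667, i.e. a similarity
# distance of about 0.333.
import numpy as np
v1 = np.array([[1, 1, 1, 1, 1, 1, 0, 0]])
v2 = np.array([[1, 0, 0, 1, 1, 1, 1, 1]])
print(cosine_similarity(v1, v2))      # ~0.667
print(1 - cosine_similarity(v1, v2))  # ~0.333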
# Calculate the similarity distance
similarity_distance = 1 - cosine_similarity(tfidf_matrix)<jupyter_output><empty_output><jupyter_text>## 10. Import Matplotlib, Linkage, and Dendrograms
We shall now create a tree-like diagram (called a dendrogram) of the movie titles to help us understand the level of similarity between them visually. Dendrograms help visualize the results of hierarchical clustering, which is an alternative to k-means clustering. Two pairs of movies at the same level of hierarchical clustering are expected to have similar strength of similarity between the corresponding pairs of movies. For example, the movie Fargo would be as similar to North By Northwest as the movie Platoon is to Saving Private Ryan, given both the pairs exhibit the same level of the hierarchy.
Let's import the modules we'll need to create our dendrogram.<jupyter_code># Import matplotlib.pyplot for plotting graphs
import matplotlib.pyplot as plt
# Configure matplotlib to display the output inline
%matplotlib inline
# Import modules necessary to plot dendrogram
from scipy.cluster.hierarchy import linkage, dendrogram<jupyter_output><empty_output><jupyter_text>## 11. Create merging and plot dendrogram
We shall plot a dendrogram of the movies whose similarity measure will be given by the similarity distance we previously calculated. The lower the similarity distance between any two movies, the lower their linkage will make an intercept on the y-axis. For instance, the lowest dendrogram linkage we shall discover will be between the movies, It's a Wonderful Life and A Place in the Sun. This indicates that the movies are very similar to each other in their plots.<jupyter_code># Create mergings matrix
mergings = linkage(similarity_distance, method='complete')
# Plot the dendrogram, using title as label column
dendrogram_ = dendrogram(mergings,
labels=[x for x in movies_df["title"]],
leaf_rotation=90,
leaf_font_size=16,
)
# Adjust the plot
fig = plt.gcf()
_ = [lbl.set_color('r') for lbl in plt.gca().get_xmajorticklabels()]
fig.set_size_inches(108, 21)
# Show the plotted dendrogram
plt.show()<jupyter_output><empty_output><jupyter_text>## 12. Which movies are most similar?
We can now determine the similarity between movies based on their plots! To wrap up, let's answer one final question: which movie is most similar to the movie Braveheart?<jupyter_code># Answer the question
ans = "Gladiator"
print(ans)<jupyter_output>Gladiator
|
no_license
|
/Projects/Find Movie Similarity from Plot Summaries/Movie_Prediction.ipynb
|
Gttz/DataCamp_Courses
| 12 |
<jupyter_start><jupyter_text>#1. Preprocess your data so that you can feed it into ANN models.
#2. Split your data into training and test sets.<jupyter_code>input_dim = 784 # 28*28
output_dim = nb_classes = 10
batch_size = 128
nb_epoch = 20
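# (Added for completeness, not part of the original submission) the data-loading cell is not
# shown above; given the 784 = 28*28 inputs, 10 classes, and the later comparison with the
# handwritten-digit set, the data is presumably Fashion-MNIST, e.g.:
from tensorflow.keras.datasets import fashion_mnist
(X_train, y_train), (X_test, y_test) = fashion_mnist.load_data()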
X_train = X_train.reshape(60000, input_dim)
X_test = X_test.reshape(10000, input_dim)
X_train = X_train.astype('float32')
X_test = X_test.astype('float32')
X_train /= 255
X_test /= 255
from tensorflow.keras.utils import to_categorical
Y_train = to_categorical(y_train, nb_classes)
Y_test = to_categorical(y_test, nb_classes)
import matplotlib.pyplot as plt
plt.figure(figsize=(20,10))
for i in range(1,21):
plt.subplot(4,5,i)
plt.imshow(X_train[120+i].reshape(28,28), cmap="gray")
plt.title("Label of the image: {}".format(y_train[120 + i]))
plt.tight_layout()
plt.show()<jupyter_output><empty_output><jupyter_text>#3. Try different ANN models and train them on your training set.##3.1. Number of layers.<jupyter_code>from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense<jupyter_output><empty_output><jupyter_text>###1 Hidden Layer<jupyter_code># 1 Hidden Layer
model_1 = Sequential()
# our first dense layer
model_1.add(Dense(128, input_shape=(784,), activation="relu"))
# last layer is the output layer.
model_1.add(Dense(10, activation="softmax"))
# compile model
model_1.compile(optimizer='sgd', loss='categorical_crossentropy',
metrics=['accuracy'])
# setting verbose=1 prints out some results after each epoch
model_1.fit(X_train, Y_train, batch_size=batch_size, epochs=20, verbose=1)
score_1 = model_1.evaluate(X_test, Y_test, verbose=0)
print('Test score:', score_1[0])
print('Test accuracy:', score_1[1])<jupyter_output>Test score: 0.45044970512390137
Test accuracy: 0.8402000069618225
<jupyter_text>###2 Hidden Layers<jupyter_code># 2 Hidden Layers
model_2 = Sequential()
# our first dense layer
model_2.add(Dense(128, input_shape=(784,), activation="relu"))
# our second dense layer
model_2.add(Dense(128, activation="relu"))
# last layer is the output layer.
model_2.add(Dense(10, activation="softmax"))
# compile model
model_2.compile(optimizer='sgd', loss='categorical_crossentropy',
metrics=['accuracy'])
# setting verbose=1 prints out some results after each epoch
model_2.fit(X_train, Y_train, batch_size=batch_size, epochs=20, verbose=1)
score_2 = model_2.evaluate(X_test, Y_test, verbose=0)
print('Test score:', score_2[0])
print('Test accuracy:', score_2[1])<jupyter_output>Test score: 0.4185384213924408
Test accuracy: 0.8507999777793884
<jupyter_text>###3 Hidden Layers<jupyter_code># 3 Hidden Layers
model_3 = Sequential()
# our first dense layer
model_3.add(Dense(128, input_shape=(784,), activation="relu"))
# our second dense layer
model_3.add(Dense(128, activation="relu"))
# our third dense layer
model_3.add(Dense(128, activation="relu"))
# last layer is the output layer.
model_3.add(Dense(10, activation="softmax"))
# compile model
model_3.compile(optimizer='sgd', loss='categorical_crossentropy',
metrics=['accuracy'])
# setting verbose=1 prints out some results after each epoch
model_3.fit(X_train, Y_train, batch_size=batch_size, epochs=20, verbose=1)
score_3 = model_3.evaluate(X_test, Y_test, verbose=0)
print('Test score:', score_3[0])
print('Test accuracy:', score_3[1])<jupyter_output>Test score: 0.4012981653213501
Test accuracy: 0.8551999926567078
<jupyter_text>###4 Hidden Layers<jupyter_code># 4 Hidden Layers
model_4 = Sequential()
# our first dense layer
model_4.add(Dense(128, input_shape=(784,), activation="relu"))
# our second dense layer
model_4.add(Dense(128, activation="relu"))
# our third dense layer
model_4.add(Dense(128, activation="relu"))
# our fourth dense layer
model_4.add(Dense(128, activation="relu"))
# last layer is the output layer.
model_4.add(Dense(10, activation="softmax"))
# compile model
model_4.compile(optimizer='sgd', loss='categorical_crossentropy',
metrics=['accuracy'])
# setting verbose=1 prints out some results after each epoch
model_4.fit(X_train, Y_train, batch_size=batch_size, epochs=20, verbose=1)
score_4 = model_4.evaluate(X_test, Y_test, verbose=0)
print('Test score:', score_4[0])
print('Test accuracy:', score_4[1])<jupyter_output>Test score: 0.39575937390327454
Test accuracy: 0.8571000099182129
<jupyter_text>###5. Hidden Layers<jupyter_code># 5 Hidden Layers
model_5 = Sequential()
# our first dense layer
model_5.add(Dense(128, input_shape=(784,), activation="relu"))
# our second dense layer
model_5.add(Dense(128, activation="relu"))
# our third dense layer
model_5.add(Dense(128, activation="relu"))
# our fourth dense layer
model_5.add(Dense(128, activation="relu"))
# our fifth dense layer
model_5.add(Dense(128, activation="relu"))
# last layer is the output layer.
model_5.add(Dense(10, activation="softmax"))
# compile model
model_5.compile(optimizer='sgd', loss='categorical_crossentropy',
metrics=['accuracy'])
# setting verbose=1 prints out some results after each epoch
model_5.fit(X_train, Y_train, batch_size=batch_size, epochs=20, verbose=1)
score_5 = model_5.evaluate(X_test, Y_test, verbose=0)
print('Test score:', score_5[0])
print('Test accuracy:', score_5[1])
layers = [1, 2, 3, 4, 5]
scores = [score_1[1], score_2[1], score_3[1], score_4[1], score_5[1]]
plt.figure(figsize=(10,6))
plt.plot(layers,scores)
plt.title("Model Score vs Hidden Layers")
plt.xlabel("Hidden Layers")
plt.xticks(layers)
plt.ylabel("Test Accuracy")
plt.grid(alpha=0.5)
plt.show()<jupyter_output><empty_output><jupyter_text>###We move forward using 4 hidden layers.##3.2 Activation Functions<jupyter_code># ReLU activation
model_relu = Sequential()
# our first dense layer
model_relu.add(Dense(128, input_shape=(784,), activation="relu"))
# our second dense layer
model_relu.add(Dense(128, activation="relu"))
# our third dense layer
model_relu.add(Dense(128, activation="relu"))
# our fourth dense layer
model_relu.add(Dense(128, activation="relu"))
# last layer is the output layer.
model_relu.add(Dense(10, activation="softmax"))
# compile model
model_relu.compile(optimizer='sgd', loss='categorical_crossentropy',
metrics=['accuracy'])
# setting verbose=1 prints out some results after each epoch
model_relu.fit(X_train, Y_train, batch_size=batch_size, epochs=20, verbose=1)
score_relu = model_relu.evaluate(X_test, Y_test, verbose=0)
print('Test score:', score_relu[0])
print('Test accuracy:', score_relu[1])
model_sigm = Sequential()
# our first dense layer
model_sigm.add(Dense(128, input_shape=(784,), activation="sigmoid"))
# our second dense layer
model_sigm.add(Dense(128, activation="sigmoid"))
# our third dense layer
model_sigm.add(Dense(128, activation="sigmoid"))
# our fourth dense layer
model_sigm.add(Dense(128, activation="sigmoid"))
# last layer is the output layer.
model_sigm.add(Dense(10, activation="softmax"))
# compile model
model_sigm.compile(optimizer='sgd', loss='categorical_crossentropy',
metrics=['accuracy'])
# setting verbose=1 prints out some results after each epoch
model_sigm.fit(X_train, Y_train, batch_size=batch_size, epochs=20, verbose=1)
score_sigm = model_sigm.evaluate(X_test, Y_test, verbose=0)
print('Test score:', score_sigm[0])
print('Test accuracy:', score_sigm[1])
model_tanh = Sequential()
# our first dense layer
model_tanh.add(Dense(128, input_shape=(784,), activation="tanh"))
# our second dense layer
model_tanh.add(Dense(128, activation="tanh"))
# our third dense layer
model_tanh.add(Dense(128, activation="tanh"))
# our fourth dense layer
model_tanh.add(Dense(128, activation="tanh"))
# last layer is the output layer.
model_tanh.add(Dense(10, activation="softmax"))
# compile model
model_tanh.compile(optimizer='sgd', loss='categorical_crossentropy',
metrics=['accuracy'])
# setting verbose=1 prints out some results after each epoch
model_tanh.fit(X_train, Y_train, batch_size=batch_size, epochs=20, verbose=1)
score_tanh = model_tanh.evaluate(X_test, Y_test, verbose=0)
print('Test score:', score_tanh[0])
print('Test accuracy:', score_tanh[1])
model_swish = Sequential()
# our first dense layer
model_swish.add(Dense(128, input_shape=(784,), activation="swish"))
# our second dense layer
model_swish.add(Dense(128, activation="swish"))
# our third dense layer
model_swish.add(Dense(128, activation="swish"))
# our fourth dense layer
model_swish.add(Dense(128, activation="swish"))
# last layer is the output layer.
model_swish.add(Dense(10, activation="softmax"))
# compile model
model_swish.compile(optimizer='sgd', loss='categorical_crossentropy',
metrics=['accuracy'])
# setting verbose=1 prints out some results after each epoch
model_swish.fit(X_train, Y_train, batch_size=batch_size, epochs=20, verbose=1)
score_swish = model_swish.evaluate(X_test, Y_test, verbose=0)
print('Test score:', score_swish[0])
print('Test accuracy:', score_swish[1])
activation = ['ReLU', 'Sigmoid', 'tanh', 'Swish']
scores = [score_relu[1], score_sigm[1], score_tanh[1], score_swish[1]]
plt.figure(figsize=(10,6))
plt.bar(activation,scores)
plt.title("Model Score vs Activation Function")
plt.xlabel("Activation Function")
plt.ylabel("Test Accuracy")
plt.grid(alpha=0.5, axis= 'y')
plt.show()<jupyter_output><empty_output><jupyter_text>###ReLU and Hyperbolic Tangent perform very similarly. We elect to use ReLU since it's non-saturating.##3.3 Number of Neurons in the Layers<jupyter_code># 256 in the first layer with a 1/2 reduction after
model_256r = Sequential()
# our first dense layer
model_256r.add(Dense(256, input_shape=(784,), activation="relu"))
# our second dense layer
model_256r.add(Dense(128, activation="relu"))
# our third dense layer
model_256r.add(Dense(64, activation="relu"))
# our fourth dense layer
model_256r.add(Dense(32, activation="relu"))
# last layer is the output layer.
model_256r.add(Dense(10, activation="softmax"))
# compile model
model_256r.compile(optimizer='sgd', loss='categorical_crossentropy',
metrics=['accuracy'])
# setting verbose=1 prints out some results after each epoch
model_256r.fit(X_train, Y_train, batch_size=batch_size, epochs=20, verbose=1)
score_256r = model_256r.evaluate(X_test, Y_test, verbose=0)
print('Test score:', score_256r[0])
print('Test accuracy:', score_256r[1])
# 256 in each layer
model_256c = Sequential()
# our first dense layer
model_256c.add(Dense(256, input_shape=(784,), activation="relu"))
# our second dense layer
model_256c.add(Dense(256, activation="relu"))
# our third dense layer
model_256c.add(Dense(256, activation="relu"))
# our fourth dense layer
model_256c.add(Dense(256, activation="relu"))
# last layer is the output layer.
model_256c.add(Dense(10, activation="softmax"))
# compile model
model_256c.compile(optimizer='sgd', loss='categorical_crossentropy',
metrics=['accuracy'])
# setting verbose=1 prints out some results after each epoch
model_256c.fit(X_train, Y_train, batch_size=batch_size, epochs=20, verbose=1)
score_256c = model_256c.evaluate(X_test, Y_test, verbose=0)
print('Test score:', score_256c[0])
print('Test accuracy:', score_256c[1])
# 512 in the first layer with a 1/2 reduction after
model_512r = Sequential()
# our first dense layer
model_512r.add(Dense(512, input_shape=(784,), activation="relu"))
# our second dense layer
model_512r.add(Dense(256, activation="relu"))
# our third dense layer
model_512r.add(Dense(128, activation="relu"))
# our fourth dense layer
model_512r.add(Dense(64, activation="relu"))
# last layer is the output layer.
model_512r.add(Dense(10, activation="softmax"))
# compile model
model_512r.compile(optimizer='sgd', loss='categorical_crossentropy',
metrics=['accuracy'])
# setting verbose=1 prints out some results after each epoch
model_512r.fit(X_train, Y_train, batch_size=batch_size, epochs=20, verbose=1)
score_512r = model_512r.evaluate(X_test, Y_test, verbose=0)
print('Test score:', score_512r[0])
print('Test accuracy:', score_512r[1])
# 512 in each layer
model_512c = Sequential()
# our first dense layer
model_512c.add(Dense(512, input_shape=(784,), activation="relu"))
# our second dense layer
model_512c.add(Dense(512, activation="relu"))
# our third dense layer
model_512c.add(Dense(512, activation="relu"))
# our fourth dense layer
model_512c.add(Dense(512, activation="relu"))
# last layer is the output layer.
model_512c.add(Dense(10, activation="softmax"))
# compile model
model_512c.compile(optimizer='sgd', loss='categorical_crossentropy',
metrics=['accuracy'])
# setting verbose=1 prints out some results after each epoch
model_512c.fit(X_train, Y_train, batch_size=batch_size, epochs=20, verbose=1)
score_512c = model_512c.evaluate(X_test, Y_test, verbose=0)
print('Test score:', score_512c[0])
print('Test accuracy:', score_512c[1])
# 256, 128, 128, 64
model_256t = Sequential()
# our first dense layer
model_256t.add(Dense(256, input_shape=(784,), activation="relu"))
# our second dense layer
model_256t.add(Dense(128, activation="relu"))
# our third dense layer
model_256t.add(Dense(128, activation="relu"))
# our fourth dense layer
model_256t.add(Dense(64, activation="relu"))
# last layer is the output layer.
model_256t.add(Dense(10, activation="softmax"))
# compile model
model_256t.compile(optimizer='sgd', loss='categorical_crossentropy',
metrics=['accuracy'])
# setting verbose=1 prints out some results after each epoch
model_256t.fit(X_train, Y_train, batch_size=batch_size, epochs=20, verbose=1)
score_256t = model_256t.evaluate(X_test, Y_test, verbose=0)
print('Test score:', score_256t[0])
print('Test accuracy:', score_256t[1])<jupyter_output>Test score: 0.37574777007102966
Test accuracy: 0.8639000058174133
<jupyter_text>###The best combination of training speed and test accuracy came with hidden layers of 256, 128, 64, and 32 neurons.##3.4 Batch Size<jupyter_code># batch size of 1024
model_1024 = Sequential()
# our first dense layer
model_1024.add(Dense(256, input_shape=(784,), activation="relu"))
# our second dense layer
model_1024.add(Dense(128, activation="relu"))
# our third dense layer
model_1024.add(Dense(64, activation="relu"))
# our fourth dense layer
model_1024.add(Dense(32, activation="relu"))
# last layer is the output layer.
model_1024.add(Dense(10, activation="softmax"))
# compile model
model_1024.compile(optimizer='sgd', loss='categorical_crossentropy',
metrics=['accuracy'])
# setting verbose=1 prints out some results after each epoch
model_1024.fit(X_train, Y_train, batch_size=1024, epochs=20, verbose=1)
score_1024 = model_1024.evaluate(X_test, Y_test, verbose=0)
print('Test score:', score_1024[0])
print('Test accuracy:', score_1024[1])
# batch size of 256
model_256 = Sequential()
# our first dense layer
model_256.add(Dense(256, input_shape=(784,), activation="relu"))
# our second dense layer
model_256.add(Dense(128, activation="relu"))
# our third dense layer
model_256.add(Dense(64, activation="relu"))
# our fourth dense layer
model_256.add(Dense(32, activation="relu"))
# last layer is the output layer.
model_256.add(Dense(10, activation="softmax"))
# compile model
model_256.compile(optimizer='sgd', loss='categorical_crossentropy',
metrics=['accuracy'])
# setting verbose=1 prints out some results after each epoch
model_256.fit(X_train, Y_train, batch_size=256, epochs=20, verbose=1)
score_256 = model_256.evaluate(X_test, Y_test, verbose=0)
print('Test score:', score_256[0])
print('Test accuracy:', score_256[1])
# batch size of 64
model_64 = Sequential()
# our first dense layer
model_64.add(Dense(256, input_shape=(784,), activation="relu"))
# our second dense layer
model_64.add(Dense(128, activation="relu"))
# our third dense layer
model_64.add(Dense(64, activation="relu"))
# our fourth dense layer
model_64.add(Dense(32, activation="relu"))
# last layer is the output layer.
model_64.add(Dense(10, activation="softmax"))
# compile model
model_64.compile(optimizer='sgd', loss='categorical_crossentropy',
metrics=['accuracy'])
# setting verbose=1 prints out some results after each epoch
model_64.fit(X_train, Y_train, batch_size=64, epochs=20, verbose=1)
score_64 = model_64.evaluate(X_test, Y_test, verbose=0)
print('Test score:', score_64[0])
print('Test accuracy:', score_64[1])
# batch size of 16
model_16 = Sequential()
# our first dense layer
model_16.add(Dense(256, input_shape=(784,), activation="relu"))
# our second dense layer
model_16.add(Dense(128, activation="relu"))
# our third dense layer
model_16.add(Dense(64, activation="relu"))
# our fourth dense layer
model_16.add(Dense(32, activation="relu"))
# last layer is the output layer.
model_16.add(Dense(10, activation="softmax"))
# compile model
model_16.compile(optimizer='sgd', loss='categorical_crossentropy',
metrics=['accuracy'])
# setting verbose=1 prints out some results after each epoch
model_16.fit(X_train, Y_train, batch_size=16, epochs=20, verbose=1)
score_16 = model_16.evaluate(X_test, Y_test, verbose=0)
print('Test score:', score_16[0])
print('Test accuracy:', score_16[1])<jupyter_output>Test score: 0.335713654756546
Test accuracy: 0.8852999806404114
<jupyter_text>###Batch size of 16 takes a very long time to train, and the difference between training and test accuracy is almost 4%, indicating some significant over-fitting. We settle on batch size of 64.#General Discussion
This dataset proved to be much more difficult to achieve a high accuracy on than the handwritten digits set. While tuning the number of layers, the neurons in each layer, and the batch size, we ran into the same type of trade-off. Increasing the complexity of the model and its evaluation led to much longer training times, as there were more parameters/equations to solve. The increased complexity often yielded higher training accuracies, but there was a point where the corresponding increase in test accuracy was negligible, and we saw overfitting in action.
In the end, we settled on a model with the following characteristics:
4 hidden layers: 256, 128, 64, and 32 neurons, respectively.
ReLU Activation Function
Batch size of 64
Training Accuracy: 90.06%
Test Accuracy: 87.29%<jupyter_code><jupyter_output><empty_output>
|
no_license
|
/deep-learning-challenge/deep_learning_challenge.ipynb
|
ADEnnaco/thinkful-coursework
| 11 |
<jupyter_start><jupyter_text># New version<jupyter_code>data = []
label2 = []
label3 = []
for i in range(1,25):
a = np.load('/content/drive/My Drive/Newdata/ordered_by_patient/150_6_116_116_' + str(i) +'_flipped.npy')
c = np.load('/content/drive/My Drive/Newdata/ordered_by_patient/two_labels/' + str(i) + '.npy') # two labels
d = np.load('/content/drive/My Drive/Newdata/ordered_by_patient/percent_output_' + str(i) + '.npy') # three labels
data.append(a)
label2.append(c)
label3.append(d)
Data, Label2, Label3 = shuffle(data, label2, label3, random_state = 20)
P = np.array(Data)
P = np.reshape(P,(3600,6,116,116,1))
L2 = np.array(Label2)
L2 = np.reshape(L2,(3600,2))
L3 = np.array(Label3)
L3 = np.reshape(L3,(3600,3))
print(P.shape)
print(L2.shape)
print(L3.shape)
# input data
train_data = P[:2700]
test_data = P[2700:]
# shuffle all samples, not grouped by patient
Data, Label2, Label3 = shuffle(Data, Label2, Label3, random_state = 20)
# 2 label
train_label = L2[:2700]
test_label = L2[2700:]
print(train_data.shape)
print(train_label.shape)
# 3 label
train_label = L3[:2700]
test_label = L3[2700:]
print(train_data.shape)
print(train_label.shape)
train_data, train_label = shuffle(train_data, train_label, random_state = 20)
# count = 0
# for i in range(24):
# for j in range(150):
# # if Label3[i,j,0] + Label3[i,j,1] + Label3[i,j,2] == 1:
# if L3[i,j,0] + L3[i,j,1] + L3[i,j,2] == 1:
# count+=1
# print(count)
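# sanity check: count how many of the 3600 three-way labels sum exactly to 1,
# i.e. form a complete percentage split across the three visual conditions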
count = 0
for i in range(3600):
if L3[i,0] + L3[i,1] + L3[i,2] == 1:
count+=1
print(count)
# print(L3)
# # data1 = np.load('/content/drive/My Drive/Newdata/original/allpre.npy')
# # data2 = np.load('/content/drive/My Drive/Newdata/original/allpost.npy')
# # data3 = np.load('/content/drive/My Drive/Newdata/original/allfu.npy')
# # data = np.concatenate((data1,data2,data3), axis = 0)
# data0 = np.load('/content/drive/My Drive/Newdata/original/allcontrol.npy')
# data1 = np.load('/content/drive/My Drive/Newdata/changed/allpre.npy')
# data2 = np.load('/content/drive/My Drive/Newdata/changed/allpost.npy')
# data3 = np.load('/content/drive/My Drive/Newdata/changed/allfu.npy')
# data = np.concatenate((data0,data1,data2,data3), axis = 0)
# label = np.load('/content/drive/My Drive/Newdata/original/4800.npy')
# data, label = shuffle(data,label,random_state=20)
# # data, label = shuffle(data,label[1200:],random_state=20)
# # np.save("/content/drive/My Drive/Newdata/all4800",Data)
# np.save("/content/drive/My Drive/Newdata/shuffle_changedin4800",data)
# np.save("/content/drive/My Drive/Newdata/shuffle_changedout4800",label)
def Model():
model = Sequential()
initializer = tf.keras.initializers.HeNormal()
model.add(Conv3D(32, kernel_size=(1, 3, 3), activation='relu', kernel_initializer=initializer, input_shape=(6,116,116, 1)))
model.add(MaxPooling3D(pool_size=(1, 2, 2)))
model.add(Dropout(0.3))
# model.add(Conv3D(64, kernel_size=(2, 3, 3), activation='relu', kernel_initializer=initializer))
# model.add(Dropout(0.3))
model.add(Conv3D(64, kernel_size=(1, 3, 3), activation='relu', kernel_initializer=initializer))
model.add(MaxPooling3D(pool_size=(1, 2, 2)))
model.add(Dropout(0.3))
model.add(Conv3D(128, kernel_size=(1, 3, 3), activation='relu', kernel_initializer=initializer))
model.add(Dropout(0.3))
# model.add(Conv3D(128, kernel_size=(2, 3, 3), activation='relu', kernel_initializer=initializer))
# model.add(Dropout(0.3))
model.add(Conv3D(128, kernel_size=(1, 3, 3), activation='relu', kernel_initializer=initializer))
model.add(MaxPooling3D(pool_size=(1, 2, 2)))
model.add(Dropout(0.3))
# model.add(Conv3D(256, kernel_size=(1, 3, 3), activation='relu', kernel_initializer=initializer))
# model.add(Dropout(0.3))
# model.add(Conv3D(256, kernel_size=(1, 3, 3), activation='relu', kernel_initializer=initializer))
# model.add(Dropout(0.3))
model.add(Conv3D(256, kernel_size=(1, 3, 3), activation='relu', kernel_initializer=initializer))
model.add(MaxPooling3D(pool_size=(1, 2, 2)))
model.add(Dropout(0.3))
# model.add(Conv3D(256, kernel_size=(1, 3, 3), activation='relu', kernel_initializer=initializer))
# model.add(MaxPooling3D(pool_size=(1, 2, 2)))
# model.add(Conv3D(256, kernel_size=(1, 3, 3), activation='relu', kernel_initializer=initializer))
# model.add(MaxPooling3D(pool_size=(1, 2, 2)))
# model.add(Dropout(0.3))
# model.add(Conv3D(512, kernel_size=(1, 3, 3), activation='relu', kernel_initializer=initializer))
# model.add(MaxPooling3D(pool_size=(1, 2, 2)))
# model.add(Dropout(0.3))
# model.add(Conv3D(1024, kernel_size=(1, 3, 3), activation='relu', kernel_initializer=initializer))
# model.add(MaxPooling3D(pool_size=(1, 2, 2)))
# model.add(Dropout(0.3))
# model.add(Conv3D(256, kernel_size=(1, 3, 3), activation='relu', kernel_initializer=initializer))
# model.add(Conv3D(32, kernel_size=(1, 3, 3), activation='relu', kernel_initializer=initializer))
# model.add(Conv3D(32, kernel_size=(1, 3, 3), activation='relu', kernel_initializer=initializer))
# model.add(Conv3D(32, kernel_size=(1, 3, 3), activation='relu', kernel_initializer=initializer))
# model.add(MaxPooling3D(pool_size=(2, 2, 2)))
# model.add(BatchNormalization())
# model.add(Dropout(0.3))
# model.add(Conv3D(64, kernel_size=(1, 3, 3), activation='relu', kernel_initializer=initializer))
# model.add(MaxPooling3D(pool_size=(2, 2, 2)))
# model.add(BatchNormalization())
# model.add(Dropout(0.5))
# model.add(Conv3D(128, kernel_size=(1, 3, 3), activation='relu', kernel_initializer=initializer))
# model.add(MaxPooling3D(pool_size=(2, 2, 2)))
# model.add(Dropout(0.5))
# model.add(BatchNormalization())
# model.add(Conv3D(256, kernel_size=(1, 3, 3), activation='relu', kernel_initializer=initializer))
# model.add(MaxPooling3D(pool_size=(2, 2, 2)))
model.add(BatchNormalization())
# model.add(Dropout(0.5))
model.add(Flatten())
model.add(Dense(1024, activation='relu', kernel_initializer=initializer))
model.add(Dense(3))
model.summary()
from keras.optimizers import RMSprop,Adam,SGD
model.compile(optimizer = SGD(learning_rate=0.0001, momentum=0.9),
loss = 'mean_squared_error',
metrics = ['mae'])
return model
model = Model()
# model = Model()
num_epochs = 100
batchsize = 32
history = model.fit(train_data, train_label, epochs=num_epochs, batch_size=batchsize, verbose=1)
mse = history.history['loss']
mae = history.history['mae']
# mse1 = history1.history['loss']
# mae1 = history1.history['mae']
# mse2 = history2.history['loss']
# mae2 = history2.history['mae']
# mse3 = history3.history['loss']
# mae3 = history3.history['mae']
# mse4 = history4.history['loss']
# mae4 = history4.history['mae']
# mse5 = history5.history['loss']
# mae5 = history5.history['mae']
# mse6 = history6.history['loss']
# mae6 = history6.history['mean_absolute_error']
# mse7 = history7.history['loss']
# mae7 = history7.history['mean_absolute_error']
# mse8 = history8.history['loss']
# mae8 = history8.history['mean_absolute_error']
# model = ZF_Net()
# model1 = VGG16()
# model2 = ResNet50()
# model3 = Xception()
# model4 = DenseNet121()
# model5 = MobileNet()
# model6 = get_model9()
epochs = range(1,len(mse) +1)
plt.plot(epochs, mse, 'b', label='Proposed Net')
plt.plot(epochs, mae, 'b--', label='mae')
# plt.plot(epochs, mse1, 'r', label='VGG16')
# plt.plot(epochs, mae1, 'r--', label='mae1')
# plt.plot(epochs, mse2, 'g', label='ResNet50')
# plt.plot(epochs, mae2, 'g--', label='mae2')
# plt.plot(epochs, mse3, 'c', label='Xception')
# plt.plot(epochs, mae3, 'c--', label='mae3')
# plt.plot(epochs, mse4, 'm', label='DenseNet121')
# plt.plot(epochs, mae4, 'm--', label='mae4')
# plt.plot(epochs, mse5, 'y', label='MobileNet')
# plt.plot(epochs, mae5, 'y--', label='mae5')
# plt.plot(epochs, mse6, 'k', label='model9')
# plt.plot(epochs, mae6, 'k--', label='mae6')
# plt.plot(epochs, mse7, '#9467bd', label='mse7')
# plt.plot(epochs, mae7, '#9467bd',linestyle='dashed', label='mae7')
# plt.plot(epochs, mse8, '#e377c2', label='mse8')
# plt.plot(epochs, mae8, '#e377c2',linestyle='dashed', label='mae8')
plt.legend(bbox_to_anchor=(0.8, 1), loc='upper left', borderaxespad=0.)
# plt.title('loss')
plt.xlabel('Epochs')
plt.ylabel('Loss & MAE')
# plt.show()
# plt.savefig('/content/drive/My Drive/images/pretrained_models_3_wo.png' ,dpi=1200)
newx = test_data
m = model.predict(newx) * 100
# m1 = model1.predict(newx) * 100
# m2 = model2.predict(newx) * 100
# m3 = model3.predict(newx) * 100
# m4 = model4.predict(newx) * 100
# m5 = model5.predict(newx) * 100
print(m.shape)
print(test_label[0])
print(m[0])
c1 = test_label[:,0] * 100 # ground-truth c1 values of the test set, to be sorted in ascending order
c2 = test_label[:,1] * 100 # ground-truth c2 values of the test set, to be sorted in ascending order
c3 = test_label[:,2] * 100 # ground-truth c3 values of the test set, to be sorted in ascending order
# print(y_data_c1)
index1 = np.argsort(c1) # indices that sort c1 from smallest to largest
index2 = np.argsort(c2) # indices that sort c2 from smallest to largest
index3 = np.argsort(c3) # indices that sort c3 from smallest to largest
c1.sort()
c2.sort()
c3.sort()
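# pair each prediction with a sort index taken from the ground truth and re-sort the pairs by that index,
# so the predictions can be plotted alongside the sorted true values below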
# print(index1)
yc1 = []
yc2 = []
yc3 = []
# y1c1 = []
# y1c2 = []
# y1c3 = []
# y2c1 = []
# y2c2 = []
# y2c3 = []
# y3c1 = []
# y3c2 = []
# y3c3 = []
# y4c1 = []
# y4c2 = []
# y4c3 = []
# y5c1 = []
# y5c2 = []
# y5c3 = []
for i in range(900):
yc1.append([m[i,0],index1[i]])
yc2.append([m[i,1],index2[i]])
yc3.append([m[i,2],index3[i]])
# y1c1.append([m1[i,0],index1[i]])
# y1c2.append([m1[i,1],index2[i]])
# y1c3.append([m1[i,2],index3[i]])
# y2c1.append([m2[i,0],index1[i]])
# y2c2.append([m2[i,1],index2[i]])
# y2c3.append([m2[i,2],index3[i]])
# y3c1.append([m3[i,0],index1[i]])
# y3c2.append([m3[i,1],index2[i]])
# y3c3.append([m3[i,2],index3[i]])
# y4c1.append([m4[i,0],index1[i]])
# y4c2.append([m4[i,1],index2[i]])
# y4c3.append([m4[i,2],index3[i]])
# y5c1.append([m5[i,0],index1[i]])
# y5c2.append([m5[i,1],index2[i]])
# y5c3.append([m5[i,2],index3[i]])
yc1.sort(key = lambda x: x[1])
yc2.sort(key = lambda x: x[1])
yc3.sort(key = lambda x: x[1])
# y1c1.sort(key = lambda x: x[1])
# y1c2.sort(key = lambda x: x[1])
# y1c3.sort(key = lambda x: x[1])
# y2c1.sort(key = lambda x: x[1])
# y2c2.sort(key = lambda x: x[1])
# y2c3.sort(key = lambda x: x[1])
# y3c1.sort(key = lambda x: x[1])
# y3c2.sort(key = lambda x: x[1])
# y3c3.sort(key = lambda x: x[1])
# y4c1.sort(key = lambda x: x[1])
# y4c2.sort(key = lambda x: x[1])
# y4c3.sort(key = lambda x: x[1])
# y5c1.sort(key = lambda x: x[1])
# y5c2.sort(key = lambda x: x[1])
# y5c3.sort(key = lambda x: x[1])
samples = 900
x_data = range(samples)
# y_data_c1 = test_label[:,0]
# y_data_c2 = test_label[:,1]
# y_data_c3 = test_label[:,2]
yc1 = np.array(yc1)
yc2 = np.array(yc2)
yc3 = np.array(yc3)
# y1c1 = np.array(y1c1)
# y1c2 = np.array(y1c2)
# y1c3 = np.array(y1c3)
# y2c1 = np.array(y2c1)
# y2c2 = np.array(y2c2)
# y2c3 = np.array(y2c3)
# y3c1 = np.array(y3c1)
# y3c2 = np.array(y3c2)
# y3c3 = np.array(y3c3)
# y4c1 = np.array(y4c1)
# y4c2 = np.array(y4c2)
# y4c3 = np.array(y4c3)
# y5c1 = np.array(y5c1)
# y5c2 = np.array(y5c2)
# y5c3 = np.array(y5c3)
# y_predict_c1 = m[:,0]
# y_predict_c2 = m[:,1]
# y_predict_c3 = m[:,2]
# y_predict1_c1 = m1[:,0]
# y_predict1_c2 = m1[:,1]
# y_predict1_c3 = m1[:,2]
# y_predict2_c1 = m2[:,0]
# y_predict2_c2 = m2[:,1]
# y_predict2_c3 = m2[:,2]
# y_predict3_c1 = m3[:,0]
# y_predict3_c2 = m3[:,1]
# y_predict3_c3 = m3[:,2]
# y_predict4_c1 = m4[:,0]
# y_predict4_c2 = m4[:,1]
# y_predict4_c3 = m4[:,2]
# y_predict5_c1 = m5[:,0]
# y_predict5_c2 = m5[:,1]
# y_predict5_c3 = m5[:,2]
fig, (predict_c1, predict_c2, predict_c3) = plt.subplots(3,figsize=(20,5))
fig.suptitle('3 visual conditions')
predict_c1.plot(x_data, c1[:samples] , 'c', label='truth_c1')
predict_c2.plot(x_data, c2[:samples] , 'c', label='truth_c2')
predict_c3.plot(x_data, c3[:samples] , 'c', label='truth_c3')
predict_c1.plot(x_data, yc1[:samples,0] , 'r', label='My proposed_c1')
predict_c2.plot(x_data, yc2[:samples,0], 'r', label='My proposed_c2')
predict_c3.plot(x_data, yc3[:samples,0], 'r', label='My proposed_c3')
# predict_c1.plot(x_data, y1c1[:samples,0] , 'b', label='VGG16_c1')
# predict_c2.plot(x_data, y1c2[:samples,0] , 'b', label='VGG16_c2')
# predict_c3.plot(x_data, y1c3[:samples,0] , 'b', label='VGG16_c3')
# predict_c1.plot(x_data, y2c1[:samples,0] , 'k', label='ResNet50_c1')
# predict_c2.plot(x_data, y2c2[:samples,0] , 'k', label='ResNet50_c2')
# predict_c3.plot(x_data, y2c3[:samples,0] , 'k', label='ResNet50_c3')
# predict_c1.plot(x_data, y3c1[:samples,0] , 'm', label='Xception_c1')
# predict_c2.plot(x_data, y3c2[:samples,0] , 'm', label='Xception_c2')
# predict_c3.plot(x_data, y3c3[:samples,0] , 'm', label='Xception_c3')
# predict_c1.plot(x_data, y4c1[:samples,0] , 'y', label='DenseNet121_c1')
# predict_c2.plot(x_data, y4c2[:samples,0] , 'y', label='DenseNet121_c2')
# predict_c3.plot(x_data, y4c3[:samples,0] , 'y', label='DenseNet121_c3')
# predict_c1.plot(x_data, y5c1[:samples,0] , 'g', label='MobileNet_c1')
# predict_c2.plot(x_data, y5c2[:samples,0] , 'g', label='MobileNet_c2')
# predict_c3.plot(x_data, y5c3[:samples,0] , 'g', label='MobileNet_c3')
predict_c1.legend(bbox_to_anchor=(0.9, 1), loc='upper left', borderaxespad=0.)
predict_c2.legend(bbox_to_anchor=(0.9, 1), loc='upper left', borderaxespad=0.)
predict_c3.legend(bbox_to_anchor=(0.9, 1), loc='upper left', borderaxespad=0.)
# plt.plot(x_data, y_data_c1 , 'c', label='c1')
# plt.plot(x_data, y_data_c2 , 'r', label='c2')
# plt.plot(x_data, y_data_c3 , 'b', label='c3')
# plt.plot(x_data, y_predict1_c1 , 'c--', label='predict1_c1')
# plt.plot(x_data, y_predict1_c2 , 'r--', label='predict1_c2')
# plt.plot(x_data, y_predict1_c3 , 'b--', label='predict1_c3')
# plt.plot(x_data, y_predict2_c1 , 'c:', label='predict2_c1')
# plt.plot(x_data, y_predict2_c2 , 'r:', label='predict2_c2')
# plt.plot(x_data, y_predict2_c3 , 'b:', label='predict2_c3')
# plt.plot(x_data, y_predict3_c1 , 'c-.', label='predict3_c1')
# plt.plot(x_data, y_predict3_c2 , 'r-.', label='predict3_c2')
# plt.plot(x_data, y_predict3_c3 , 'b-.', label='predict3_c3')
# plt.legend(loc="upper right",fontsize = 'small',bbox_to_anchor=(0.5, -0.05))
# plt.title('loss')
# plt.xlabel('Epochs')
# plt.ylabel('Loss')
# plt.show()
# fig.savefig('/content/drive/My Drive/33333333_wo.png' ,dpi=800)
def calculate_mse(predict): # compare each model's MSE across the three visual conditions
y_data_c1 = test_label[:,0]
y_data_c2 = test_label[:,1]
y_data_c3 = test_label[:,2]
y_predict1_c1 = predict[:,0]/100
y_predict1_c2 = predict[:,1]/100
y_predict1_c3 = predict[:,2]/100
tmp1 = 0
tmp2 = 0
tmp3 = 0
for i in range(900):
tmp1 = tmp1 + (y_data_c1[i]- y_predict1_c1[i])**2
tmp2 = tmp2 + (y_data_c2[i]- y_predict1_c2[i])**2
tmp3 = tmp3 + (y_data_c3[i]- y_predict1_c3[i])**2
c1 = tmp1/900
c2 = tmp2/900
c3 = tmp3/900
# c1 = K.mean(K.square( y_data_c1- y_predict1_c1), axis=-1)
# c2 = K.mean(K.square( y_data_c2- y_predict1_c2), axis=-1)
# c3 = K.mean(K.square( y_data_c3- y_predict1_c3), axis=-1)
return c1, c2, c3
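# note: the per-condition loop in calculate_mse is equivalent to np.mean((test_label - predict/100)**2, axis=0)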
r = r2_score(test_label, m/100, multioutput='raw_values')
# r1 = r2_score(test_label, m1/100, multioutput='raw_values')
# r2 = r2_score(test_label, m2/100, multioutput='raw_values')
# r3 = r2_score(test_label, m3/100, multioutput='raw_values')
# r4 = r2_score(test_label, m4/100, multioutput='raw_values')
# r5 = r2_score(test_label, m5/100, multioutput='raw_values')
print(r)
# print(r1)
# print(r2)
# print(r3)
# print(r4)
# print(r5)
Model = calculate_mse(m)
# Model1 = calculate_mse(m1)
# Model2 = calculate_mse(m2)
# Model3 = calculate_mse(m3)
# Model4 = calculate_mse(m4)
# Model5 = calculate_mse(m5)
print(Model)
# print(Model1)
# print(Model2)
# print(Model3)
# print(Model4)
# print(Model5)
data = np.load('/content/drive/My Drive/Newdata/shuffle_changedin3600.npy')
label = np.load('/content/drive/My Drive/Newdata/shuffle_changedout3600.npy')
data = np.reshape(data,(3600,30,116,116,1))
split_rate= 0.7
split = int(split_rate*3600)
train_data = data[:split]
train_label = label[:split]
test_data = data[split:]
test_label = label[split:]
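# manual k-fold cross-validation: each fold holds out 1/k of the training data, trains a fresh model
# on the remainder, and records the R^2 of its predictions on the fixed held-out test set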
def k_fold(k,train_data,train_targets):
num_val_samples = len(train_data)//k
val_mses = []
val_maes = []
mse_History = []
mae_History = []
num_epochs = 50
batchsize = 6
R2 = []
Comparision = []
for i in range(k):
print('processing fold #', i)
val_data = train_data[i * num_val_samples: (i + 1) * num_val_samples]
val_targets = train_targets[i * num_val_samples: (i + 1) * num_val_samples]
partial_train_data = np.concatenate([train_data[:i * num_val_samples],train_data[(i + 1) * num_val_samples:]], axis=0)
partial_train_targets = np.concatenate([train_targets[:i * num_val_samples],train_targets[(i + 1) * num_val_samples:]], axis=0)
model = Model()
history = model.fit(partial_train_data, partial_train_targets, epochs=num_epochs, batch_size=batchsize, verbose=0)
comparision1 = (model.predict(test_data))*100
Test_label = test_label*100
Comparision.append(comparision1)
# r2_score(Test_label, comparision1, multioutput='raw_values')
R2.append(r2_score(Test_label, comparision1, multioutput='raw_values'))
# val_mse, val_mae = model.evaluate(val_data, val_targets, verbose=0)
# mse_history = history.history['loss']
# mae_history = history.history['mean_absolute_error']
# val_mses.append(val_mse)
# val_maes.append(val_mae)
# mse_History.append(mse_history)
# mae_History.append(mae_history)
# return val_mses, val_maes, mse_History, mae_History
return R2,Comparision
R2_score, Predict_results = k_fold(5,train_data,train_label)
# average_mae_history = [(np.mean([x[i] for x in mae_history]))*100 for i in range(50)]
# average_mse_history = [(np.mean([x[i] for x in mse_history]))*100 for i in range(50)]
# #model 3Dcnn
# epochs = range(1,len(average_mse_history)+1)
# plt.plot(epochs, average_mae_history,'c', label='mae' )
# plt.plot(epochs, average_mse_history,'m', label='mse' )
# plt.xlabel('Epochs')
# plt.ylabel('Validation')
# plt.show()
data = np.load('/content/drive/My Drive/Newdata/shuffle_changedin4800.npy')
label = np.load('/content/drive/My Drive/Newdata/shuffle_changedout4800.npy')
data = np.reshape(data,(4800,30,116,116,1))
split_rate= 0.7
split = int(split_rate*4800)
train_data = data[:split]
train_label = label[:split]
test_data = data[split:]
test_label = label[split:]
num_epochs = 50
batchsize = 6
model = Model()
history = model.fit(train_data, train_label, epochs=num_epochs, batch_size=batchsize, verbose=0)
# mse_history = history.history['loss']
# mae_history = history.history['mean_absolute_error']
comparision1 = (model.predict(test_data))*100
x_data = range(1080)
Test_label = test_label*100
y_data_c1 = Test_label[:,0]
y_data_c2 = Test_label[:,1]
y_data_c3 = Test_label[:,2]
y_predict1_c1 = comparision1[:,0]
y_predict1_c2 = comparision1[:,1]
y_predict1_c3 = comparision1[:,2]
fig, (predict_c1, predict_c2, predict_c3) = plt.subplots(3,figsize=(18,6))
fig.suptitle('3 visual conditions_changed')
# predict_c1.axes([0, 0.6, 1, 1])
# plt.subplots(figsize=(50, 10))
# plt.figure(figsize=(30,10)
# fig, ax = plt.subplots(figsize=(20, 10))
# predict_c1.figure(figsize=(30,10)
# bins = np.linspace(-1, 1, 21) # x-axis start and end values, split into 21 bins
# predict_c1.figure(figsize=(13,5)) # figure size
# predict_c1.xticks(bins) # set the x-axis ticks
# predict_c1.xlim(-1, 1) # x-axis start and end positions
# f = plt.figure(figsize=(20, 2))
# ax = f.add_subplot(111)
# predict_c1.figure(figsize=(13,5))
# predict_c1.autoscale(enable=True, axis='x', tight=True)#plt.axis('tight')
# predict_c1
predict_c1.plot(x_data, y_data_c1 , 'r', label='truth_c1')
predict_c2.plot(x_data, y_data_c2 , 'r', label='truth_c2')
predict_c3.plot(x_data, y_data_c3 , 'r', label='truth_c3')
predict_c1.plot(x_data, y_predict1_c1 , 'g', label='predict1_c1')
predict_c2.plot(x_data, y_predict1_c2 , 'g', label='predict1_c2')
predict_c3.plot(x_data, y_predict1_c3 , 'g', label='predict1_c3')
predict_c1.legend(bbox_to_anchor=(0.9, 1), loc='upper left', borderaxespad=0.)
predict_c2.legend(bbox_to_anchor=(0.9, 1), loc='upper left', borderaxespad=0.)
predict_c3.legend(bbox_to_anchor=(0.9, 1), loc='upper left', borderaxespad=0.)
plt.savefig('/content/drive/My Drive/changed3600_50epo_0.7.png' ,dpi=1200)
from sklearn.metrics import r2_score
r2_score(Test_label, comparision1, multioutput='raw_values')
<jupyter_output><empty_output><jupyter_text># Do not modify anything below; these are earlier experiments<jupyter_code>def Model():
model = Sequential()
model.add(Conv3D(32, kernel_size=(2, 3, 3), activation='relu', kernel_initializer='he_uniform', input_shape=(30,116,116, 1)))
model.add(MaxPooling3D(pool_size=(2, 2, 2)))
model.add(BatchNormalization())
# model.add(Dropout(0.3))
model.add(Conv3D(64, kernel_size=(2, 3, 3), activation='relu', kernel_initializer='he_uniform'))
model.add(MaxPooling3D(pool_size=(2, 2, 2)))
model.add(BatchNormalization())
# model.add(Dropout(0.5))
model.add(Conv3D(128, kernel_size=(2, 3, 3), activation='relu', kernel_initializer='he_uniform'))
model.add(MaxPooling3D(pool_size=(2, 2, 2)))
# model.add(Dropout(0.5))
model.add(BatchNormalization())
model.add(Conv3D(256, kernel_size=(1, 3, 3), activation='relu', kernel_initializer='he_uniform'))
model.add(MaxPooling3D(pool_size=(2, 2, 2)))
model.add(BatchNormalization())
# model.add(Dropout(0.5))
model.add(Flatten())
model.add(Dense(256, activation='relu', kernel_initializer='he_uniform'))
model.add(Dense(3, activation='softmax'))
model.summary()
from keras.optimizers import RMSprop,Adam
model.compile(optimizer = Adam(lr=0.00001),
loss = 'mean_squared_error',
metrics = ['mae'])
return model
data = np.load('/content/drive/My Drive/Newdata/shuffle_changedin3600.npy')
label = np.load('/content/drive/My Drive/Newdata/shuffle_changedout3600.npy')
data = np.reshape(data,(3600,30,116,116,1))
split_rate= 0.7
split = int(split_rate*3600)
train_data = data[:split]
train_label = label[:split]
test_data = data[split:]
test_label = label[split:]
def k_fold(k,train_data,train_targets):
num_val_samples = len(train_data)//k
val_mses = []
val_maes = []
mse_History = []
mae_History = []
num_epochs = 50
batchsize = 6
R2 = []
Comparision = []
for i in range(k):
print('processing fold #', i)
val_data = train_data[i * num_val_samples: (i + 1) * num_val_samples]
val_targets = train_targets[i * num_val_samples: (i + 1) * num_val_samples]
partial_train_data = np.concatenate([train_data[:i * num_val_samples],train_data[(i + 1) * num_val_samples:]], axis=0)
partial_train_targets = np.concatenate([train_targets[:i * num_val_samples],train_targets[(i + 1) * num_val_samples:]], axis=0)
model = Model()
history = model.fit(partial_train_data, partial_train_targets, epochs=num_epochs, batch_size=batchsize, verbose=0)
comparision1 = (model.predict(test_data))*100
Test_label = test_label*100
Comparision.append(comparision1)
# r2_score(Test_label, comparision1, multioutput='raw_values')
R2.append(r2_score(Test_label, comparision1, multioutput='raw_values'))
# val_mse, val_mae = model.evaluate(val_data, val_targets, verbose=0)
# mse_history = history.history['loss']
# mae_history = history.history['mean_absolute_error']
# val_mses.append(val_mse)
# val_maes.append(val_mae)
# mse_History.append(mse_history)
# mae_History.append(mae_history)
# return val_mses, val_maes, mse_History, mae_History
return R2,Comparision
R2_score, Predict_results = k_fold(5,train_data,train_label)
# average_mae_history = [(np.mean([x[i] for x in mae_history]))*100 for i in range(50)]
# average_mse_history = [(np.mean([x[i] for x in mse_history]))*100 for i in range(50)]
# #model 3Dcnn
# epochs = range(1,len(average_mse_history)+1)
# plt.plot(epochs, average_mae_history,'c', label='mae' )
# plt.plot(epochs, average_mse_history,'m', label='mse' )
# plt.xlabel('Epochs')
# plt.ylabel('Validation')
# plt.show()
data = np.load('/content/drive/My Drive/Newdata/shuffle_changedin4800.npy')
label = np.load('/content/drive/My Drive/Newdata/shuffle_changedout4800.npy')
data = np.reshape(data,(4800,30,116,116,1))
split_rate= 0.7
split = int(split_rate*4800)
train_data = data[:split]
train_label = label[:split]
test_data = data[split:]
test_label = label[split:]
num_epochs = 50
batchsize = 6
model = Model()
history = model.fit(train_data, train_label, epochs=num_epochs, batch_size=batchsize, verbose=0)
# mse_history = history.history['loss']
# mae_history = history.history['mean_absolute_error']
comparision1 = (model.predict(test_data))*100
x_data = range(1080)
Test_label = test_label*100
y_data_c1 = Test_label[:,0]
y_data_c2 = Test_label[:,1]
y_data_c3 = Test_label[:,2]
y_predict1_c1 = comparision1[:,0]
y_predict1_c2 = comparision1[:,1]
y_predict1_c3 = comparision1[:,2]
fig, (predict_c1, predict_c2, predict_c3) = plt.subplots(3,figsize=(18,6))
fig.suptitle('3 visual conditions_changed')
# predict_c1.axes([0, 0.6, 1, 1])
# plt.subplots(figsize=(50, 10))
# plt.figure(figsize=(30,10)
# fig, ax = plt.subplots(figsize=(20, 10))
# predict_c1.figure(figsize=(30,10)
# bins = np.linspace(-1, 1, 21) # x-axis start and end values, split into 21 bins
# predict_c1.figure(figsize=(13,5)) # figure size
# predict_c1.xticks(bins) # set the x-axis ticks
# predict_c1.xlim(-1, 1) # x-axis start and end positions
# f = plt.figure(figsize=(20, 2))
# ax = f.add_subplot(111)
# predict_c1.figure(figsize=(13,5))
# predict_c1.autoscale(enable=True, axis='x', tight=True)#plt.axis('tight')
# predict_c1
predict_c1.plot(x_data, y_data_c1 , 'r', label='truth_c1')
predict_c2.plot(x_data, y_data_c2 , 'r', label='truth_c2')
predict_c3.plot(x_data, y_data_c3 , 'r', label='truth_c3')
predict_c1.plot(x_data, y_predict1_c1 , 'g', label='predict1_c1')
predict_c2.plot(x_data, y_predict1_c2 , 'g', label='predict1_c2')
predict_c3.plot(x_data, y_predict1_c3 , 'g', label='predict1_c3')
predict_c1.legend(bbox_to_anchor=(0.9, 1), loc='upper left', borderaxespad=0.)
predict_c2.legend(bbox_to_anchor=(0.9, 1), loc='upper left', borderaxespad=0.)
predict_c3.legend(bbox_to_anchor=(0.9, 1), loc='upper left', borderaxespad=0.)
plt.savefig('/content/drive/My Drive/changed3600_50epo_0.7.png' ,dpi=1200)
from sklearn.metrics import r2_score
r2_score(Test_label, comparision1, multioutput='raw_values')
print(R2_score)
fig.set_size_inches(18.5, 10.5, forward = True) # set_size_inches belongs to the Figure, not an Axes object
# !cat ~/.keras/keras.json
# from keras import backend
# backend.set_image_data_format('channels_first')
# print(backend.image_data_format())
# !cat ~/.keras/keras.json
# # print(open("~/.keras/keras.json").read())
# content = {"epsilon": 1e-07,
# "floatx": "float32",
# "image_data_format": "channels_first",
# "backend": "tensorflow"}
# with open(" ~/.keras/keras.json",'w') as f:
# f.write(str(content))
# # print(open("~/.keras/keras.json").read())
# tmp = data2[3,0,:,:]
# # np.set_printoptions(threshold=np.inf) # print the full array without truncation
# print(tmp.shape)
# # print(tmp[:,:])
# tmp=tmp.astype(np.float) # conver to float and plot
# # %matplotlib qt
# %matplotlib inline
# plt.imshow(tmp,cmap='GnBu')
# plt.savefig('/content/drive/My Drive/input_example.png' ,dpi=2400)
# aaa = np.load('/content/drive/My Drive/output/M2.npy')
# ppp = aaa[0]
# print(ppp.shape)
# ppp=ppp.astype(np.float) # conver to float and plot
# # %matplotlib qt
# %matplotlib inline
# plt.imshow(ppp,cmap='gray')
# plt.savefig('/content/drive/My Drive/output_example.png' ,dpi=2400)
def SPLIT(data,split):
data, label = shuffle(data, Label, random_state=20)
train_data = data[:split]
train_label = label[:split]
test_data = data[split:]
test_label = label[split:]
return train_data, train_label, test_data, test_label
train_data, train_label, test_data, test_label = SPLIT(data,72)
# train_data1, train_label1, test_data1, test_label1 = SPLIT(data1,72)
# train_data2, train_label2, test_data2, test_label2 = SPLIT(data2,72)
def k_fold(k,train_data,train_targets):
num_val_samples = len(train_data)//k
val_mses = []
val_maes = []
mse_History = []
mae_History = []
num_epochs = 100
batchsize = 6
for i in range(k):
print('processing fold #', i)
val_data = train_data[i * num_val_samples: (i + 1) * num_val_samples]
val_targets = train_targets[i * num_val_samples: (i + 1) * num_val_samples]
partial_train_data = np.concatenate([train_data[:i * num_val_samples],train_data[(i + 1) * num_val_samples:]], axis=0)
partial_train_targets = np.concatenate([train_targets[:i * num_val_samples],train_targets[(i + 1) * num_val_samples:]], axis=0)
model = threeD_CNN1()
history = model.fit(partial_train_data, partial_train_targets, epochs=num_epochs, batch_size=batchsize, verbose=0)
val_mse, val_mae = model.evaluate(val_data, val_targets, verbose=0)
mse_history = history.history['loss']
mae_history = history.history['mean_absolute_error']
val_mses.append(val_mse)
val_maes.append(val_mae)
mse_History.append(mse_history)
mae_History.append(mae_history)
return val_mses, val_maes, mse_History, mae_History
val_mses1, val_maes1, mse_history1, mae_history1 = k_fold(4,train_data,train_label)
average_mae_history1 = [(np.mean([x[i] for x in mae_history1]))*100 for i in range(100)]
average_mse_history1= [(np.mean([x[i] for x in mse_history1]))*100 for i in range(100)]
#model 3Dcnn
epochs = range(1,len(average_mse_history1)+1)
plt.plot(epochs, average_mae_history1,'c', label='mae' )
plt.plot(epochs, average_mse_history1,'m', label='mse' )
plt.xlabel('Epochs')
plt.ylabel('Validation')
plt.show()
# batchsize = 6
# model1 = threeD_CNN()
# history1 = model1.fit(train_data, train_label, epochs = 100, batch_size = batchsize, verbose=0)
# model2 = threeD_CNN1()
# history2 = model2.fit(train_data, train_label, epochs = 100, batch_size = batchsize,verbose=0)
# model3 = threeD_CNN2()
# history3 = model3.fit(train_data, train_label, epochs = 100,batch_size = batchsize, verbose=0)
def k_fold(k,train_data,train_targets):
num_val_samples = len(train_data)//k
val_mses = []
val_maes = []
mse_History = []
mae_History = []
num_epochs = 100
batchsize = 6
for i in range(k):
print('processing fold #', i)
val_data = train_data[i * num_val_samples: (i + 1) * num_val_samples]
val_targets = train_targets[i * num_val_samples: (i + 1) * num_val_samples]
partial_train_data = np.concatenate([train_data[:i * num_val_samples],train_data[(i + 1) * num_val_samples:]], axis=0)
partial_train_targets = np.concatenate([train_targets[:i * num_val_samples],train_targets[(i + 1) * num_val_samples:]], axis=0)
model = threeD_CNN2()
history = model.fit(partial_train_data, partial_train_targets, epochs=num_epochs, batch_size=batchsize, verbose=0)
val_mse, val_mae = model.evaluate(val_data, val_targets, verbose=0)
mse_history = history.history['loss']
mae_history = history.history['mean_absolute_error']
val_mses.append(val_mse)
val_maes.append(val_mae)
mse_History.append(mse_history)
mae_History.append(mae_history)
return val_mses, val_maes, mse_History, mae_History
val_mses2, val_maes2, mse_history2, mae_history2 = k_fold(4,train_data,train_label)
average_mae_history2 = [(np.mean([x[i] for x in mae_history2]))*100 for i in range(100)]
average_mse_history2 = [(np.mean([x[i] for x in mse_history2]))*100 for i in range(100)]
#model 3Dcnn
epochs = range(1,len(average_mse_history2)+1)
plt.plot(epochs, average_mae_history2,'c', label='mae' )
plt.plot(epochs, average_mse_history2,'m', label='mse' )
plt.xlabel('Epochs')
plt.ylabel('Validation')
plt.show()
# batchsize = 6
# model1 = threeD_CNN()
# history1 = model1.fit(train_data, train_label, epochs = 100, batch_size = batchsize, verbose=0)
# model2 = threeD_CNN1()
# history2 = model2.fit(train_data, train_label, epochs = 100, batch_size = batchsize,verbose=0)
# model3 = threeD_CNN2()
# history3 = model3.fit(train_data, train_label, epochs = 100,batch_size = batchsize, verbose=0)
epochs = range(1,len(average_mse_history)+1)
plt.plot(epochs, average_mae_history,'c', label='mae' )
plt.plot(epochs, average_mse_history,'c--', label='mse' )
plt.plot(epochs, average_mae_history1,'m', label='mae1' )
plt.plot(epochs, average_mse_history1,'m--', label='mse1' )
plt.plot(epochs, average_mae_history2,'r', label='mae2' )
plt.plot(epochs, average_mse_history2,'r--', label='mse2' )
plt.xlabel('Epochs')
plt.ylabel('Average_performance')
plt.legend()
# plt.show()
plt.savefig('/content/drive/My Drive/original_30_3models_comparison.png' ,dpi=1200)
mse1 = history1.history['loss']
mae1 = history1.history['mean_absolute_error']
mse2 = history2.history['loss']
mae2 = history2.history['mean_absolute_error']
mse3 = history3.history['loss']
mae3 = history3.history['mean_absolute_error']
# val_loss = history.history['val_loss']
# acc = history.history['acc']
# val_acc=history.history['val_acc']
epochs = range(1,len(mse1) +1)
plt.plot(epochs, mse1, 'c', label='mse1')
plt.plot(epochs, mse2, 'm', label='mse2')
plt.plot(epochs, mse3, 'r', label='mse3')
plt.plot(epochs, mae1, 'c--', label='mae1')
plt.plot(epochs, mae2, 'm--', label='mae2')
plt.plot(epochs, mae3, 'r--', label='mae3')
plt.title('loss')
plt.xlabel('Epochs')
plt.ylabel('Loss')
plt.legend()
plt.show()
# plt.savefig('/content/drive/My Drive/original_30_3models_comparison.png' ,dpi=2400)
comparision1 = (model1.predict(test_data))*100
comparision2 =(model2.predict(test_data))*100
comparision3 = (model3.predict(test_data))*100
print(comparision1[0,0])
print(comparision2)
x_data = range(24)
Test_label = test_label*100
y_data_c1 = Test_label[:,0]
y_data_c2 = Test_label[:,1]
y_data_c3 = Test_label[:,2]
y_predict1_c1 = comparision1[:,0]
y_predict1_c2 = comparision1[:,1]
y_predict1_c3 = comparision1[:,2]
y_predict2_c1 = comparision2[:,0]
y_predict2_c2 = comparision2[:,1]
y_predict2_c3 = comparision2[:,2]
y_predict3_c1 = comparision3[:,0]
y_predict3_c2 = comparision3[:,1]
y_predict3_c3 = comparision3[:,2]
fig, (predict_c1, predict_c2, predict_c3) = plt.subplots(3)
fig.suptitle('3 visual conditions')
predict_c1.plot(x_data, y_data_c1 , 'c', label='truth_c1')
predict_c2.plot(x_data, y_data_c2 , 'r', label='truth_c2')
predict_c3.plot(x_data, y_data_c3 , 'b', label='truth_c3')
predict_c1.plot(x_data, y_predict1_c1 , 'c--', label='predict1_c1')
predict_c2.plot(x_data, y_predict1_c2 , 'r--', label='predict1_c2')
predict_c3.plot(x_data, y_predict1_c3 , 'b--', label='predict1_c3')
predict_c1.plot(x_data, y_predict2_c1 , 'c:', label='predict2_c1')
predict_c2.plot(x_data, y_predict2_c2 , 'r:', label='predict2_c2')
predict_c3.plot(x_data, y_predict2_c3 , 'b:', label='predict2_c3')
predict_c1.plot(x_data, y_predict3_c1 , 'c-.', label='predict2_c1')
predict_c2.plot(x_data, y_predict3_c2 , 'r-.', label='predict2_c2')
predict_c3.plot(x_data, y_predict3_c3 , 'b-.', label='predict2_c3')
predict_c1.legend(bbox_to_anchor=(1.05, 1), loc='upper left', borderaxespad=0.)
predict_c2.legend(bbox_to_anchor=(1.05, 1), loc='upper left', borderaxespad=0.)
predict_c3.legend(bbox_to_anchor=(1.05, 1), loc='upper left', borderaxespad=0.)
# plt.title('loss')
# plt.xlabel('Epochs')
# plt.ylabel('Loss')
# plt.show()
# fig.savefig('/content/drive/My Drive/ready_data_6C/matrix_input/1.png' ,bbox_inches='tight')
print(comparision1.shape)
y_data_c1 = test_label[:,0]
print(y_data_c1.shape)
def calculate_mse(predict): #对比三种视觉情况 每个model的mse
y_data_c1 = test_label[:,0]
y_data_c2 = test_label[:,1]
y_data_c3 = test_label[:,2]
y_predict1_c1 = predict[:,0]
y_predict1_c2 = predict[:,1]
y_predict1_c3 = predict[:,2]
tmp1 = 0
tmp2 = 0
tmp3 = 0
for i in range(24):
tmp1 = tmp1 + (y_data_c1[i]- y_predict1_c1[i])**2
tmp2 = tmp2 + (y_data_c2[i]- y_predict1_c2[i])**2
tmp3 = tmp3 + (y_data_c3[i]- y_predict1_c3[i])**2
c1 = tmp1 /24
c2 = tmp2 /24
c3 = tmp3 /24
# c1 = K.mean(K.square( y_data_c1- y_predict1_c1), axis=-1)
# c2 = K.mean(K.square( y_data_c2- y_predict1_c2), axis=-1)
# c3 = K.mean(K.square( y_data_c3- y_predict1_c3), axis=-1)
return c1 , c2 ,c3
# from keras import backend as K
Model1 = calculate_mse(comparision1)
Model2 = calculate_mse(comparision2)
print(Model1)
print(Model2)
<jupyter_output><empty_output>
|
no_license
|
/colab/Colab Notebooks/30到6_Matrix_run_file.ipynb
|
jackxuxu/Thesis
| 2 |
<jupyter_start><jupyter_text>## Challenge
* The Inter-American Development Bank is asking the Kaggle community for help with income qualification for some of the world's poorest families. Are you up for the challenge?
* Here's the backstory: Many social programs have a hard time making sure the right people are given enough aid. It’s especially tricky when a program focuses on the poorest segment of the population. The world’s poorest typically can’t provide the necessary income and expense records to prove that they qualify.## Road Map: Clean Data / Feature Engineering / Encoding / Handle Null Values
1. Load data.....
1. Clean features...
1. Extract features...
1. Encode data....
1. Fill NA values.....
1. Prepare the model.....
1. Predict values on the test dataset......
1. Submit your results .......<jupyter_code>def dprint(*args, **kwargs):
print("[{}] ".format(datetime.datetime.now().strftime("%Y-%m-%d %H:%M")) + \
" ".join(map(str,args)), **kwargs)
id_name = 'Id'
target_name = 'Target'
# Load data
train = pd.read_csv('../input/train.csv')
test = pd.read_csv('../input/test.csv')
train['is_test'] = 0
test['is_test'] = 1
df_all = pd.concat([train, test], axis=0)
dprint('Clean features...')
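# the 'dependency' column mixes numbers with non-numeric strings (e.g. 'yes'/'no'); non-numeric entries become
# one-hot indicator columns and the original column is coerced to float (strings turn into NaN)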
cols = ['dependency']
for c in tqdm(cols):
x = df_all[c].values
strs = []
for i, v in enumerate(x):
try:
val = float(v)
except:
strs.append(v)
val = np.nan
x[i] = val
strs = np.unique(strs)
for s in strs:
df_all[c + '_' + s] = df_all[c].apply(lambda x: 1 if x == s else 0)
df_all[c] = x
df_all[c] = df_all[c].astype(float)
dprint("Done.")
dprint("Extracting features...")
def extract_features(df):
df['bedrooms_to_rooms'] = df['bedrooms']/df['rooms']
df['rent_to_rooms'] = df['v2a1']/df['rooms']
df['rent_to_bedrooms'] = df['v2a1']/df['bedrooms']
df['tamhog_to_rooms'] = df['tamhog']/df['rooms'] # tamhog - size of the household
df['tamhog_to_bedrooms'] = df['tamhog']/df['bedrooms']
df['r4t3_to_tamhog'] = df['r4t3']/df['tamhog'] # r4t3 - Total persons in the household
df['r4t3_to_rooms'] = df['r4t3']/df['rooms'] # r4t3 - Total persons in the household
df['r4t3_to_bedrooms'] = df['r4t3']/df['bedrooms']
df['rent_to_r4t3'] = df['v2a1']/df['r4t3']
df['v2a1_to_r4t3'] = df['v2a1']/(df['r4t3'] - df['r4t1'])
df['hhsize_to_rooms'] = df['hhsize']/df['rooms']
df['hhsize_to_bedrooms'] = df['hhsize']/df['bedrooms']
df['rent_to_hhsize'] = df['v2a1']/df['hhsize']
df['qmobilephone_to_r4t3'] = df['qmobilephone']/df['r4t3']
df['qmobilephone_to_v18q1'] = df['qmobilephone']/df['v18q1']
extract_features(train)
extract_features(test)
dprint("Done.")
from sklearn.preprocessing import LabelEncoder
def encode_data(df):
yes_no_map = {'no': 0, 'yes': 1}
df['dependency'] = df['dependency'].replace(yes_no_map).astype(np.float32)
df['edjefe'] = df['edjefe'].replace(yes_no_map).astype(np.float32)
df['edjefa'] = df['edjefa'].replace(yes_no_map).astype(np.float32)
df['idhogar'] = LabelEncoder().fit_transform(df['idhogar'])
dprint("Encoding Data....")
encode_data(train)
encode_data(test)
dprint("Done...")
def do_features(df):
feats_div = [('children_fraction', 'r4t1', 'r4t3'),
('working_man_fraction', 'r4h2', 'r4t3'),
('all_man_fraction', 'r4h3', 'r4t3'),
('human_density', 'tamviv', 'rooms'),
('human_bed_density', 'tamviv', 'bedrooms'),
('rent_per_person', 'v2a1', 'r4t3'),
('rent_per_room', 'v2a1', 'rooms'),
('mobile_density', 'qmobilephone', 'r4t3'),
('tablet_density', 'v18q1', 'r4t3'),
('mobile_adult_density', 'qmobilephone', 'r4t2'),
('tablet_adult_density', 'v18q1', 'r4t2'),
#('', '', ''),
]
feats_sub = [('people_not_living', 'tamhog', 'tamviv'),
('people_weird_stat', 'tamhog', 'r4t3')]
for f_new, f1, f2 in feats_div:
df['fe_' + f_new] = (df[f1] / df[f2]).astype(np.float32)
for f_new, f1, f2 in feats_sub:
df['fe_' + f_new] = (df[f1] - df[f2]).astype(np.float32)
# aggregation rules over household
aggs_num = {'age': ['min', 'max', 'mean'],
'escolari': ['min', 'max', 'mean']
}
aggs_cat = {'dis': ['mean']}
for s_ in ['estadocivil', 'parentesco', 'instlevel']:
for f_ in [f_ for f_ in df.columns if f_.startswith(s_)]:
aggs_cat[f_] = ['mean', 'count']
# aggregation over household
for name_, df_ in [('18', df.query('age >= 18'))]:
df_agg = df_.groupby('idhogar').agg({**aggs_num, **aggs_cat}).astype(np.float32)
df_agg.columns = pd.Index(['agg' + name_ + '_' + e[0] + "_" + e[1].upper() for e in df_agg.columns.tolist()])
df = df.join(df_agg, how='left', on='idhogar')
del df_agg
# do something advanced above...
# Drop SQB variables, as they are just squres of other vars
df.drop([f_ for f_ in df.columns if f_.startswith('SQB') or f_ == 'agesq'], axis=1, inplace=True)
# Drop id's
df.drop(['Id', 'idhogar'], axis=1, inplace=True)
# Drop repeated columns
df.drop(['hhsize', 'female', 'area2'], axis=1, inplace=True)
return df
dprint("Do_feature Engineering....")
train = do_features(train)
test = do_features(test)
dprint("Done....")
dprint("Fill Na value....")
train = train.fillna(0)
test = test.fillna(0)
dprint("Done....")
train.shape,test.shape
cols_to_drop = [
id_name,
target_name,
]
X = train.drop(cols_to_drop, axis=1, errors='ignore')
y = train[target_name].values
y = pd.get_dummies(y).values
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.20, random_state=42, stratify=y)
X_train.shape,y_train.shape,X_test.shape,y_test.shape
import keras
from keras.models import Sequential
from keras.layers import Dense, Dropout, Flatten
from keras.layers import Conv2D, MaxPooling2D
from keras.layers.normalization import BatchNormalization
from keras.optimizers import Adam
from keras import regularizers
input_dim = len(X_train.columns)
input_dim
input_dim = len(X_train.columns)
batch_size=32
model = Sequential()
# first input layer with first hidden layer in a single statement
model.add( Dense(120, input_shape=(input_dim,), activation='tanh') )
# 120 is the size (no. of neurons) of the first hidden layer; input_dim is the no. of features in the input layer
# input_shape=(input_dim,) can also be written as the keyword argument input_dim
# second hidden layer
model.add(Dense(64,activation='relu')) # 64 = no. of neurons in the second hidden layer
# third hidden layer
model.add(Dense(32,activation='relu')) # 32 = no. of neurons in the third hidden layer
# output layer
model.add(Dense(4,activation='softmax')) # 4 = no. of neurons in the output layer, one per target class
# compile method receives three arguments: "an optimizer", "a loss function" and "a list of metrics"
model.compile(Adam(lr=0.02),'mse', ['accuracy'])
# we use "binary_crossentropy" for binary classification problems and
# "categorical_crossentropy" for multiclass classification problems
# the compile statement can also be written as:-
# model.compile(optimizer=Adam(lr=0.04), loss='categorical_crossentropy',metrics=['accuracy'])
# we can give more than one metrics like ['accuracy', 'mae', 'mape']
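# note: for this 4-class one-hot target, categorical cross-entropy would be the conventional choice, e.g.
# model.compile(optimizer=Adam(lr=0.02), loss='categorical_crossentropy', metrics=['accuracy'])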
model.summary()
model.fit(X_train, y_train, steps_per_epoch=850 ,epochs = 10)
scores = model.evaluate(X_test, y_test)
print("\n%s: %.2f%%" % (model.metrics_names[1], scores[1]*100))
predictions = model.predict_classes(test)
list(set(predictions))
sub = pd.read_csv("../input/sample_submission.csv")
sub['Target'] = predictions
sub.to_csv("keras.csv", index= False)<jupyter_output><empty_output>
|
no_license
|
/datasets/costa-rican-household-poverty-prediction/kernels/0.12---nikitpatel---keras-deeplearning-classification.ipynb
|
mindis/GDS
| 1 |
<jupyter_start><jupyter_text># 1) Derive the descriptive statistics of the data and discuss the points you find remarkable.<jupyter_code>usEducation.head()
fill_list = ["enroll", "total_revenue", "federal_revenue",
"state_revenue", "federal_revenue", "total_expenditure",
"instruction_expenditure", "support_services_expenditure",
"other_expenditure", "capital_outlay_expenditure", "grades_pk_g",
"grades_kg_g", "grades_4_g", "grades_8_g", "grades_12_g", "grades_all_g",
"avg_math_4_score", "avg_math_8_score",'avg_reading_4_score','avg_reading_4_score','avg_reading_8_score']
for col in fill_list:
usEducation.loc[:, col] = usEducation.loc[:, col].fillna(usEducation.loc[:, col].mean())
usEducation.isnull().sum()/usEducation.isnull().count()
years = usEducation["year"].unique()
for col in fill_list:
for year in years:
usEducation.loc[usEducation["year"] == year, col] = usEducation.loc[usEducation["year"] == year, col].fillna(
usEducation[usEducation["year"] == year][col].mean())
usEducation.describe()<jupyter_output><empty_output><jupyter_text>1) Total Revenue and Federal Revenue are close in numbers, meaning most of the revenue comes from federal sources.
2) Total Expenditure is greater than Total Revenue when we compare the mean, median, and max; more money was spent than was received as revenue.
3) Instruction Expenditure is about half of Total Revenue.
4) On average, the math scores for each grade are higher than the average reading scores. The standard deviation of the average math scores is about the same as that of the reading scores.
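A quick way to read observation 2) straight off the summary statistics (a minimal sketch, reusing the usEducation frame prepared above):<jupyter_code># side-by-side look at revenue vs. expenditure summary statistics (sketch)
summary = usEducation[['total_revenue', 'total_expenditure']].describe()
# mean, median (50%) and max; if observation 2) holds, expenditure exceeds revenue in each row
print(summary.loc[['mean', '50%', 'max']])<jupyter_output><empty_output><jupyter_text>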
# 2) Choose a state (e.g. California) and draw a line graph of its total revenues and total expenditures along the years.<jupyter_code>plt.plot(usEducation.loc[usEducation.state=='MINNESOTA', 'year'],
usEducation.loc[usEducation.state=='MINNESOTA', 'total_revenue'],
label ='Total Revenue')
plt.title("Minnesota's Total Revenue VS Total Expenditure")
plt.plot(usEducation.loc[usEducation.state=='MINNESOTA', 'year'],
usEducation.loc[usEducation.state=='MINNESOTA', 'total_expenditure'],
label ='Total Expenditure')
plt.legend()
plt.show()<jupyter_output><empty_output><jupyter_text># 3) In your chosen state, which of the lessons are the students more successful, math or reading?<jupyter_code>plt.title('The difference in reading and math scores for 4')
plt.plot(usEducation.loc[usEducation.state=='MINNESOTA', 'year'],
usEducation.loc[usEducation.state=='MINNESOTA', 'avg_math_4_score'],
label ='Avg Math Score')
plt.plot(usEducation.loc[usEducation.state=='MINNESOTA', 'year'],
usEducation.loc[usEducation.state == 'MINNESOTA', 'avg_reading_4_score'],
label ='Avg Reading Score')
plt.legend()
plt.show()
plt.title('The difference in reading and math scores for 8')
plt.plot(usEducation.loc[usEducation.state=='MINNESOTA', 'year'],
usEducation.loc[usEducation.state=='MINNESOTA', 'avg_math_8_score'],
label = 'Avg Math Score')
plt.plot(usEducation.loc[usEducation.state=='MINNESOTA', 'year'],
usEducation.loc[usEducation.state =='MINNESOTA', 'avg_reading_8_score'],
label = 'Avg Reading Score')
plt.legend()
plt.show()
<jupyter_output><empty_output><jupyter_text>It seems students are more successful in math than reading.# 4) What are the distributions of the math and reading scores in the sample?<jupyter_code>plt.figure(figsize=(20,10))
plt.subplot(2,2,1)
plt.hist(usEducation.avg_math_4_score.dropna())
plt.title("histogram of {}".format("avg_math_4_score"))
plt.subplot(2,2,2)
plt.hist(usEducation.avg_reading_4_score.dropna())
plt.title("histogram of {}".format("avg_reading_4_score"))
plt.subplot(2,2,3)
plt.hist(usEducation.avg_math_8_score.dropna())
plt.title("histogram of {}".format("avg_math_8_score"))
plt.subplot(2,2,4)
plt.hist(usEducation.avg_reading_8_score.dropna())
plt.title("histogram of {}".format("avg_reading_8_score"))
plt.show()<jupyter_output><empty_output><jupyter_text>Distributions are not normal; all are right-skewed.# 5) Notice that there are too many missing values for math and reading scores. Fill out the missing values using mean, median and linear interpolation. Then compare the effects of these techniques on the distributions of the score variables.<jupyter_code>postgres_user = 'dsbc_student'
postgres_pw = '7*.8G9QH21'
postgres_host = '142.93.121.174'
postgres_port = '5432'
postgres_db = 'useducation'
engine = create_engine('postgresql://{}:{}@{}:{}/{}'.format(
postgres_user, postgres_pw, postgres_host, postgres_port, postgres_db))
usEducation = pd.read_sql_query('select * from useducation',con=engine)
engine.dispose()
usEducation.columns = usEducation.columns.str.lower()
plt.figure(figsize=(20,20))
plt.subplot(4,4,1)
plt.hist(usEducation.avg_math_4_score.dropna())
plt.title("histogram of {} (original)".format("avg_math_4_score"))
plt.subplot(4,4,2)
plt.hist(usEducation.avg_math_4_score.interpolate())
plt.title("histogram of {} (interpolated)".format("avg_reading_4_score"))
plt.subplot(4,4,3)
plt.hist(usEducation.avg_math_4_score.fillna(usEducation.avg_math_4_score.median()))
plt.title("histogram of {} (filled with median)".format("avg_math_4_score"))
plt.subplot(4,4,4)
plt.hist(usEducation.avg_math_4_score.fillna(usEducation.avg_math_4_score.mean()))
plt.title("histogram of {} (filled with mean)".format("avg_math_4_score"))
plt.subplot(4,4,5)
plt.hist(usEducation.avg_reading_4_score.dropna())
plt.title("histogram of {} (original)".format("avg_reading_4_score"))
plt.subplot(4,4,6)
plt.hist(usEducation.avg_reading_4_score.interpolate())
plt.title("histogram of {} (interpolated)".format("avg_reading_4_score"))
plt.subplot(4,4,7)
plt.hist(usEducation.avg_reading_4_score.fillna(usEducation.avg_reading_4_score.median()))
plt.title("histogram of {} (filled with median)".format("avg_reading_4_score"))
plt.subplot(4,4,8)
plt.hist(usEducation.avg_reading_4_score.fillna(usEducation.avg_reading_4_score.mean()))
plt.title("histogram of {} (filled with mean)".format("avg_reading_4_score"))
plt.subplot(4,4,9)
plt.hist(usEducation.avg_math_8_score.dropna())
plt.title("histogram of {} (original)".format("avg_math_8_score"))
plt.subplot(4,4,10)
plt.hist(usEducation.avg_math_8_score.interpolate())
plt.title("histogram of {} (interpolated)".format("avg_math_8_score"))
plt.subplot(4,4,11)
plt.hist(usEducation.avg_math_8_score.fillna(usEducation.avg_math_8_score.median()))
plt.title("histogram of {} (filled with median)".format("avg_math_8_score"))
plt.subplot(4,4,12)
plt.hist(usEducation.avg_math_8_score.fillna(usEducation.avg_math_8_score.mean()))
plt.title("histogram of {} (filled with mean)".format("avg_math_8_score"))
plt.subplot(4,4,13)
plt.hist(usEducation.avg_reading_8_score.dropna())
plt.title("histogram of {} (original)".format("avg_reading_8_score"))
plt.subplot(4,4,14)
plt.hist(usEducation.avg_reading_8_score.interpolate().dropna())
plt.title("histogram of {} (interpolated)".format("avg_reading_8_score"))
plt.subplot(4,4,15)
plt.hist(usEducation.avg_reading_8_score.fillna(usEducation.avg_reading_8_score.median()))
plt.title("histogram of {} (filled with median)".format("avg_reading_8_score"))
plt.subplot(4,4,16)
plt.hist(usEducation.avg_reading_8_score.fillna(usEducation.avg_reading_8_score.mean()))
plt.title("histogram of {} (filled with mean)".format("avg_reading_8_score"))
plt.tight_layout()
plt.show()<jupyter_output><empty_output>
|
no_license
|
/explorationPhase1.ipynb
|
mysticalScientist23/HR-Employee-Attrition
| 5 |
<jupyter_start><jupyter_text><jupyter_code>R = 6371.0 #* 1000 # Earth radius in km (apply the commented * 1000 factor for metres)
atan = math.atan
sin = math.sin
cos = math.cos
pi = math.pi
answer = 4
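# walk around the 'answer' corner points and compute the great-circle distance between consecutive
# corners (closing the loop back to the first one) with the Haversine formula and the Earth radius R above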
distancelist = []
for i in range(0,answer):
if i+1 == (answer):
lon1 = latlist[answer-1]
lon2 = latlist[0]
lat1 = lonlist[answer-1]
lat2 = lonlist[0]
dlon = lon2 - lon1
dlat = lat2 - lat1
#Haversine formula
a = math.sin(dlat / 2)**2 + math.cos(lat1) * math.cos(lat2) * math.sin(dlon / 2)**2
c = 2 * math.atan2(math.sqrt(a), math.sqrt(1 - a))
distance = R * c
print('The distance between coordinate %s and coordinate %s is %s' %(i, 0, round(distance,4) ))
else:
lon1 = latlist[i]
lon2 = latlist[i+1]
lat1 = lonlist[i]
lat2 = lonlist[i+1]
dlon = lon2 - lon1
dlat = lat2 - lat1
#Haversine formula
a = math.sin(dlat / 2)**2 + math.cos(lat1) * math.cos(lat2) * math.sin(dlon / 2)**2
c = 2 * math.atan2(math.sqrt(a), math.sqrt(1 - a))
distance = R * c
print('The distance between coordinate %s and coordinate %s is %s ' %(i, i+1, round(distance,4) ))
distancelist.append(distance)
origindist = []
for i in range(1,answer):
lon1 = latlist[0]
lon2 = latlist[i]
lat1 = lonlist[0]
lat2 = lonlist[i]
dlon = lon2 - lon1
dlat = lat2 - lat1
#Haversine formula
a = math.sin(dlat / 2)**2 + math.cos(lat1) * math.cos(lat2) * math.sin(dlon / 2)**2
c = 2 * math.atan2(math.sqrt(a), math.sqrt(1 - a))
distance = R * c
origindist.append(distance)
print(lonlist)
print(latlist)
plt.plot(lonlist[0],latlist[0], 'bo')
plt.plot(lonlist[1],latlist[1], 'go')
plt.plot(lonlist[2],latlist[2], 'yo')
plt.plot(lonlist[3],latlist[3], 'ro')
def slope(x1,y1,x2,y2):
m = round( ((y2-y1)/(x2-x1)) ,6)
return m
mlist = []
for i in range(0, answer):
if i+1 == answer:
m = slope(lonlist[0],latlist[0],lonlist[answer-1],latlist[answer-1])
else:
m= slope(lonlist[i],latlist[i],lonlist[i+1],latlist[i+1])
mlist.append(m)
originslope = []
for i in range(1, answer):
m= slope(lonlist[0],latlist[0],lonlist[i],latlist[i])
print(m)
originslope.append(m)
mlist
#assuming the first lat and lon are 0, 0
# rise/run use slope, not angles
def coordinates_fuc(i,x,y,m,d):
print(i)
print(m)
angle = atan(m)
print(angle)
if i < 2:
if angle < 0:
angle = pi + angle
else:
if angle < 0:
angle = 2*pi + angle
x = d*cos(angle)
y = d*sin(angle)
print(x)
print(y)
print("\n")
return x,y
xlist = [0]
ylist = [0]
for i in range (0, answer-1):
x,y = coordinates_fuc(i,xlist[0],ylist[0],originslope[i],origindist[i])
xlist.append(x)
ylist.append(y)
print(xlist)
print(ylist)
plt.plot(xlist[0],ylist[0], 'bo')
plt.plot(xlist[1],ylist[1], 'go')
plt.plot(xlist[2],ylist[2], 'yo')
plt.plot(xlist[3],ylist[3], 'ro')
xmin = min(xlist)
ymin = min(ylist)
for i in range(0, answer):
xlist[i] = xlist[i] - xmin
ylist[i] = ylist[i] - ymin
plt.plot(xlist[0],ylist[0], 'bo')
plt.plot(xlist[1],ylist[1], 'go')
plt.plot(xlist[2],ylist[2], 'yo')
plt.plot(xlist[3],ylist[3], 'ro')
def cvalue(x1,y1,x2,y2,m):
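# both points lie on y = m*x + c, so c = y - m*x; computing it from each point and comparing them is a consistency check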
c1 = round((-m*x1 + y1),7)
c2 = round((-m*x2 + y2),7)
print(c1)
print(c2)
if c1 != c2:
print('Error')
return c1
clist = []
for i in range(0, answer):
if i+1 == answer:
c = cvalue(xlist[0],ylist[0],xlist[answer-1],ylist[answer-1],mlist[answer-1])
else:
c = cvalue(xlist[i],ylist[i],xlist[i+1],ylist[i+1],mlist[i])
clist.append(c)
print(clist)
xmax = max(xlist)
ymax = max(ylist)
#SPLIT PROBLEM IN TWO
xlist.append( (xlist[1] + xlist[0])/2 )
ylist.append( (ylist[1] + ylist[0])/2 )
xlist.append( (xlist[2] + xlist[3])/2 )
ylist.append( (ylist[2] + ylist[3])/2 )
#SPLIT PROBLEM IN TWO
xlist.append( (xlist[0] + xlist[3])/2 )
ylist.append( (ylist[0] + ylist[3])/2 )
xlist.append( (xlist[1] + xlist[2])/2 )
ylist.append( (ylist[1] + ylist[2])/2 )
#SPLIT PROBLEM IN TWO
xlist.append( (xlist[4] + xlist[5])/2 )
ylist.append( (ylist[4] + ylist[5])/2 )
xlist.append( (xlist[6] + xlist[7])/2 )
ylist.append( (ylist[6] + ylist[7])/2 )
print(xlist)
print(ylist)
m= slope(xlist[4],ylist[4],xlist[5],ylist[5])
mlist.append(m)
c = cvalue(xlist[4],ylist[4],xlist[5],ylist[5],mlist[4])
clist.append(c)
m= slope(xlist[6],ylist[6],xlist[7],ylist[7])
mlist.append(m)
c = cvalue(xlist[6],ylist[6],xlist[7],ylist[7],mlist[5])
clist.append(c)
print(clist)
print(mlist)
plt.plot(xlist[0],ylist[0], 'bo')
plt.plot(xlist[1],ylist[1], 'go')
plt.plot(xlist[2],ylist[2], 'yo')
plt.plot(xlist[3],ylist[3], 'ro')
plt.plot(xlist[4],ylist[4], 'mo')
plt.plot(xlist[5],ylist[5], 'co')
plt.plot(xlist[6],ylist[6], 'mo')
plt.plot(xlist[7],ylist[7], 'co')
plt.plot(xlist[8],ylist[8], 'ro')
plt.plot(xlist[9],ylist[9], 'bo')<jupyter_output><empty_output><jupyter_text>### The pyomo code is taken from Soroudi (2020)
(A. Soroudi, “Lecture 14 - Heuristic Methods,” University College Dublin, Dublin, 2020.)<jupyter_code>#Section A
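# turbine-siting model (maximin spacing): maximise r subject to
#   (x_i - x_j)^2 + (y_i - y_j)^2 >= r^2 for every pair i < j,
# while the half-plane constraints below keep every point inside section A;
# the rejection-sampled points in xinitiallist/yinitiallist give the solver feasible starting values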
xinitiallist = []
yinitiallist = []
for j in range(0,2000): #more the better,
x = random.randint(0, ceil(xmax)) # random generation
y = random.randint(0, ceil(ymax))
if (y<=mlist[0]*x+clist[0]) & (y<=mlist[1]*x+clist[1]) & (y<=mlist[5]*x+clist[5]) & (y>=mlist[4]*x+clist[4]):
xinitiallist.append(x)
yinitiallist.append(y)
print(len(xinitiallist))
model = AbstractModel()
model.N = Param(mutable=True)
model.i = RangeSet(1, model.N)
model.j = Set(initialize=model.i)
# model.L=Param(initialize=100000,mutable=True)
# def initval(model,i):
# return random.uniform(0,model.L)
model.L=Param(initialize=100000,mutable=True)
def initvalx(model,i):
k = randrange(0,len(xinitiallist))
return xinitiallist[k]
def initvaly(model,i):
k = randrange(0,len(yinitiallist))
return yinitiallist[k]
model.x = Var(model.i , bounds=(0,xmax+1), within=NonNegativeReals, initialize=initvalx)
model.y = Var(model.i ,bounds=(0,ymax+1) , within=NonNegativeReals, initialize=initvaly)
model.r = Var(bounds=(0,xmax), within=NonNegativeReals)
def C1_rule(model,i,j):
if i<j:
return (model.x[i]-model.x[j])**2+(model.y[i]-model.y[j])**2 >= model.r**2
else:
return Constraint.Skip
model.C = Constraint(model.i,model.j, rule=C1_rule)
def C2rule(model,i):
return model.y[i] <= mlist[0]*model.x[i] + clist[0]
model.C2 = Constraint(model.i,rule=C2rule )
def C3rule(model,i):
return model.y[i] <= mlist[1]*model.x[i] + clist[1]
model.C3 = Constraint(model.i, rule=C3rule )
def C4rule(model,i):
return model.y[i] <= mlist[5]*model.x[i] + clist[5]
model.C4 = Constraint(model.i, rule=C4rule )
def C5rule(model,i):
return model.y[i] >= mlist[4]*model.x[i] + clist[4]
model.C5 = Constraint(model.i, rule=C5rule )
# y=ax+b y=1x+3
model.obj = Objective(expr=model.r, sense=maximize)
#opt = SolverFactory('ipopt')
# Section A
plt.plot(xlist[1],ylist[1], 'go')
plt.plot(xlist[4],ylist[4], 'mo')
plt.plot(xlist[7],ylist[7], 'co')
plt.plot(xlist[8],ylist[8], 'bo')
print("Analysing wind turbine %s" %wt)
print("%s has a radius of %s" %(wt,radius))
print('Min distance should be %s km' %(round(min_distance,3)))
print('\n')
model.N=38 # with N=38 the solved minimum spacing is 1.812, which exceeds the required distance
instance = model.create_instance()
#results = opt.solve(instance) # solves and updates instance
results =SolverFactory('multistart').solve(instance)
X=[value(instance.x[i]) for i in instance.i]
Y=[value(instance.y[i]) for i in instance.i]
plt.figure(figsize =(20,10))
plt.scatter( X,Y,s=25,color='blue')
plt.xlabel('X',size=20)
plt.ylabel('Y',size=20)
plt.title('Wind farms for N=%s' %( (str(value(instance.N)))), size=20)
print('Min distance is ',round(value(instance.r),3))
plt.show()
#Section B
xinitiallist = []
yinitiallist = []
for j in range(0,2000): #more the better,
x = random.randint(0, ceil(xmax)) # random generation
y = random.randint(0, ceil(ymax))
if (y>=mlist[5]*x+clist[5]) & (y<=mlist[1]*x+clist[1]) & (y<=mlist[2]*x+clist[2]) & (y>=mlist[4]*x+clist[4]):
xinitiallist.append(x)
yinitiallist.append(y)
print(len(xinitiallist))
model = AbstractModel()
model.N = Param(mutable=True)
model.i = RangeSet(1, model.N)
model.j = Set(initialize=model.i)
# model.L=Param(initialize=100000,mutable=True)
# def initval(model,i):
# return random.uniform(0,model.L)
model.L=Param(initialize=100000,mutable=True)
def initvalx(model,i):
k = randrange(0,len(xinitiallist))
return xinitiallist[k]
def initvaly(model,i):
k = randrange(0,len(yinitiallist))
return yinitiallist[k]
model.x = Var(model.i , bounds=(0,xmax+1), within=NonNegativeReals, initialize=initvalx)
model.y = Var(model.i ,bounds=(0,ymax+1) , within=NonNegativeReals, initialize=initvaly)
model.r = Var(bounds=(0,xmax), within=NonNegativeReals)
def C1_rule(model,i,j):
if i<j:
return (model.x[i]-model.x[j])**2+(model.y[i]-model.y[j])**2 >= model.r**2
else:
return Constraint.Skip
model.C = Constraint(model.i,model.j, rule=C1_rule)
def C2rule(model,i):
return model.y[i] >= mlist[5]*model.x[i] + clist[5]
model.C2 = Constraint(model.i,rule=C2rule )
def C3rule(model,i):
return model.y[i] <= mlist[1]*model.x[i] + clist[1]
model.C3 = Constraint(model.i, rule=C3rule )
def C4rule(model,i):
return model.y[i] <= mlist[2]*model.x[i] + clist[2]
model.C4 = Constraint(model.i, rule=C4rule )
def C5rule(model,i):
return model.y[i] >= mlist[4]*model.x[i] + clist[4]
model.C5 = Constraint(model.i, rule=C5rule )
# y=ax+b y=1x+3
model.obj = Objective(expr=model.r, sense=maximize)
#opt = SolverFactory('ipopt')
# Section B
plt.plot(xlist[2],ylist[2], 'yo')
plt.plot(xlist[5],ylist[5], 'co')
plt.plot(xlist[7],ylist[7], 'co')
plt.plot(xlist[8],ylist[8], 'bo')
print("Analysing wind turbine %s" %wt)
print("%s has a radius of %s" %(wt,radius))
print('Min distance should be %s metres' %(round(min_distance,3)))
print('\n')
model.N= 35 # 35 = 1.641
instance = model.create_instance()
#results = opt.solve(instance) # solves and updates instance
results =SolverFactory('multistart').solve(instance)
X=[value(instance.x[i]) for i in instance.i]
Y=[value(instance.y[i]) for i in instance.i]
plt.figure(figsize =(20,10))
plt.scatter( X,Y,s=25,color='blue')
plt.xlabel('X',size=20)
plt.ylabel('Y',size=20)
plt.title('Wind farms for N=%s' %( (str(value(instance.N)))), size=20)
print('Min distance is ',round(value(instance.r),3))
plt.show()
#Section C
xinitiallist = []
yinitiallist = []
for j in range(0,2000): #more the better,
x = random.randint(0, ceil(xmax)) # random generation
y = random.randint(0, ceil(ymax))
if (y>=mlist[5]*x+clist[5]) & (y<=mlist[4]*x+clist[4]) & (y<=mlist[2]*x+clist[2]) & (y>=mlist[3]*x+clist[3]):
xinitiallist.append(x)
yinitiallist.append(y)
print(len(xinitiallist))
model = AbstractModel()
model.N = Param(mutable=True)
model.i = RangeSet(1, model.N)
model.j = Set(initialize=model.i)
# model.L=Param(initialize=100000,mutable=True)
# def initval(model,i):
# return random.uniform(0,model.L)
model.L=Param(initialize=100000,mutable=True)
def initvalx(model,i):
k = randrange(0,len(xinitiallist))
return xinitiallist[k]
def initvaly(model,i):
k = randrange(0,len(yinitiallist))
return yinitiallist[k]
model.x = Var(model.i , bounds=(0,xmax+1), within=NonNegativeReals, initialize=initvalx)
model.y = Var(model.i ,bounds=(0,ymax+1) , within=NonNegativeReals, initialize=initvaly)
model.r = Var(bounds=(0,xmax), within=NonNegativeReals)
def C1_rule(model,i,j):
if i<j:
return (model.x[i]-model.x[j])**2+(model.y[i]-model.y[j])**2 >= model.r**2
else:
return Constraint.Skip
model.C = Constraint(model.i,model.j, rule=C1_rule)
def C2rule(model,i):
return model.y[i] >= mlist[5]*model.x[i] + clist[5]
model.C2 = Constraint(model.i,rule=C2rule )
def C3rule(model,i):
return model.y[i] <= mlist[4]*model.x[i] + clist[4]
model.C3 = Constraint(model.i, rule=C3rule )
def C4rule(model,i):
return model.y[i] <= mlist[2]*model.x[i] + clist[2]
model.C4 = Constraint(model.i, rule=C4rule )
def C5rule(model,i):
return model.y[i] >= mlist[3]*model.x[i] + clist[3]
model.C5 = Constraint(model.i, rule=C5rule )
# y=ax+b y=1x+3
model.obj = Objective(expr=model.r, sense=maximize)
#opt = SolverFactory('ipopt')
# Section C
plt.plot(xlist[3],ylist[3], 'ro')
plt.plot(xlist[5],ylist[5], 'co')
plt.plot(xlist[6],ylist[6], 'mo')
plt.plot(xlist[8],ylist[8], 'bo')
print("Analysing wind turbine %s" %wt)
print("%s has a radius of %s" %(wt,radius))
print('Min distance should be %s metres' %(round(min_distance,3)))
print('\n')
model.N= 34 #36 = 1.913 (exceeded) #34 = 1.979
instance = model.create_instance()
#results = opt.solve(instance) # solves and updates instance
results =SolverFactory('multistart').solve(instance)
X=[value(instance.x[i]) for i in instance.i]
Y=[value(instance.y[i]) for i in instance.i]
plt.figure(figsize =(20,10))
plt.scatter( X,Y,s=25,color='blue')
plt.xlabel('X',size=20)
plt.ylabel('Y',size=20)
plt.title('Wind farms for N=%s' %( (str(value(instance.N)))), size=20)
print('Min distance is ',round(value(instance.r),3))
plt.show()
#Section D
xinitiallist = []
yinitiallist = []
for j in range(0,20000): #more the better,
x = random.randint(0, ceil(xmax)) # random generation
y = random.randint(0, ceil(ymax))
if (y<=mlist[0]*x+clist[0]) & (y<=mlist[4]*x+clist[4]) & (y<=mlist[5]*x+clist[5]) & (y>=mlist[3]*x+clist[3]):
xinitiallist.append(x)
yinitiallist.append(y)
print(len(xinitiallist))
model = AbstractModel()
model.N = Param(mutable=True)
model.i = RangeSet(1, model.N)
model.j = Set(initialize=model.i)
# model.L=Param(initialize=100000,mutable=True)
# def initval(model,i):
# return random.uniform(0,model.L)
model.L=Param(initialize=100000,mutable=True)
def initvalx(model,i):
k = randrange(0,len(xinitiallist))
return xinitiallist[k]
def initvaly(model,i):
k = randrange(0,len(yinitiallist))
return yinitiallist[k]
model.x = Var(model.i , bounds=(0,xmax+1), within=NonNegativeReals, initialize=initvalx)
model.y = Var(model.i ,bounds=(0,ymax+1) , within=NonNegativeReals, initialize=initvaly)
model.r = Var(bounds=(0,xmax), within=NonNegativeReals)
def C1_rule(model,i,j):
if i<j:
return (model.x[i]-model.x[j])**2+(model.y[i]-model.y[j])**2 >= model.r**2
else:
return Constraint.Skip
model.C = Constraint(model.i,model.j, rule=C1_rule)
def C2rule(model,i):
return model.y[i] <= mlist[0]*model.x[i] + clist[0]
model.C2 = Constraint(model.i,rule=C2rule )
def C3rule(model,i):
return model.y[i] <= mlist[4]*model.x[i] + clist[4]
model.C3 = Constraint(model.i, rule=C3rule )
def C4rule(model,i):
return model.y[i] <= mlist[5]*model.x[i] + clist[5]
model.C4 = Constraint(model.i, rule=C4rule )
def C5rule(model,i):
return model.y[i] >= mlist[3]*model.x[i] + clist[3]
model.C5 = Constraint(model.i, rule=C5rule )
# y=ax+b y=1x+3
model.obj = Objective(expr=model.r, sense=maximize)
#opt = SolverFactory('ipopt')
# Section D
plt.plot(xlist[0],ylist[0], 'bo')
plt.plot(xlist[4],ylist[4], 'mo')
plt.plot(xlist[6],ylist[6], 'mo')
plt.plot(xlist[8],ylist[8], 'bo')
print("Analysing wind turbine %s" %wt)
print("%s has a radius of %s" %(wt,radius))
print('Min distance should be %s metres' %(round(min_distance,3)))
print('\n')
model.N= 40 #40 = 2.059 #41 = 2.01, 48 infeasible
instance = model.create_instance()
#results = opt.solve(instance) # solves and updates instance
results =SolverFactory('multistart').solve(instance)
X=[value(instance.x[i]) for i in instance.i]
Y=[value(instance.y[i]) for i in instance.i]
plt.figure(figsize =(20,10))
plt.scatter( X,Y,s=25,color='blue')
plt.xlabel('X',size=20)
plt.ylabel('Y',size=20)
plt.title('Wind farms for N=%s' %( (str(value(instance.N)))), size=20)
print('Min distance is ',round(value(instance.r),3))
plt.show()
#instance.C.pprint()<jupyter_output><empty_output>
no_license | /Section 2 - Optimising Wind Farm Spacing Rosslare Harbour to Kilmore Quay.ipynb | 16431496/optimisation_code | 2
<jupyter_start><jupyter_text>## Task 4: **Healthcare Sector Analysis**
Discovering facts from data in the healthcare sector<jupyter_code>import os
import re
import sys
import json
import nltk
import pandas as pd
import numpy as np
from scipy.stats import norm, ttest_ind
from collections import defaultdict
import matplotlib.pyplot as plt
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error, r2_score
from sklearn.preprocessing import PolynomialFeatures
from sklearn.svm import SVR
# from google.colab import drive
# drive.mount('/content/drive')
def read_data(root_dir, data_folder, csv_file):
if data_folder is not None:
print("\nReading data from " + csv_file)
file_dir = os.path.join(root_dir, data_folder, csv_file)
return_df = pd.read_csv(file_dir)
return return_df
else:
print("\nReading data from " + csv_file)
file_dir = os.path.join(root_dir, csv_file)
return_df = pd.read_csv(file_dir)
return return_df
def merge_dataframe(dst_dataframe, new_part):
no_nan_new_part = new_part.copy().fillna('')
if dst_dataframe is None:
dst_dataframe = no_nan_new_part.copy()
return dst_dataframe
else:
dst_dataframe = pd.concat([dst_dataframe, no_nan_new_part])
return dst_dataframe
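# Note (added): merge_dataframe fills NaN with '' before concatenating, so numeric columns that
# contain missing values end up holding empty strings (object dtype) in the merged frame.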
'''
Basic environ params
'''
root_dir = "\\".join(os.path.abspath('').split('\\')[:-1])
# root_dir = '/content/drive/My Drive/Penn Inequality Project'
task_folder = "task4"
'''
Main adjustable params
'''
# Flags controlling which analysis sections are re-run; each analysis cell below checks one of these
company_analyzed = False
reference_level_analyzed = False
incumbent_point_analyzed = False
skill_level_analyzed = False
min_surveying_years = 3
min_level_counts = 5
max_level_counts = 20
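# (Added notes on the thresholds above)
#   min_surveying_years: companies surveyed for no more than this many years are dropped in section 3
#   min_level_counts / max_level_counts: bounds on how many distinct reference levels a company
#   must have to be kept in section 4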
print("Read csv file from path " + root_dir)
data_df = read_data(root_dir, None, 'reclassified_all_data.csv')
data_df.columns<jupyter_output><empty_output><jupyter_text>## Company Analysis### 1. Raw analysis of all companies in healthcare sector<jupyter_code>if not company_analyzed:
# Mean
data_df = data_df[['KF_ID', 'CalendarYear', 'SectorName', 'Base Salary', 'RegionName', 'OwnershipTypeDesc', 'NumOfEmpDesc', 'ReferenceLevelNum', 'IncumbentPointCount', 'JobName', 'RegionName']]
healthcare_df = data_df[data_df['SectorName']=="healthcare"]
healthcare_df = healthcare_df[healthcare_df['ReferenceLevelNum'] != 99]
print("A total of " + str(healthcare_df.shape[0]) + " data pieces are related with healthcare industry\n")
healthcare_mean_salary = healthcare_df['Base Salary'].mean()
print("The mean salary of the healthcare industry is " + str(healthcare_mean_salary) + "\n")
annual_mean = []
for year in range(2008, 2020):
year_healthcare_df = healthcare_df[healthcare_df['CalendarYear'] == year]
company_year_mean = year_healthcare_df[['KF_ID', 'Base Salary']].groupby(by = ['KF_ID']).mean()
annual_mean_salary = company_year_mean['Base Salary'].mean()
annual_mean.append(annual_mean_salary)
fig1 = plt.figure()
plt.title("The annual mean salary of healthcare industry")
plt.ylabel("Mean salary value in dollar")
plt.xlabel("Year")
plt.plot(range(2008, 2020), annual_mean)
plt.show()
# Median
healthcare_median_salary = healthcare_df['Base Salary'].median()
print("\nThe median salary of the healthcare industry is " + str(healthcare_median_salary) + "\n")
annual_median = []
for year in range(2008, 2020):
year_healthcare_df = healthcare_df[healthcare_df['CalendarYear'] == year]
company_year_median = year_healthcare_df[['KF_ID', 'Base Salary']].groupby(by = ['KF_ID']).median()
annual_median_salary = company_year_median['Base Salary'].median()
annual_median.append(annual_median_salary)
fig2 = plt.figure()
plt.title("The annual median salary of healthcare industry")
plt.ylabel("Median salary value in dollar")
plt.xlabel("Year")
plt.plot(range(2008, 2020), annual_median)
plt.show()
print("\nMean minus Median plot")
fig3 = plt.figure()
plt.title("Difference between median & mean in healthcare industry")
plt.ylabel("Difference of median & mean value in dollar")
plt.xlabel("Year")
plt.plot(range(2008, 2020), np.array(annual_mean) - np.array(annual_median))
plt.show()<jupyter_output>A total of 4998321 data pieces are related with healthcare industry
The mean salary of the healthcare industry is 64665.53658618564
<jupyter_text>### 2. Analysis on companies<jupyter_code>if not company_analyzed:
print("Companies in healthcare industries include: \n")
print(sorted(pd.unique(healthcare_df['KF_ID'])))
print("\nA total of " + str(len(sorted(pd.unique(healthcare_df['KF_ID'])))) + " companies")
# Year distribution
company_year_dict = defaultdict(list)
year_company_dict = defaultdict(list)
for company in sorted(pd.unique(healthcare_df['KF_ID'])):
company_healthcare_df = healthcare_df[healthcare_df['KF_ID'] == company]
for year in range(2008, 2020):
if company_healthcare_df[company_healthcare_df['CalendarYear'] == year].shape[0] > 0:
company_year_dict[int(company)].append(year)
with open(os.path.join(root_dir, "task4", "Healthcare_Companies_Distribution_Across_Years.json"), 'w') as output_file:
print("\nSaving the distribtion of companies across the years from 2008 to 2019 in json file")
json.dump(company_year_dict, output_file)
print("\nOf all healthcare companies, counting the number of years included in this survey\n")
len_distribution = defaultdict(int)
for company in company_year_dict:
len_distribution[len(company_year_dict[company])] += 1
fig1 = plt.figure()
plt.title("Healthcare companies surveyed, number of years")
plt.xlabel("Years surveyed")
plt.ylabel("Number of companies")
plt.bar(list(len_distribution.keys()), list(len_distribution.values()))
plt.show()
print("\nDistribution of companies across the years")
count_companies = defaultdict(int)
for year in range(2008, 2020):
year_healthcare_df = healthcare_df[healthcare_df['CalendarYear'] == year]
for company in pd.unique(year_healthcare_df['KF_ID']):
count_companies[year] += 1
fig2 = plt.figure()
plt.title("Healthcare companies surveyed across the years")
plt.xlabel("Year")
plt.ylabel("Number of companies")
plt.bar(list(count_companies.keys()), list(count_companies.values()))
plt.show()
# Reference level distribution
company_level_dict = defaultdict(list)
for company in sorted(pd.unique(healthcare_df['KF_ID'])):
company_healthcare_df = healthcare_df[healthcare_df['KF_ID'] == company]
for level in pd.unique(company_healthcare_df['ReferenceLevelNum']):
company_level_dict[int(company)].append(int(level))
with open(os.path.join(root_dir, "task4", "Healthcare_Companies_Distribution_Across_ReferLevel.json"), 'w') as output_file:
print("\nSaving the distribtion of companies across reference levels in json file")
json.dump(company_level_dict, output_file)
print("\nOf all healthcare companies, counting the number of reference levels included in this survey\n")
len_distribution = defaultdict(int)
for company in company_level_dict:
len_distribution[len(company_level_dict[company])] += 1
fig3 = plt.figure()
plt.title("Healthcare companies surveyed, number of reference levels")
plt.xlabel("Count on reference levels surveyed")
plt.ylabel("Number of companies")
plt.bar(list(len_distribution.keys()), list(len_distribution.values()))
plt.show()
# Incumbent point distribution
company_point_dict = defaultdict(list)
for company in sorted(pd.unique(healthcare_df['KF_ID'])):
company_healthcare_df = healthcare_df[healthcare_df['KF_ID'] == company]
for point in pd.unique(company_healthcare_df['IncumbentPointCount']):
            company_point_dict[int(company)].append(int(point))
with open(os.path.join(root_dir, "task4", "Healthcare_Companies_Distribution_Across_IncumbPoint.json"), 'w') as output_file:
print("\nSaving the distribtion of companies across incumbent points in json file")
json.dump(company_point_dict, output_file)
print("\nOf all healthcare companies, counting the number of incumbent points included in this survey\n")
len_distribution = defaultdict(int)
for company in company_level_dict:
len_distribution[len(company_point_dict[company])] += 1
fig3 = plt.figure()
plt.title("Healthcare companies surveyed, number of incumbent points")
plt.xlabel("Count on incmubent points surveyed")
plt.ylabel("Number of companies")
plt.bar(list(len_distribution.keys()), list(len_distribution.values()))
plt.show()
<jupyter_output>Companies in healthcare industries include:
[19, 20, 22, 25, 26, 27, 30, 47, 48, 52, 53, 58, 60, 68, 69, 83, 93, 94, 97, 99, 100, 106, 109, 111, 113, 135, 139, 140, 145, 154, 155, 160, 162, 163, 165, 175, 188, 190, 191, 202, 203, 205, 207, 208, 210, 211, 212, 213, 214, 216, 217, 218, 221, 224, 228, 232, 236, 245, 246, 248, 249, 252, 254, 255, 256, 257, 258, 259, 260, 262, 263, 264, 275, 278, 290, 292, 296, 299, 301, 307, 311, 334, 344, 345, 347, 349, 351, 353, 355, 359, 368, 371, 372, 373, 381, 383, 385, 388, 391, 401, 403, 404, 420, 421, 422, 423, 424, 428, 429, 431, 439, 440, 449, 454, 455, 467, 483, 484, 486, 491, 493, 494, 506, 510, 519, 520, 521, 522, 531, 533, 538, 539, 544, 548, 559, 565, 569, 578, 579, 580, 581, 595, 624, 644, 650, 653, 657, 660, 674, 681, 687, 693, 709, 723, 724, 727, 742, 743, 746, 750, 755, 764, 773, 779, 780, 781, 784, 786, 787, 793, 814, 815, 819, 821, 825, 832, 833, 834, 836, 843, 857, 858, 859, 860, 862, 863, 864, 865, 866, 867, 868, 869, 870, 871, 872[...]<jupyter_text>### 3. Improved analysis by deleting companies that are only included in the survey for 1-2 years<jupyter_code># The following analysis gets rid of companies only taking 1 or 2 years of survey
if not company_analyzed:
new_healthcare_df = None
for company in company_year_dict:
if len(company_year_dict[company]) > min_surveying_years:
new_healthcare_df = merge_dataframe(new_healthcare_df, healthcare_df[healthcare_df['KF_ID'] == company])
print("Of companies over " + str(min_surveying_years) + " years of survey\n")
print("A total of " + str(new_healthcare_df.shape[0]) + " data pieces are related with healthcare industry\n")
print("\nDistribution of companies across the years after improvement\n")
count_companies = defaultdict(int)
for year in range(2008, 2020):
year_healthcare_df = new_healthcare_df[new_healthcare_df['CalendarYear'] == year]
for company in pd.unique(year_healthcare_df['KF_ID']):
count_companies[year] += 1
fig0 = plt.figure()
plt.title("Healthcare companies surveyed across the years")
plt.xlabel("Year")
plt.ylabel("Number of companies")
plt.bar(list(count_companies.keys()), list(count_companies.values()))
plt.show()
# Mean
healthcare_mean_salary = new_healthcare_df['Base Salary'].mean()
print("The mean salary of the healthcare industry is " + str(healthcare_mean_salary) + "\n")
annual_mean = []
for year in range(2008, 2020):
year_healthcare_df = new_healthcare_df[new_healthcare_df['CalendarYear'] == year]
company_year_mean = year_healthcare_df[['KF_ID', 'Base Salary']].groupby(by = ['KF_ID']).mean()
annual_mean_salary = company_year_mean['Base Salary'].mean()
annual_mean.append(annual_mean_salary)
fig1 = plt.figure()
plt.title("The annual mean salary of healthcare industry")
plt.ylabel("Mean salary value in dollar")
plt.xlabel("Year")
plt.plot(range(2008, 2020), annual_mean)
plt.show()
# Median
healthcare_median_salary = new_healthcare_df['Base Salary'].median()
print("\nThe median salary of the healthcare industry is " + str(healthcare_median_salary) + "\n")
annual_median = []
for year in range(2008, 2020):
year_healthcare_df = new_healthcare_df[new_healthcare_df['CalendarYear'] == year]
company_year_median = year_healthcare_df[['KF_ID', 'Base Salary']].groupby(by = ['KF_ID']).median()
annual_median_salary = company_year_median['Base Salary'].median()
annual_median.append(annual_median_salary)
fig2 = plt.figure()
plt.title("The annual median salary of healthcare industry")
plt.ylabel("Median salary value in dollar")
plt.xlabel("Year")
plt.plot(range(2008, 2020), annual_median)
plt.show()
print("\nMean minus Median plot")
fig3 = plt.figure()
plt.title("Difference between median & mean in healthcare industry")
plt.ylabel("Difference of median & mean value in dollar")
plt.xlabel("Year")
plt.plot(range(2008, 2020), np.array(annual_mean) - np.array(annual_median))
plt.show()<jupyter_output>Of companies over 3 years of survey
A total of 4756131 data pieces are related with healthcare industry
Distribution of companies across the years after improvement
<jupyter_text>### Results comparison between raw & processed:
The deleted companies fall mainly in the years 2012-2016.
Not much difference is seen in 2012-2016, but the mean and median values change noticeably in 2017-2019.
### 4. Improved analysis by deleting companies that contain too few or too many reference levels<jupyter_code># The following analysis gets rid of companies having too few or too many reference levels
if not company_analyzed:
new_healthcare_df = None
for company in company_year_dict:
        if (len(company_level_dict[company]) > min_level_counts) and (len(company_level_dict[company]) < max_level_counts):
new_healthcare_df = merge_dataframe(new_healthcare_df, healthcare_df[healthcare_df['KF_ID'] == company])
print("Of companies over " + str(min_surveying_years) + " years of survey\n")
print("A total of " + str(new_healthcare_df.shape[0]) + " data pieces are related with healthcare industry\n")
# Mean
healthcare_mean_salary = new_healthcare_df['Base Salary'].mean()
print("The mean salary of the healthcare industry is " + str(healthcare_mean_salary) + "\n")
annual_mean = []
for year in range(2008, 2020):
year_healthcare_df = new_healthcare_df[new_healthcare_df['CalendarYear'] == year]
company_year_mean = year_healthcare_df[['KF_ID', 'Base Salary']].groupby(by = ['KF_ID']).mean()
annual_mean_salary = company_year_mean['Base Salary'].mean()
annual_mean.append(annual_mean_salary)
fig1 = plt.figure()
plt.title("The annual mean salary of healthcare industry")
plt.ylabel("Mean salary value in dollar")
plt.xlabel("Year")
plt.plot(range(2008, 2020), annual_mean)
plt.show()
# Median
healthcare_median_salary = new_healthcare_df['Base Salary'].median()
print("\nThe median salary of the healthcare industry is " + str(healthcare_median_salary) + "\n")
annual_median = []
for year in range(2008, 2020):
year_healthcare_df = new_healthcare_df[new_healthcare_df['CalendarYear'] == year]
company_year_median = year_healthcare_df[['KF_ID', 'Base Salary']].groupby(by = ['KF_ID']).median()
annual_median_salary = company_year_median['Base Salary'].median()
annual_median.append(annual_median_salary)
fig2 = plt.figure()
plt.title("The annual median salary of healthcare industry")
plt.ylabel("Median salary value in dollar")
plt.xlabel("Year")
plt.plot(range(2008, 2020), annual_median)
plt.show()
print("\nMean minus Median plot")
fig3 = plt.figure()
plt.title("Difference between median & mean in healthcare industry")
plt.ylabel("Difference of median & mean value in dollar")
plt.xlabel("Year")
plt.plot(range(2008, 2020), np.array(annual_mean) - np.array(annual_median))
plt.show()<jupyter_output>Of companies over 3 years of survey
A total of 4439477 data pieces are related with healthcare industry
The mean salary of the healthcare industry is 65123.505266949236
<jupyter_text>### Results comparison between raw & processed:
Not much difference is found in 2012-2016, but the mean and median values change noticeably in 2017-2019.
We can conclude that many companies only joined the survey in 2017-2019, which affects the quality of the data for those years.
This mirrors the change seen when companies with only 1-2 years of enrollment were removed.### 5. Relevance analysis among variables<jupyter_code># Only counting ReferenceLevelNum and IncumbentPointCount into regression
var_df = healthcare_df[['ReferenceLevelNum', 'IncumbentPointCount']]
salary_df = healthcare_df['Base Salary']
train_var, test_var, train_salary, test_salary = train_test_split(var_df, salary_df, test_size = 0.2)
linear_reg = LinearRegression()
linear_reg.fit(train_var, train_salary)
pred_salary = linear_reg.predict(test_var)
print("The coeficients of linear regression are " + str(linear_reg.coef_))
print("The intercept of linear regresssion is " + str(linear_reg.intercept_) + "\n")
print("The R2 score: ")
print("%.3f\n" % (r2_score(test_salary, pred_salary)))
print("The RMSE error of regression model is %.3f" % (np.sqrt(mean_squared_error(test_salary, pred_salary))))
# Counting ReferenceLevelNum, IncumbentPointCount, Num into regression
var_df = healthcare_df[['ReferenceLevelNum', 'IncumbentPointCount', 'NumOfEmpDesc', 'Base Salary']].dropna()
salary_df = var_df['Base Salary']
var_df = var_df[['ReferenceLevelNum', 'IncumbentPointCount', 'NumOfEmpDesc']]
train_var, test_var, train_salary, test_salary = train_test_split(var_df, salary_df, test_size = 0.2)
linear_reg = LinearRegression()
linear_reg.fit(train_var, train_salary)
pred_salary = linear_reg.predict(test_var)
print("The coeficients of linear regression are " + str(linear_reg.coef_))
print("The intercept of linear regresssion is " + str(linear_reg.intercept_) + "\n")
print("The R2 score: ")
print("%.3f\n" % (r2_score(test_salary, pred_salary)))
print("The RMSE error of regression model is %.3f" % (np.sqrt(mean_squared_error(test_salary, pred_salary))))<jupyter_output>The coeficients of linear regression are [ 641.50014431 159.16886281 -427.24991186]
The intercept of linear regresssion is 6193.7344144186645
The R2 score:
0.569
The RMSE error of regression model is 22538.013
<jupyter_text>### Results comparison before & after NumOfEmpDesc is added to the regression:
The R2 score and RMSE barely improve after the number-of-employees factor is added, so we can conclude that NumOfEmpDesc is not a primary factor affecting Base Salary.## Reference Level Analysis ### 1. Raw analysis of all levels in healthcare sector<jupyter_code>if not reference_level_analyzed:
level_count = defaultdict(int)
level_mean = defaultdict(float)
level_median = defaultdict(float)
for level in sorted(pd.unique(healthcare_df['ReferenceLevelNum'])):
level_healthcare_df = healthcare_df[healthcare_df['ReferenceLevelNum'] == level]
level_count[int(level)] = np.log10(level_healthcare_df.shape[0])
company_level_mean = level_healthcare_df[['KF_ID', 'Base Salary']].dropna().groupby(by = ['KF_ID']).mean()
level_mean[int(level)] = company_level_mean.mean()
company_level_median = level_healthcare_df[['KF_ID', 'Base Salary']].dropna().groupby(by = ['KF_ID']).median()
level_median[int(level)] = company_level_median.median()
print("\nDistribution across reference levels\n")
fig1 = plt.figure()
plt.title("Reference level distribution of healthcare industry")
plt.ylabel("Log scaled number of data points")
plt.xlabel("Reference Level")
plt.plot(sorted(pd.unique(healthcare_df['ReferenceLevelNum'])), list(level_count.values()))
plt.show()
print("\nMean value of salary across reference levels\n")
fig2 = plt.figure()
plt.title("Mean Salary across reference level of healthcare industry")
plt.ylabel("Mean salary of data points")
plt.xlabel("Reference Level")
plt.plot(sorted(pd.unique(healthcare_df['ReferenceLevelNum'])), list(level_mean.values()))
plt.show()
print("\nMedian values across reference levels\n")
fig3 = plt.figure()
plt.title("Median Salary across reference level of healthcare industry")
plt.ylabel("Median salary of data points")
plt.xlabel("Reference Level")
plt.plot(sorted(pd.unique(healthcare_df['ReferenceLevelNum'])), list(level_median.values()))
plt.show()
<jupyter_output>
Distribution across reference levels
<jupyter_text>### Results from reference level analysis:
Data points with a reference level greater than 33 are not accurate, and level 99 means "not applicable", so that level is dropped from every analysis in this notebook.### 2. Yearly analysis on reference level of healthcare industry<jupyter_code>if not reference_level_analyzed:
print("\nDistribution across reference levels\n")
fig1 = plt.figure(figsize=(8,4))
plt.title("Reference level distribution of healthcare industry")
plt.ylabel("Percentage of data points")
plt.xlabel("Reference Level")
for year in range(2008, 2020):
year_healthcare_df = healthcare_df[healthcare_df['CalendarYear'] == year]
year_level_count = defaultdict(lambda: defaultdict(int))
for level in sorted(pd.unique(healthcare_df['ReferenceLevelNum'])):
level_healthcare_df = year_healthcare_df[year_healthcare_df['ReferenceLevelNum'] == level]
year_level_count[int(level)] = level_healthcare_df.shape[0]
all_counts = sum(year_level_count.values())
for level in year_level_count:
year_level_count[level] /= all_counts
plt.plot(sorted(pd.unique(healthcare_df['ReferenceLevelNum'])), list(year_level_count.values()), label = str(year))
plt.legend()
plt.show()
print("\nMean value of salary across reference levels\n")
fig2 = plt.figure(figsize=(8,4))
plt.title("Mean Salary across reference level of healthcare industry")
plt.ylabel("Mean salary of data points")
plt.xlabel("Reference Level")
for year in range(2008, 2020):
year_healthcare_df = healthcare_df[healthcare_df['CalendarYear'] == year]
year_level_mean = defaultdict(lambda: defaultdict(float))
for level in sorted(pd.unique(healthcare_df['ReferenceLevelNum'])):
level_healthcare_df = year_healthcare_df[year_healthcare_df['ReferenceLevelNum'] == level]
company_level_mean = level_healthcare_df[['KF_ID', 'Base Salary']].dropna().groupby(by = ['KF_ID']).mean()
year_level_mean[int(level)] = company_level_mean.mean()
plt.plot(sorted(pd.unique(healthcare_df['ReferenceLevelNum'])), list(year_level_mean.values()), label = str(year))
plt.legend()
plt.show()
print("\nMedian values across reference levels\n")
fig3 = plt.figure(figsize=(8,4))
plt.title("Median Salary across reference level of healthcare industry")
plt.ylabel("Median salary of data points")
plt.xlabel("Reference Level")
for year in range(2008, 2020):
year_healthcare_df = healthcare_df[healthcare_df['CalendarYear'] == year]
year_level_median = defaultdict(lambda: defaultdict(float))
for level in sorted(pd.unique(healthcare_df['ReferenceLevelNum'])):
level_healthcare_df = year_healthcare_df[year_healthcare_df['ReferenceLevelNum'] == level]
company_level_median = level_healthcare_df[['KF_ID', 'Base Salary']].dropna().groupby(by = ['KF_ID']).median()
year_level_median[int(level)] = company_level_median.median()
plt.plot(sorted(pd.unique(healthcare_df['ReferenceLevelNum'])), list(year_level_median.values()), label = str(year))
plt.legend()
plt.show()
<jupyter_output>
Distribution across reference levels
<jupyter_text>### Results from yearly reference level analysis:
From 2017 onward, the mean and median values of levels 10-25 fall sharply, which explains the downward trend in the earlier company-year plots.
Data for 2012 might contain some errors, since the mean and median salaries at the high reference levels are noticeably lower than in other years.### 3. Regression analysis on reference level of healthcare industry<jupyter_code>if not reference_level_analyzed:
# Linear regression on reference level and salary
var_df = healthcare_df[['ReferenceLevelNum']]
salary_df = healthcare_df['Base Salary']
train_var, test_var, train_salary, test_salary = train_test_split(var_df, salary_df, test_size = 0.2)
linear_reg = LinearRegression()
linear_reg.fit(train_var, train_salary)
pred_salary = linear_reg.predict(test_var)
print("The coeficients of linear regression are " + str(linear_reg.coef_))
print("The intercept of linear regresssion is " + str(linear_reg.intercept_) + "\n")
print("The R2 score: ")
print("%.3f\n" % (r2_score(test_salary, pred_salary)))
print("The RMSE error of regression model is %.3f" % (np.sqrt(mean_squared_error(test_salary, pred_salary))))
if not reference_level_analyzed:
# Polynomial regression on reference level
r2_scores = []
RMSEs = []
min_deg = 1
max_deg = 10
for degree in range(min_deg, max_deg+1):
var_df = healthcare_df[['ReferenceLevelNum']]
salary_df = healthcare_df['Base Salary']
train_var, test_var, train_salary, test_salary = train_test_split(var_df, salary_df, test_size = 0.2)
linear_reg = LinearRegression()
train_var = PolynomialFeatures(degree).fit_transform(train_var)
test_var = PolynomialFeatures(degree).fit_transform(test_var)
linear_reg.fit(train_var, train_salary)
pred_salary = linear_reg.predict(test_var)
r2_scores.append(r2_score(test_salary, pred_salary))
RMSEs.append(np.sqrt(mean_squared_error(test_salary, pred_salary)))
fig1 = plt.figure()
plt.title("Regression analysis across reference level of healthcare industry")
plt.ylabel("R2 score")
plt.xlabel("Degree of regression")
plt.xticks(range(min_deg, max_deg+1, 1))
plt.plot(range(min_deg, max_deg+1), r2_scores)
plt.show()
fig2 = plt.figure()
plt.title("Regression analysis across reference level of healthcare industry")
plt.ylabel("RMSE")
plt.xlabel("Degree of regression")
plt.xticks(range(min_deg, max_deg+1, 1))
plt.plot(range(min_deg, max_deg+1), RMSEs)
plt.show()
if not reference_level_analyzed:
level_mean = defaultdict(float)
level_median = defaultdict(float)
for level in sorted(pd.unique(healthcare_df['ReferenceLevelNum'])):
level_healthcare_df = healthcare_df[healthcare_df['ReferenceLevelNum'] == level]
level_mean[int(level)] = level_healthcare_df['Base Salary'].dropna().mean()
level_median[int(level)] = level_healthcare_df['Base Salary'].dropna().median()
var_df = healthcare_df[['ReferenceLevelNum']]
salary_df = healthcare_df['Base Salary']
linear_reg = LinearRegression()
var_df = PolynomialFeatures(6).fit_transform(var_df)
linear_reg.fit(var_df, salary_df)
pred_result = []
for level in sorted(pd.unique(healthcare_df['ReferenceLevelNum'])):
pred = [[level]]
pred = PolynomialFeatures(6).fit_transform(pred)
pred_result.append(linear_reg.predict(pred))
fig1 = plt.figure()
plt.title("Approximation result of regression")
plt.ylabel("Salary of data points")
plt.xlabel("Reference Level")
plt.plot(sorted(pd.unique(healthcare_df['ReferenceLevelNum'])), list(level_mean.values()), label = "Mean")
plt.plot(sorted(pd.unique(healthcare_df['ReferenceLevelNum'])), list(level_median.values()), label = "Median")
plt.plot(sorted(pd.unique(healthcare_df['ReferenceLevelNum'])), pred_result, label = "Regression")
plt.legend()
plt.show()<jupyter_output><empty_output><jupyter_text>### Results from regression analysis:
The linear regression model is not complex enough to capture the salary trend across reference levels.
Balancing model complexity against running time, a polynomial model of degree 5 or 6 describes the trend between reference level and base salary best.
The regression predicts salaries at low and medium reference levels very well, but it is biased for high-level incomes because there are too few data points for high earners.## Incumbent Point Analysis ### 1. Raw analysis of all incumbent points in healthcare sector
<jupyter_code>if not incumbent_point_analyzed:
level_count = defaultdict(int)
level_mean = defaultdict(float)
level_median = defaultdict(float)
for level in sorted(pd.unique(healthcare_df['IncumbentPointCount'])):
level_healthcare_df = healthcare_df[healthcare_df['IncumbentPointCount'] == level]
level_count[int(level)] = level_healthcare_df.shape[0]
company_level_mean = level_healthcare_df[['KF_ID', 'Base Salary']].dropna().groupby(by = ['KF_ID']).mean()
level_mean[int(level)] = company_level_mean.mean()
company_level_median = level_healthcare_df[['KF_ID', 'Base Salary']].dropna().groupby(by = ['KF_ID']).median()
level_median[int(level)] = company_level_median.median()
all_counts = sum(level_count.values())
for level in level_count:
level_count[level] /= all_counts
print("\nDistribution across incumbent points\n")
fig1 = plt.figure(figsize=(20,5))
plt.title("Incumbent point distribution of healthcare industry")
plt.ylabel("Percentage of data points")
plt.xlabel("Incumbent Point")
plt.plot(sorted(pd.unique(healthcare_df['IncumbentPointCount'])), list(level_count.values()))
plt.show()
print("\nMean value of salary across incumbent points\n")
fig2 = plt.figure(figsize=(20,5))
plt.title("Mean Salary across incumbent points of healthcare industry")
plt.ylabel("Mean salary of data points")
plt.xlabel("Incumbent Point")
plt.plot(sorted(pd.unique(healthcare_df['IncumbentPointCount'])), list(level_mean.values()))
plt.show()
print("\nMedian values across incumbent points\n")
fig3 = plt.figure(figsize=(20,5))
plt.title("Median Salary across incumbent points of healthcare industry")
plt.ylabel("Median salary of data points")
plt.xlabel("Incumbent Point")
plt.plot(sorted(pd.unique(healthcare_df['IncumbentPointCount'])), list(level_median.values()))
plt.show()<jupyter_output>
Distribution across incumbent points
<jupyter_text>### Results from incumbent points analysis:
Similar to the conclusion for reference levels, incumbent points greater than 8000 are not accurate.### 2. Yearly analysis on incumbent points of healthcare industry<jupyter_code>if not incumbent_point_analyzed:
print("\nDistribution across incumbent points\n")
fig1 = plt.figure(figsize=(50,5))
plt.title("Incumbent point distribution of healthcare industry")
plt.ylabel("Percentage of data points")
plt.xlabel("Incumbent Point")
for year in range(2008, 2020):
year_healthcare_df = healthcare_df[healthcare_df['CalendarYear'] == year]
year_level_count = defaultdict(lambda: defaultdict(int))
for level in sorted(pd.unique(healthcare_df['IncumbentPointCount'])):
level_healthcare_df = year_healthcare_df[year_healthcare_df['IncumbentPointCount'] == level]
year_level_count[int(level)] = level_healthcare_df.shape[0]
        all_counts = sum(year_level_count.values())
for level in year_level_count:
year_level_count[level] /= all_counts
plt.plot(sorted(pd.unique(healthcare_df['IncumbentPointCount'])), list(year_level_count.values()), label = str(year))
plt.legend()
plt.show()
print("\nMean value of salary across incumbent points\n")
fig2 = plt.figure(figsize=(50,5))
plt.title("Mean Salary across incumbent point of healthcare industry")
plt.ylabel("Mean salary of data points")
plt.xlabel("Incumbent Point")
for year in range(2008, 2020):
year_healthcare_df = healthcare_df[healthcare_df['CalendarYear'] == year]
year_level_mean = defaultdict(lambda: defaultdict(float))
for level in sorted(pd.unique(healthcare_df['IncumbentPointCount'])):
level_healthcare_df = year_healthcare_df[year_healthcare_df['IncumbentPointCount'] == level]
company_level_mean = level_healthcare_df[['KF_ID', 'Base Salary']].dropna().groupby(by = ['KF_ID']).mean()
year_level_mean[int(level)] = company_level_mean.mean()
plt.plot(sorted(pd.unique(healthcare_df['IncumbentPointCount'])), list(year_level_mean.values()), label = str(year))
plt.legend()
plt.show()
print("\nMedian values across incumbent points\n")
fig3 = plt.figure(figsize=(50,5))
plt.title("Median Salary across incumbent point of healthcare industry")
plt.ylabel("Median salary of data points")
plt.xlabel("Incumbent Point")
for year in range(2008, 2020):
year_healthcare_df = healthcare_df[healthcare_df['CalendarYear'] == year]
year_level_median = defaultdict(lambda: defaultdict(float))
for level in sorted(pd.unique(healthcare_df['IncumbentPointCount'])):
level_healthcare_df = year_healthcare_df[year_healthcare_df['IncumbentPointCount'] == level]
company_level_mean = level_healthcare_df[['KF_ID', 'Base Salary']].dropna().groupby(by = ['KF_ID']).mean()
company_level_median = level_healthcare_df[['KF_ID', 'Base Salary']].dropna().groupby(by = ['KF_ID']).median()
year_level_median[int(level)] = company_level_median.median()
plt.plot(sorted(pd.unique(healthcare_df['IncumbentPointCount'])), list(year_level_median.values()), label = str(year))
plt.legend()
plt.show()<jupyter_output>
Distribution across incumbent points
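<jupyter_text>Illustrative aside (added, not from the original notebook): one way to make a yearly comparison workable is to group the fine-grained incumbent points into coarse bins first. The 500-point bin width and the 8000-point cap below are arbitrary choices for the sketch.<jupyter_code># Hedged sketch: median salary per coarse incumbent-point bin
binned = healthcare_df[['IncumbentPointCount', 'Base Salary']].dropna().copy()
binned['PointBin'] = pd.cut(binned['IncumbentPointCount'], bins=range(0, 8500, 500))
print(binned.groupby('PointBin')['Base Salary'].median().head(10))<jupyter_output><empty_output>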
<jupyter_text>### Results from yearly incumbent points analysis:
The incumbent point scale is very fine-grained, so each individual point value covers too few data points; this makes it unsuitable for a yearly analysis unless the points are first grouped into coarser bins (see the sketch above).### 3. Regression analysis on incumbent points of healthcare industry<jupyter_code>if not incumbent_point_analyzed:
# Linear regression on reference level and salary
var_df = healthcare_df[['IncumbentPointCount']]
salary_df = healthcare_df['Base Salary']
train_var, test_var, train_salary, test_salary = train_test_split(var_df, salary_df, test_size = 0.2)
linear_reg = LinearRegression()
linear_reg.fit(train_var, train_salary)
pred_salary = linear_reg.predict(test_var)
print("The coeficients of linear regression are " + str(linear_reg.coef_))
print("The intercept of linear regresssion is " + str(linear_reg.intercept_) + "\n")
print("The R2 score: ")
print("%.3f\n" % (r2_score(test_salary, pred_salary)))
print("The RMSE error of regression model is %.3f" % (np.sqrt(mean_squared_error(test_salary, pred_salary))))
if not incumbent_point_analyzed:
# Polynomial regression on reference level
r2_scores = []
RMSEs = []
min_deg = 1
max_deg = 10
for degree in range(min_deg, max_deg+1):
var_df = healthcare_df[['IncumbentPointCount']]
salary_df = healthcare_df['Base Salary']
train_var, test_var, train_salary, test_salary = train_test_split(var_df, salary_df, test_size = 0.2)
linear_reg = LinearRegression()
train_var = PolynomialFeatures(degree).fit_transform(train_var)
test_var = PolynomialFeatures(degree).fit_transform(test_var)
linear_reg.fit(train_var, train_salary)
pred_salary = linear_reg.predict(test_var)
r2_scores.append(r2_score(test_salary, pred_salary))
RMSEs.append(np.sqrt(mean_squared_error(test_salary, pred_salary)))
fig1 = plt.figure()
plt.title("Regression analysis across incumbent points of healthcare industry")
plt.ylabel("R2 score")
plt.xlabel("Degree of regression")
plt.xticks(range(min_deg, max_deg+1, 1))
plt.plot(range(min_deg, max_deg+1), r2_scores)
plt.show()
fig2 = plt.figure()
plt.title("Regression analysis across incumbent points of healthcare industry")
plt.ylabel("RMSE")
plt.xlabel("Degree of regression")
plt.xticks(range(min_deg, max_deg+1, 1))
plt.plot(range(min_deg, max_deg+1), RMSEs)
plt.show()
if not incumbent_point_analyzed:
level_mean = defaultdict(float)
level_median = defaultdict(float)
for level in sorted(pd.unique(healthcare_df['IncumbentPointCount'])):
level_healthcare_df = healthcare_df[healthcare_df['IncumbentPointCount'] == level]
company_level_mean = level_healthcare_df[['KF_ID', 'Base Salary']].dropna().groupby(by = ['KF_ID']).mean()
level_mean[int(level)] = company_level_mean.mean()
company_level_median = level_healthcare_df[['KF_ID', 'Base Salary']].dropna().groupby(by = ['KF_ID']).median()
level_median[int(level)] = company_level_median.median()
var_df = healthcare_df[['IncumbentPointCount']]
salary_df = healthcare_df['Base Salary']
linear_reg = LinearRegression()
var_df = PolynomialFeatures(5).fit_transform(var_df)
linear_reg.fit(var_df, salary_df)
pred_result = []
for level in sorted(pd.unique(healthcare_df['IncumbentPointCount'])):
pred = [[level]]
pred = PolynomialFeatures(5).fit_transform(pred)
pred_result.append(linear_reg.predict(pred))
fig1 = plt.figure()
plt.title("Approximation result of regression")
plt.ylabel("Salary of data points")
plt.xlabel("Incumbent Points")
plt.plot(sorted(pd.unique(healthcare_df['IncumbentPointCount'])), list(level_mean.values()), label = "Mean")
plt.plot(sorted(pd.unique(healthcare_df['IncumbentPointCount'])), list(level_median.values()), label = "Median")
plt.plot(sorted(pd.unique(healthcare_df['IncumbentPointCount'])), pred_result, label = "Regression")
plt.legend()
plt.show()<jupyter_output><empty_output><jupyter_text>### Results from regression analysis:
The linear regression model is not complex enough to capture the salary trend across incumbent points.
Balancing model complexity against running time, a polynomial model of degree 5 describes the trend between incumbent points and base salary best.
If the model degree is raised above 6, the result becomes much less reliable because of the higher model complexity and the uneven distribution of data points.
The regression predicts salaries at low and medium incumbent points very well; as with reference levels, it fails for high incumbent-point incomes because data points for those earners are too rare.## Total Regression Analysis ### 1. Polynomial regression of all variables<jupyter_code>var_df = healthcare_df[['ReferenceLevelNum', 'IncumbentPointCount']]
salary_df = healthcare_df['Base Salary']
train_var, test_var, train_salary, test_salary = train_test_split(var_df, salary_df, test_size = 0.2)
linear_reg = LinearRegression()
train_var = PolynomialFeatures(5).fit_transform(train_var)
test_var = PolynomialFeatures(5).fit_transform(test_var)
linear_reg.fit(train_var, train_salary)
pred_salary = linear_reg.predict(test_var)
r2_scores.append(r2_score(test_salary, pred_salary))
RMSEs.append(np.sqrt(mean_squared_error(test_salary, pred_salary)))
print("The coeficients of linear regression are " + str(linear_reg.coef_))
print("The intercept of linear regresssion is " + str(linear_reg.intercept_) + "\n")
print("The R2 score: ")
print("%.3f\n" % (r2_score(test_salary, pred_salary)))
print("The RMSE error of regression model is %.3f" % (np.sqrt(mean_squared_error(test_salary, pred_salary))))<jupyter_output>The coeficients of linear regression are [ 0.00000000e+00 -4.58595334e-01 -5.97359644e-01 -6.49593640e-01
-5.57689876e+00 -1.13404422e+00 -1.88283369e+00 -2.09317582e+01
8.12706609e-01 -3.92779306e-03 2.03037016e+01 9.13369895e-01
-4.90123411e-02 2.81888286e-04 -9.20061909e-08 -8.35500798e-01
-1.58848082e-03 7.13899677e-04 -4.67727359e-06 1.83201986e-09
1.88225722e-12]
The intercept of linear regresssion is 41279.602532898396
The R2 score:
0.589
The RMSE error of regression model is 21922.198
<jupyter_text>### Results from polynomial regression analysis:
The polynomial regression model does not noticeably improve performance compared to the linear model.### 2. Support Vector Regression (SVR) on all variables<jupyter_code>sv_regression = SVR(kernel='linear', degree = 1, max_iter = 10000)
var_df = healthcare_df[['ReferenceLevelNum', 'IncumbentPointCount']]
salary_df = healthcare_df['Base Salary']
train_var, test_var, train_salary, test_salary = train_test_split(var_df, salary_df, test_size = 0.2)
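# --- Illustrative aside (added, not part of the original analysis) ---
# SVR is slow on millions of rows, so one practical option is to grid-search a few hyper-parameters
# on a small random subsample first; the subsample size and the grid below are arbitrary choices.
from sklearn.model_selection import GridSearchCV
sample_idx = train_var.sample(n=5000, random_state=0).index
param_grid = {'C': [1, 10, 100], 'epsilon': [0.1, 1.0]}
svr_search = GridSearchCV(SVR(kernel='linear', max_iter=10000), param_grid, cv=3, scoring='r2', n_jobs=-1)
svr_search.fit(train_var.loc[sample_idx], train_salary.loc[sample_idx])
print("Best SVR parameters on the subsample: " + str(svr_search.best_params_))
# --- end of aside; the original full fit continues below ---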
sv_regression.fit(train_var, train_salary)
pred_salary = sv_regression.predict(test_var)
print("The coeficients of linear regression are " + str(linear_reg.coef_))
print("The intercept of linear regresssion is " + str(linear_reg.intercept_) + "\n")
print("The R2 score: ")
print("%.3f\n" % (r2_score(test_salary, pred_salary)))
print("The RMSE error of regression model is %.3f" % (np.sqrt(mean_squared_error(test_salary, pred_salary))))<jupyter_output><empty_output><jupyter_text>### Results from Supportive Vector regression analysis:
SVR is very slow to train on this data and often fails to converge within the iteration and tolerance limits, and its performance here is worse than the ordinary regression models.
It might perform better with tuned hyper-parameters; a small grid search over a subsample (sketched in the cell above) would be a natural next step.## Skill level Analysis<jupyter_code>if not skill_level_analyzed:
print("\nComparison of reference levels of total and 2017\n")
level_count = defaultdict(int)
level_mean = defaultdict(float)
level_median = defaultdict(float)
for level in sorted(pd.unique(healthcare_df['ReferenceLevelNum'])):
level_healthcare_df = healthcare_df[healthcare_df['ReferenceLevelNum'] == level]
level_count[int(level)] = level_healthcare_df.shape[0]
company_level_mean = level_healthcare_df[['KF_ID', 'Base Salary']].dropna().groupby(by = ['KF_ID']).mean()
level_mean[int(level)] = company_level_mean.mean()
company_level_median = level_healthcare_df[['KF_ID', 'Base Salary']].dropna().groupby(by = ['KF_ID']).median()
level_median[int(level)] = company_level_median.median()
counts = sum(level_count.values())
for level in level_count:
level_count[level] /= counts
year_healthcare_df = healthcare_df[healthcare_df['CalendarYear'] == 2017]
year_level_count = defaultdict(lambda: defaultdict(int))
year_level_mean = defaultdict(float)
year_level_median = defaultdict(float)
for level in sorted(pd.unique(healthcare_df['ReferenceLevelNum'])):
level_healthcare_df = year_healthcare_df[year_healthcare_df['ReferenceLevelNum'] == level]
year_level_count[int(level)] = level_healthcare_df.shape[0]
company_level_mean = level_healthcare_df[['KF_ID', 'Base Salary']].dropna().groupby(by = ['KF_ID']).mean()
year_level_mean[int(level)] = company_level_mean.mean()
company_level_median = level_healthcare_df[['KF_ID', 'Base Salary']].dropna().groupby(by = ['KF_ID']).median()
year_level_median[int(level)] = company_level_median.median()
counts = sum(year_level_count.values())
for level in year_level_count:
year_level_count[level] /= counts
fig1 = plt.figure()
plt.title("Reference level distribution of healthcare industry")
plt.ylabel("Percentage of total number")
plt.xlabel("Reference Level")
plt.plot(sorted(pd.unique(healthcare_df['ReferenceLevelNum'])), list(level_count.values()), label = 'mean')
plt.plot(sorted(pd.unique(healthcare_df['ReferenceLevelNum'])), list(year_level_count.values()), label = '2017')
plt.legend()
plt.show()
fig2 = plt.figure()
plt.title("Income distribution across reference level of healthcare industry")
plt.ylabel("Mean income")
plt.xlabel("Reference Level")
plt.plot(sorted(pd.unique(healthcare_df['ReferenceLevelNum'])), list(level_mean.values()), label = 'mean')
plt.plot(sorted(pd.unique(healthcare_df['ReferenceLevelNum'])), list(year_level_mean.values()), label = '2017')
plt.legend()
plt.show()
fig3 = plt.figure()
plt.title("Income distribution across reference level of healthcare industry")
plt.ylabel("Median income")
plt.xlabel("Reference Level")
plt.plot(sorted(pd.unique(healthcare_df['ReferenceLevelNum'])), list(level_median.values()), label = 'median')
plt.plot(sorted(pd.unique(healthcare_df['ReferenceLevelNum'])), list(year_level_median.values()), label = '2017')
plt.legend()
plt.show()
<jupyter_output>
Comparison of reference levels of total and 2017
<jupyter_text>### Results from distribution of total and 2017:
The distribution of earners across the different levels in 2017 is basically the same as the overall distribution.
Medium and high skill-level salaries rose in 2017; this may be caused by events affecting mid- and high-skill technicians in that year.
<jupyter_code>if not skill_level_analyzed:
year_companies = pd.unique(year_healthcare_df['KF_ID'])
company_level_median = defaultdict(int)
companies_df = None
for company in year_companies:
company_data_df = healthcare_df[healthcare_df['KF_ID'] == company]
companies_df = merge_dataframe(companies_df, company_data_df)
company_2017_level_count = defaultdict(int)
company_2017_level_mean = defaultdict(float)
company_2017_level_median = defaultdict(float)
for level in sorted(pd.unique(companies_df['ReferenceLevelNum'])):
level_healthcare_df = companies_df[companies_df['ReferenceLevelNum'] == level]
company_2017_level_count[int(level)] = level_healthcare_df.shape[0]
company_level_mean = level_healthcare_df[['KF_ID', 'Base Salary']].dropna().groupby(by = ['KF_ID']).mean()
company_2017_level_mean[int(level)] = company_level_mean.mean()
company_level_median = level_healthcare_df[['KF_ID', 'Base Salary']].dropna().groupby(by = ['KF_ID']).median()
company_2017_level_median[int(level)] = company_level_median.median()
counts = sum(company_2017_level_count.values())
for level in company_2017_level_count:
company_2017_level_count[level] /= counts
fig1 = plt.figure()
plt.title("Reference level distribution of healthcare companies surveyed in 2017")
plt.ylabel("Percentage of total number")
plt.xlabel("Reference Level")
plt.plot(sorted(pd.unique(healthcare_df['ReferenceLevelNum'])), list(company_2017_level_count.values()), label = 'all count')
plt.plot(sorted(pd.unique(healthcare_df['ReferenceLevelNum'])), list(year_level_count.values()), label = '2017')
plt.legend()
plt.show()
fig2 = plt.figure()
plt.title("Income distribution across reference level of healthcare companies surveyed in 2017")
plt.ylabel("Mean income")
plt.xlabel("Reference Level")
plt.plot(sorted(pd.unique(healthcare_df['ReferenceLevelNum'])), list(company_2017_level_mean.values()), label = 'all mean')
plt.plot(sorted(pd.unique(healthcare_df['ReferenceLevelNum'])), list(year_level_mean.values()), label = '2017')
plt.legend()
plt.show()
fig3 = plt.figure()
plt.title("Income distribution across reference level of healthcare companies surveyed in 2017")
plt.ylabel("Median income")
plt.xlabel("Reference Level")
plt.plot(sorted(pd.unique(healthcare_df['ReferenceLevelNum'])), list(company_2017_level_median.values()), label = 'all median')
plt.plot(sorted(pd.unique(healthcare_df['ReferenceLevelNum'])), list(year_level_median.values()), label = '2017')
plt.legend()
plt.show()<jupyter_output><empty_output>
no_license | /healthcare_sector_analysis.ipynb | rsk2327/PDSG_PayInequality | 16
<jupyter_start><jupyter_text><jupyter_code>from google.colab import drive
drive.mount('/content/drive')
from google.colab.patches import cv2_imshow
import cv2
import numpy as np
import os
import math
#drive/MyDrive/new/yolo/
# Load Yolo
#drive/MyDrive/new/yolo/
net = cv2.dnn.readNet("drive/MyDrive/new/yolo/yolov3.weights", "drive/MyDrive/new/yolo/yolov3.cfg")
classes = []
with open("drive/MyDrive/new/yolo/coco.names", "r") as f:
classes = [line.strip() for line in f.readlines()]
layer_names = net.getLayerNames()
output_layers = [layer_names[i[0] - 1] for i in net.getUnconnectedOutLayers()]
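# Note (added): on newer OpenCV builds (>= 4.5.4) getUnconnectedOutLayers() returns a flat array of
# indices, so the line above may need `layer_names[i - 1]` instead of `layer_names[i[0] - 1]`.
# The original indexing is kept here for the OpenCV version this notebook was written against.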
colors = np.random.uniform(0, 255, size=(len(classes), 3))
# myset=set()
# for id in class_ids:
# myset.add(id)
# myset
#firstFrame
firstFrame =None
first=1
first2 =1
warn =0; #variable to indicate warning
#vediocapture
vediocap = cv2.VideoCapture('drive/MyDrive/new/yolo/chair2.mp4')
vediocap.set(cv2.CAP_PROP_FRAME_WIDTH,640)
vediocap.set(cv2.CAP_PROP_FRAME_HEIGHT,480)
vediocap.set(cv2.CAP_PROP_FPS, 30)
# Define the codec and create VideoWriter object
fourcc = cv2.VideoWriter_fourcc(*'DIVX')
outer = cv2.VideoWriter('char2.avi',fourcc, 30.0, (640,480))
terminate =0;
fmid_x=[]
fmid_y=[]
mid_x=[]
mid_y=[]
def distance(x1,x2,y1,y2):
distc = math.sqrt((x2 - x1)**2 + (y2 - y1)**2)
return distc
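# Example (added): distance(0, 3, 0, 4) -> 5.0 (Euclidean distance in pixel units, since the
# bounding-box centres passed in below are pixel coordinates).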
while vediocap.isOpened():
# Capture frame-by-frame
ret, img = vediocap.read()
if not ret:
break
# Loading image
# img = cv2.imread("drive/MyDrive/new/yolo/")
# img = cv2.resize(img, None, fx=0.4, fy=0.4)
height,width,channels = img.shape
#Detecting objects
blob = cv2.dnn.blobFromImage(img, 0.00392, (416, 416), (0, 0, 0), True, crop=False)
net.setInput(blob)
outs = net.forward(output_layers)
# Showing informations on the screen
class_ids = []
confidences = []
boxes = []
myset =set()
for out in outs:
for detection in out:
scores = detection[5:]
class_id = np.argmax(scores)
confidence = scores[class_id]
if confidence > 0.5:
# Object detected
center_x = int(detection[0] * width)
center_y = int(detection[1] * height)
w = int(detection[2] * width)
h = int(detection[3] * height)
# Rectangle coordinates
x = int(center_x - w / 2)
y = int(center_y - h / 2)
boxes.append([x, y, w, h])
confidences.append(float(confidence))
class_ids.append(class_id)
#here we add set
myset.add(class_id)
#if object is Lost
#if live we update for specific time
if first:
print("first frame")
fixedset=set()
first =0
firstframe =img
fixedset =[c for c in myset]
no_of_fixedset =len(fixedset)
# print("fixedSet ",len(fixedset))
# print("myset ",len(myset))
no_of_myset = len(myset)
if(no_of_fixedset != no_of_myset):
warn=warn+1;
entered =0
lost=0
if(warn >5):
warn=0
#update the values
print("=======================================================")
if(no_of_fixedset < no_of_myset): #if both no. equal it will not come here
print("New object entered ")
entered=1
else:
print("object Lost")
lost=1
no_of_fixedset = no_of_myset
indexes = cv2.dnn.NMSBoxes(boxes, confidences, 0.5, 0.4)
kil =len(boxes)
for j in range(kil):
fmid_x.append(0)
fmid_y.append(0)
mid_x.append(0)
mid_y.append(0)
font = cv2.FONT_HERSHEY_PLAIN
for i in range(len(boxes)):
# print(i)
k=0
if i in indexes:
x, y, w, h = boxes[i]
label = str(classes[class_ids[i]])
if(entered):
label =label+" Entered "
entered =0
if(lost):
label =label+" MISSING ! ALERT "
lost =0
color = colors[i]
cv2.rectangle(img, (x, y), (x + w, y + h), color, 2)
cv2.putText(img, label, (x, y + 30), font, 3, color, 3)
#if object moves
if kil:
fmid_x[k] =int((2*x+w)/2)
fmid_y[k] =int((2*y+h)/2)
kil =kil-1
mid_x[k] =(2*x+w)/2
mid_y[k] =(2*y+h)/2
#find distance b/w two objects
dist = distance(fmid_x[k],mid_x[k],fmid_y[k],mid_y[k])
#using this we can even predict direct of object moving
print("distance : " + label,dist)
k = k+1
# cv2_imshow(img) #uncoment this to see the image frames
outer.write(img)
terminate=terminate+1
vediocap.release()
outer.release()
cv2.waitKey(0)
# cv2.destroyAllWindows()
<jupyter_output>first frame
distance : chair 0.7071067811865476
distance : book 0.7071067811865476
distance : chair 0.7071067811865476
distance : book 0.7071067811865476
distance : chair 0.5
distance : book 0.5
distance : chair 0.5
distance : book 0.5
distance : chair 0.7071067811865476
distance : book 0.5
distance : chair 0.7071067811865476
distance : book 0.0
distance : chair 0.7071067811865476
distance : book 0.5
distance : chair 0.0
distance : book 0.5
distance : chair 0.7071067811865476
distance : book 0.5
distance : chair 0.5
distance : book 0.5
distance : chair 0.0
distance : book 0.7071067811865476
distance : chair 0.7071067811865476
distance : book 0.5
distance : person 0.7071067811865476
distance : chair 0.7071067811865476
distance : book 0.0
distance : person 0.0
distance : chair 0.5
distance : book 0.0
distance : person 0.5
distance : chair 0.5
distance : book 0.0
distance : person 0.0
distance : chair 0.5
distance : person 0.0
distance : book 0.0
==========================================[...]
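<jupyter_text>A side note on the detection cell above: depending on the installed OpenCV version, `net.getUnconnectedOutLayers()` returns either an Nx1 array (older releases, which the `i[0] - 1` indexing assumes) or a flat 1-D array of indices (newer releases). A small version-agnostic sketch, kept separate so the original cell stays as recorded:<jupyter_code># hedged sketch: resolve YOLO output layer names regardless of the OpenCV return shape
import numpy as np

def get_output_layer_names(net):
    layer_names = net.getLayerNames()
    out_idx = np.array(net.getUnconnectedOutLayers()).flatten()  # handles both Nx1 and 1-D returns
    return [layer_names[int(i) - 1] for i in out_idx]            # layer indices are 1-based<jupyter_output><empty_output>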
|
no_license
|
/YoloVedioCap.ipynb
|
brkuhgk/yolo
| 1 |
<jupyter_start><jupyter_text># Assignment 01
Made by: Neeraj Kumar
Shift: 03:30 P.M. to 06:30 P.M.<jupyter_code># \newline : a backslash at the end of a line tells Python to ignore the newline (line continuation)
print("Line1 \n Line2 \n Line3") #Example
print("\\") #Backslash(\\) is used to print single backslash and if we want to print four backslash so we must used eight
#backslash at a single time. In short double backslash is equals to single backslash(\\ = \)
print('\'') #if we want to print single qoute so we must write like this. Another Example.
print('I \'m Neeraj')
print("\"") #For double qoute, we must write like this.<jupyter_output>"
<jupyter_text>We can use single quotes inside a double-quoted string, for example: print("My name is 'neeraj'").
But we cannot use unescaped double quotes inside a double-quoted string; print("My name is "NAME"") raises a SyntaxError.<jupyter_code>print("My name is 'Name'") # Single quotes inside double quotes are fine; unescaped double quotes inside double quotes are not
# print("My name is "name")  # uncommenting this raises a SyntaxError: the second " closes the string too early
print("Hello \"World\" World") # To put double quotes inside a double-quoted string, escape them with a backslash; this is an escape sequence
print("Hello \t World")# \t Backslash "t" is used for horizantal tab space between charactors.
print("Hell\blo") #(\b) backslash b is used to finish the extra words. like this.
#Date: 19.05.2019
a = 10   # a must be defined before it is used on the right-hand side
a = a + 5
a + 5
a
a = a +5
a
a += 5
a
a -= 5
a
a *= 5
a
a/= 5
a<jupyter_output><empty_output><jupyter_text># Comparison Operator<jupyter_code>a = 2
b = 2
a == b
a != b # not equal
a > b
a < b
a >= b
a <= b
a
b
a <= b
a = 2
b = 2
a <= b<jupyter_output><empty_output><jupyter_text># Math Expression: Eliminating ambiguity<jupyter_code>10 ** 2
10 * 3 ** 2 + (12 * 3)/ 2
2 / 4 - 3 * 9 + 2<jupyter_output><empty_output>
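<jupyter_text>To see why the expressions above evaluate the way they do, here is a small worked example (added for illustration) spelling out Python's operator precedence: exponentiation first, then multiplication and division, then addition and subtraction, with parentheses overriding everything.<jupyter_code># 10 * 3 ** 2 + (12 * 3) / 2
step1 = 3 ** 2          # exponentiation first -> 9
step2 = 10 * step1      # multiplication -> 90
step3 = (12 * 3) / 2    # parentheses, then division -> 18.0
print(step2 + step3)    # 108.0

# 2 / 4 - 3 * 9 + 2
print(2 / 4 - 3 * 9 + 2)  # 0.5 - 27 + 2 = -24.5<jupyter_output><empty_output>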
|
no_license
|
/Assignment 01 Sunday 03 30 to 06 30.ipynb
|
neeraj72/new
| 4 |
<jupyter_start><jupyter_text>## ReLU<jupyter_code>model_metadata_file_path = pathlib.Path("..", training_logs["relu"], "metadata.json")
model_metadata = io.read_metadata(str(model_metadata_file_path))
model_metadata
history = model_metadata["history"]
metrics_df = visualize.create_metrics_dataframe(history)
loss_plot = visualize.learning_curves("loss", metrics_df)
visualize.save_plot(loss_plot, f"../reports/figures/{species}-relu.loss.png")
accuracy_plot = visualize.learning_curves("accuracy", metrics_df)
visualize.save_plot(accuracy_plot, f"../reports/figures/{species}-relu.accuracy.png")<jupyter_output><empty_output><jupyter_text>## ReLU with Dropout<jupyter_code>dropout_model_metadata_file_path = pathlib.Path("..", training_logs["relu-dropout"], "metadata.json")
dropout_model_metadata = io.read_metadata(str(dropout_model_metadata_file_path))
dropout_model_metadata
dropout_history = dropout_model_metadata["history"]
dropout_metrics_df = visualize.create_metrics_dataframe(dropout_history)
dropout_loss_plot = visualize.learning_curves("loss", dropout_metrics_df)
visualize.save_plot(dropout_loss_plot, f"../reports/figures/{species}-relu-dropout.loss.png")
dropout_accuracy_plot = visualize.learning_curves("accuracy", dropout_metrics_df)
visualize.save_plot(dropout_accuracy_plot, f"../reports/figures/{species}-relu-dropout.accuracy.png")<jupyter_output><empty_output>
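<jupyter_text>To compare the two runs directly, a small added sketch overlays their validation loss curves (assuming the history dictionaries hold standard Keras-style lists under keys such as "loss" and "val_loss"; adjust the keys to whatever `metadata.json` actually contains):<jupyter_code>import matplotlib.pyplot as plt

# hedged sketch: overlay the validation loss of the ReLU and ReLU+Dropout runs
fig, ax = plt.subplots(figsize=(8, 5))
ax.plot(history.get("val_loss", history.get("loss", [])), label="ReLU")
ax.plot(dropout_history.get("val_loss", dropout_history.get("loss", [])), label="ReLU + Dropout")
ax.set_xlabel("epoch")
ax.set_ylabel("validation loss")
ax.legend()
plt.show()<jupyter_output><empty_output>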
|
permissive
|
/notebooks/4.1-agj-evaluate-Tomato.ipynb
|
bitjockey42/ikapati-research
| 2 |
<jupyter_start><jupyter_text># Gradient Descent: Step Sizes - Lab
## Introduction
In this lab, we'll practice applying gradient descent. As we know, gradient descent begins with an initial regression line and moves to a "best fit" regression line by changing the values of $m$ and $b$ and evaluating the RSS. So far, we have illustrated this technique by changing the values of $m$ and evaluating the RSS. In this lab, we will work through applying our technique by changing the value of $b$ instead. Let's get started.
## Objectives
You will be able to:
- Understand how to go from RSS to finding a "best fit" line
- Understand how gradient descent can be used to find the best intercept for your linear regression model
## Setting up our initial regression line
Once again, we'll take a look at our revenue data example, which looks like this:<jupyter_code>import numpy as np
np.set_printoptions(formatter={'float_kind':'{:f}'.format})
import matplotlib.pyplot as plt
np.random.seed(225)
x = np.random.rand(30, 1).reshape(30)
y_randterm = np.random.normal(0,3,30)
y = 3+ 50* x + y_randterm
plt.plot(x, y, '.b')
plt.xlabel("x", fontsize=14)
plt.ylabel("y", fontsize=14);
plt.show()<jupyter_output><empty_output><jupyter_text>We can start with some values for an initial not-so-accurate regression line, $y = 43x + 12$.<jupyter_code>def regression_formula(x):
return 12 + 43*x
np.random.seed(225)
x = np.random.rand(30,1).reshape(30)
y_randterm = np.random.normal(0,3,30)
y = 3+ 50* x + y_randterm
plt.plot(x, y, '.b')
plt.plot(x, regression_formula(x), '-')
plt.xlabel("x", fontsize=14)
plt.ylabel("y", fontsize=14);
def errors(x_values, y_values, m, b):
y_line = (b + m*x_values)
return (y_values - y_line)
def squared_errors(x_values, y_values, m, b):
return errors(x_values, y_values, m, b)**2
def residual_sum_squares(x_values, y_values, m, b):
return sum(squared_errors(x_values, y_values, m, b))<jupyter_output><empty_output><jupyter_text>Now using the `residual_sum_squares`, function, we calculate the RSS to measure the accuracy of the regression line to our data. Let's take another look at that function:<jupyter_code>residual_sum_squares(x, y , 43, 12) <jupyter_output><empty_output><jupyter_text>### Building a cost curveNow let's use the `residual_sum_squares` function to build a cost curve. Keeping the $m$ value fixed at $43$, write a function called `rss_values`.
* `rss_values` passes our dataset with the `x_values` and `y_values` arguments.
* It also takes a list of values of $b$, and an initial $m$ value as arguments.
* It outputs a numpy array with a first column of the `b_values` and a second column of the corresponding RSS values.<jupyter_code>def rss_values(x_values, y_values, m, b_values):
B_RSS = []
for bval in b_values:
rss = residual_sum_squares(x_values, y_values, m, bval)
B_RSS.append([bval, rss])
return np.array(B_RSS)<jupyter_output><empty_output><jupyter_text>Now loop over a list with $b$ values between 0 and 14 with steps of 0.5. Store it in bval_RSS. Print out the resulting table.<jupyter_code>import sys
b_val = np.arange(0, 14.5, .5)
b_val
bval_RSS = rss_values(x, y, 43, b_val)
np.savetxt(sys.stdout, bval_RSS, '%16.2f') #this line is to round your result, which will make things look nicer.
type(bval_RSS)<jupyter_output> 0.00 1750.97
0.50 1552.09
1.00 1368.21
1.50 1199.33
2.00 1045.45
2.50 906.57
3.00 782.69
3.50 673.81
4.00 579.93
4.50 501.05
5.00 437.17
5.50 388.29
6.00 354.41
6.50 335.53
7.00 331.65
7.50 342.77
8.00 368.89
8.50 410.01
9.00 466.13
9.50 537.25
10.00 623.37
10.50 724.49
11.00 840.61
11.50 971.73
12.00 1117.85
12.50 1278.97
13.00 1455.08
13.50 1646.20
14.00 1852.32
<jupyter_text>Plotly provides for us a table chart, and we can pass the values generated from our `rss_values` function to create a table.And let's plot this out using a a line chart.<jupyter_code>plt.figure(figsize=(10,7))
plt.plot(bval_RSS[:,0], bval_RSS[:,1], '-')
plt.xlabel("b-values", fontsize=14)
plt.ylabel("RSS", fontsize=14)
plt.title("RSS with changes to intercept", fontsize=16);<jupyter_output><empty_output><jupyter_text>## Looking at the slope of our cost curveIn this section, we'll work up to building a gradient descent function that automatically changes our step size. To get you started, we'll provide a function called `slope_at` that calculates the slope of the cost curve at a given point on the cost curve. `Use the slope_at` function for b-values 3 and 6.<jupyter_code>def slope_at(x_values, y_values, m, b):
delta = .001
base_rss = residual_sum_squares(x_values, y_values, m, b)
delta_rss = residual_sum_squares(x_values, y_values, m, b + delta)
numerator = delta_rss - base_rss
slope = numerator/delta
return {'b': b, 'slope': slope}
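# Added illustration: slope_at above uses a forward difference. A central difference,
# sketched below with the same residual_sum_squares helper, is usually slightly more accurate.
def slope_at_central(x_values, y_values, m, b):
    delta = .001
    rss_plus = residual_sum_squares(x_values, y_values, m, b + delta)
    rss_minus = residual_sum_squares(x_values, y_values, m, b - delta)
    return {'b': b, 'slope': (rss_plus - rss_minus) / (2 * delta)}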
# Use slope_at
# {'b': 3, 'slope': -232.73066022784406}
slope_at(x,y,43,3)
# Use slope_at
# {'b': 6, 'slope': -52.73066022772355}
slope_at(x,y,43,6)
<jupyter_output><empty_output><jupyter_text>So the `slope_at` function takes in our dataset, and returns the slope of the cost curve at that point. So the numbers -232.73 and -52.73 reflect the slopes at the cost curve when b is 3 and 6 respectively.<jupyter_code>slope_3 = slope_at(x, y, 43, 3)['slope']
slope_6 = slope_at(x, y, 43, 6)['slope']
x_3 = np.linspace(3-1, 3+1, 100)
x_6 = np.linspace(6-1, 6+1, 100)
rss_3 = residual_sum_squares(x, y, 43, 3)
rss_6 = residual_sum_squares(x, y, 43, 6)
tan_3 = rss_3+slope_3*(x_3-3)
tan_6 = rss_6+slope_6*(x_6-6)
plt.figure(figsize=(10,7))
plt.plot(bval_RSS[:,0], bval_RSS[:,1], '-')
plt.plot(x_3, tan_3, color = "red", label = "slope =" + str(round(slope_3,2)))
plt.plot(x_6, tan_6, color = "green", label = "slope =" + str(round(slope_6,2)))
plt.xlabel("b-values", fontsize=14)
plt.ylabel("RSS", fontsize=14)
plt.legend(loc='upper right', fontsize='large')
plt.title("RSS with changes to slope", fontsize=16);<jupyter_output><empty_output><jupyter_text>As you can see, it seems pretty accurate. When the curve is steeper and downwards at $b = 3$, the slope is around -232.73. And at $b = 6$ with our cost curve becoming flatter, our slope is around -52.73. ## Moving towards gradient descentNow that we are familiar with our `slope_at` function and how it calculates the slope of our cost curve at a given point, we can begin to use that function with our gradient descent procedure.
Remember that gradient descent works by starting at a regression line with values m, and b, which corresponds to a point on our cost curve. Then we alter our m or b value (here, the b value) by looking to the slope of the cost curve at that point. Then we look to the slope of the cost curve at the new b value to indicate the size and direction of the next step.So now let's write a function called `updated_b`. The function will tell us the step size and direction to move along our cost curve. The `updated_b` function takes as arguments an initial value of $b$, a learning rate, and the `slope` of the cost curve at that value of $m$. Its return value is the next value of `b` that it calculates.<jupyter_code>def updated_b(b, learning_rate, cost_curve_slope):
return b - learning_rate * cost_curve_slope<jupyter_output><empty_output><jupyter_text>This is what our function returns.<jupyter_code>current_slope = slope_at(x, y, 43, 3)['slope']
updated_b(3, .01, current_slope)
# 5.327
current_slope = slope_at(x, y, 43, 5.327)['slope']
updated_b(5.327, .01, current_slope)
# 6.258
current_slope = slope_at(x, y, 43, 6.258)['slope']
updated_b(6.258, .01, current_slope)
# 6.6305
current_slope = slope_at(x, y, 43, 6.631)['slope']
updated_b(6.631, .01, current_slope)
# 6.780<jupyter_output><empty_output><jupyter_text>Take a careful look at how we use the `updated_b` function. By using our updated value of $b$ we are quickly converging towards an optimal value of $b$.
Now let's write another function called `gradient_descent`. The inputs of the function are `x_values`, `y_values`, `steps`, the `m` we are holding constant, the `learning_rate`, and the `current_b` that we are looking at. The `steps` arguments represents the number of steps the function will take before the function stops. We can get a sense of the return value in the cell below. It is a list of dictionaries, with each dictionary having a key of the current `b` value, the `slope` of the cost curve at that `b` value, and the `rss` at that `b` value.<jupyter_code># gonna use updated b. residual sum squares. and slope at
def gradient_descent(x_values, y_values, steps, current_b, learning_rate, m):
result = []
for i in range(steps):
bval = current_b
rss = residual_sum_squares(x_values,y_values,m,current_b)
slope = slope_at(x_values,y_values,m,current_b)['slope']
# print(type(slope))
# print(slope)
# print(slope['slope'])
result.append({'b':bval, 'rss':round(rss, 2), 'slope': round(slope, 14)})
current_b = updated_b(current_b, learning_rate, slope)
return result
descent_steps = gradient_descent(x, y, 50, 0, learning_rate = .005, m = 43)
descent_steps
#[{'b': 0, 'rss': 1750.97, 'slope': -412.73},
# {'b': 2.063653301142949, 'rss': 1026.94, 'slope': -288.91},
# {'b': 3.5082106119386935, 'rss': 672.15, 'slope': -202.24},
# {'b': 4.519400729495828, 'rss': 498.29, 'slope': -141.57},
# {'b': 5.2272338117862205, 'rss': 413.1, 'slope': -99.1},
# {'b': 5.72271696938941, 'rss': 371.35, 'slope': -69.37},
# {'b': 6.06955517971187, 'rss': 350.88, 'slope': -48.56},
# {'b': 6.312341926937677, 'rss': 340.86, 'slope': -33.99},
# {'b': 6.482292649996282, 'rss': 335.94, 'slope': -23.79},
# {'b': 6.601258156136964, 'rss': 333.53, 'slope': -16.66},
# {'b': 6.684534010435641, 'rss': 332.35, 'slope': -11.66},
# {'b': 6.742827108444089, 'rss': 331.77, 'slope': -8.16},
# {'b': 6.7836322770506285, 'rss': 331.49, 'slope': -5.71},
# {'b': 6.812195895074922, 'rss': 331.35, 'slope': -4.0},
# {'b': 6.832190427692808, 'rss': 331.28, 'slope': -2.8}]<jupyter_output><empty_output><jupyter_text>Looking at our b-values, you get a pretty good idea of how our gradient descent function works. It starts far away with $b = 0$, and the step size is relatively large, as is the slope of the cost curve. As the $b$ value updates such that it approaches a minimum of the RSS, the slope of the cost curve and the size of each step both decrease. Remember that each of these steps indicates a change in our regression line's slope value towards a "fit" that more accurately matches our dataset. Let's plot the final regression line as found before, with $m=43$ and $b=6.83$<jupyter_code>plt.scatter(x,y)
plt.plot(x, 43*x+6.83)<jupyter_output><empty_output>
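<jupyter_text>As a final illustration (an added sketch reusing the `slope_at`, `updated_b`, and `residual_sum_squares` functions defined above), we can stop gradient descent automatically once the cost-curve slope becomes very flat, instead of fixing the number of steps in advance:<jupyter_code># hedged sketch: update b until the slope of the cost curve is nearly zero
def gradient_descent_until_flat(x_values, y_values, current_b, learning_rate, m, tolerance=1.0, max_steps=1000):
    for _ in range(max_steps):
        slope = slope_at(x_values, y_values, m, current_b)['slope']
        if abs(slope) < tolerance:   # cost curve is (almost) flat here: stop
            break
        current_b = updated_b(current_b, learning_rate, slope)
    return current_b

best_b = gradient_descent_until_flat(x, y, current_b=0, learning_rate=.005, m=43)
best_b  # should land close to the b of roughly 6.8 found above<jupyter_output><empty_output>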
|
non_permissive
|
/index.ipynb
|
Patrickbfuller/dsc-2-14-12-gradient-descent-step-sizes-lab-seattle-ds-career-040119
| 12 |
<jupyter_start><jupyter_text># The Sparks Foundation
## #GRIPJUNE21
## Submitted by MOHD AUSAAF NASIR
Data Science and Business Analytics Tasks
Task 1 - Prediction using Supervised ML
Step 1 - Import the required libraries<jupyter_code>import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_absolute_error<jupyter_output><empty_output><jupyter_text>Step 2 - Read data from CSV file<jupyter_code>data= pd.read_csv("http://bit.ly/w-data")<jupyter_output><empty_output><jupyter_text>Step 3 Visualize the data<jupyter_code>data.head(6)<jupyter_output><empty_output><jupyter_text>Visualize the last five scores
<jupyter_code>data.tail(5)
data.describe()<jupyter_output><empty_output><jupyter_text>Check for null values if any<jupyter_code>data.isnull().sum()
data.shape
sns.set_style('darkgrid')
sns.scatterplot(y= data['Scores'], x= data['Hours'])
plt.title('Marks Vs Study Hours', size=24)
plt.ylabel('Marks Percentage', size=16)
plt.xlabel('Hours Studies', size=16)
plt.show()
sns.regplot(x= data['Hours'], y= data['Scores'])
plt.title('Regression Plot',size=24)
plt.ylabel('Marks Percentage', size=16)
plt.xlabel('Hours Studied', size=16)
plt.show()
print(data.corr())<jupyter_output> Hours Scores
Hours 1.000000 0.976191
Scores 0.976191 1.000000
<jupyter_text>Step 4 - Prepare data for Machine learning algorithms and train the algorithm<jupyter_code>data.columns
data.corr()
X = data.iloc[:, :-1].values
Y = data.iloc[:, 1].values
train_X, val_X, train_Y, val_Y = train_test_split(X, Y, random_state = 0)
train_X.shape, train_Y.shape, val_X.shape, val_Y.shape<jupyter_output><empty_output><jupyter_text> Step 5 - Making the predictions<jupyter_code>regression = LinearRegression()
regression.fit(train_X, train_Y)
print("********* Model Trained *********")
pred_Y = regression.predict(val_X)
prediction = pd.DataFrame({'Hours': [i[0] for i in val_X], 'Predicted Marks': [k for k in pred_Y]})
prediction<jupyter_output>********* Model Trained *********
<jupyter_text> Step 6 - Comparing Predicted Marks with Actual Marks<jupyter_code>compare_scores = pd.DataFrame({'Actual Marks': val_Y, 'Predicted Marks': pred_Y})
compare_scores<jupyter_output><empty_output><jupyter_text> Step 7 - Evaluate the model<jupyter_code>plt.scatter(x=val_X, y=val_Y, color='green')
plt.plot(val_X, pred_Y, color='red')
plt.title('Actual vs Predicted', size=24)
plt.ylabel('Marks Percentage', size=16)
plt.xlabel('Hours Studied', size=16)
plt.show()<jupyter_output><empty_output><jupyter_text>Step 8 - Computing the Mean Absolute Error<jupyter_code>print('Mean absolute error: ',mean_absolute_error(val_Y,pred_Y))<jupyter_output>Mean absolute error: 4.130879918502486
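<jupyter_text>In addition to the mean absolute error, it can help to report a couple of other common regression metrics. A small added sketch, assuming the same `val_Y` and `pred_Y` arrays from the cells above:<jupyter_code>from sklearn.metrics import mean_squared_error, r2_score
import numpy as np

print('Root mean squared error:', np.sqrt(mean_squared_error(val_Y, pred_Y)))
print('R^2 score:', r2_score(val_Y, pred_Y))<jupyter_output><empty_output>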
<jupyter_text>Step 9 - Final Result<jupyter_code>hours = [9.25]
result = regression.predict([hours])
print("Score = {}".format(round(result[0],3)))<jupyter_output>Score = 93.893
|
no_license
|
/Task 1.ipynb
|
AusaafNasir/Task-1---The-Sparks-Foundation
| 11 |
<jupyter_start><jupyter_text># Object Oriented Programming
## Homework Assignment
#### Problem 1
Fill in the Line class methods to accept coordinates as a pair of tuples and return the slope and distance of the line.<jupyter_code>class Line:
def __init__(self,coor1,coor2):
self.coor1 = coor1
self.coor2 = coor2
def distance(self):
return (((self.coor2[0]-self.coor1[0])**2)+((self.coor2[1]-self.coor1[1])**2))**0.5
def slope(self):
return (self.coor2[1]-self.coor1[1])/(self.coor2[0]-self.coor1[0])
# EXAMPLE OUTPUT
coordinate1 = (3,2)
coordinate2 = (8,10)
li = Line(coordinate1,coordinate2)
li.distance()
li.slope()<jupyter_output><empty_output><jupyter_text>________
#### Problem 2
Fill in the class <jupyter_code>class Cylinder:
pi = 3.14
def __init__(self,height=1,radius=1):
self.height = height
self.radius = radius
def volume(self):
return Cylinder.pi * self.height * self.radius**2
def surface_area(self):
return 2 * (Cylinder.pi * self.radius**2) + 2 * Cylinder.pi * self.radius * self.height
# EXAMPLE OUTPUT
c = Cylinder(2,3)
c.volume()
c.surface_area()<jupyter_output><empty_output>
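<jupyter_text>A small follow-up sketch (not part of the assignment): the class above hard-codes `pi = 3.14`, so the results are slightly off from what `math.pi` would give. Comparing the two makes the rounding visible:<jupyter_code>import math

# same cylinder (height=2, radius=3), but computed with math.pi for comparison
print(c.volume(), "vs", math.pi * 2 * 3**2)                      # 56.52 vs ~56.55
print(c.surface_area(), "vs", 2*math.pi*3**2 + 2*math.pi*3*2)    # 94.2  vs ~94.25<jupyter_output><empty_output>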
|
no_license
|
/05-Object Oriented Programming/02-Object Oriented Programming Homework.ipynb
|
jackwmarkham/pythonbootcamp
| 2 |
<jupyter_start><jupyter_text># Yelp Project
We will be mining Yelp data today to find zip codes that have the highest rated restaurants for different categories of food. We will be using the excellent Yelp Fusion API. An API (application programming interface) is a way for developers to directly interact with a server. APIs make it easy to request specific data. The Fusion API is the third generation of Yelp APIs. There is very good documentation.
## Make a yelp account
1. Please visit the [Yelp Fusion home page](https://www.yelp.com/developers/documentation/v3) and click the button for creating an app
1. It's not important what you name your app. Just put anything down
1. After app creation you will get both a client id and a client secret. Yelp uses these to keep track of how you use their API
1. Copy and paste the client id and client secret into the strings below
## Authenticate Yourself
Before you can use the API, you must authenticate yourself using the standard OAuth2 protocol. You will make a post request with your id and secret and Yelp will respond with an access token. Once you have your access token you can start using the API.<jupyter_code>import pandas as pd
import requests
# replace CLIENT_ID and CLIENT_SECRET with your info
app_id = 'CLIENT_ID'
app_secret = 'CLIENT_SECRET'
# don't edit these lines
data = {'grant_type': 'client_credentials',
'client_id': app_id,
'client_secret': app_secret}
token = requests.post('https://api.yelp.com/oauth2/token', data=data)
access_token = token.json()['access_token']<jupyter_output><empty_output><jupyter_text>### Using the API
The Yelp API is fairly easy to use. There are about six different **endpoints** that you can use. An endpoint is a URL that you make a web request to. Along with the endpoint you send a list of parameters that specify the results that you would like. This project will use the business search endpoint. It has a url of **https://api.yelp.com/v3/businesses/search**.
There are about a dozen parameters you can use to specify an exact search. Check the [search documentation](https://www.yelp.com/developers/documentation/v3/business_search) for more detail.
### Your first search
The following API call to the business search endpoint, searches for the top 50 Italian restaurants in Houston sorted by rating with price of 1, 2 or 3. The highest price is 4. This makes a web request and will take a second or so.<jupyter_code># don't edit these lines
url = 'https://api.yelp.com/v3/businesses/search'
headers = {'Authorization': f'bearer {access_token}'}
# change these to make an API call
params = {'location': 'Houston',
'categories':'italian',
'limit':'50',
'sort_by':'rating',
'price':'1,2,3'
}
resp = requests.get(url=url, params=params, headers=headers)<jupyter_output><empty_output><jupyter_text>### Examine response
Look back at the documentation and you will see a sample response. The response from most APIs is JSON data, which converts directly into a Python dictionary. JSON data is hierarchical and nested, similar to how your file system stores files and directories.
### Convert response into Python dictionary
The following command converts the response into a Python dictionary<jupyter_code># data is now a Python dictionary
data = resp.json()<jupyter_output><empty_output><jupyter_text>### The data keys
The entire response now resides in the **`data`** Python variable, which is a dictionary. There are three keys in the dictionary.<jupyter_code>data.keys()<jupyter_output><empty_output><jupyter_text>### The businesses key
The main data resides in the **`businesses`** key. The number of results is found under **`total`**, and the lat/long of the search region is paired with **`region`**.<jupyter_code>data['total']
data['region']
# Since data['businesses'] is a list, this will output the first 3 restaurants
data['businesses'][:3]<jupyter_output><empty_output><jupyter_text>### A list of restaurants (dictionaries)
By examining the output of the **`businesses`** key we see a list. This list is composed of dictionaries that contain the actual restaurant data.<jupyter_code># the business key contains a list of restaurants (dictionaries)
type(data['businesses'])
# we specified 50 results returned
len(data['businesses'])<jupyter_output><empty_output><jupyter_text>### Look at the first restaurant<jupyter_code># Lets go to this restaurant!
data['businesses'][0]<jupyter_output><empty_output><jupyter_text># Your turn
It's now your turn to make a completely different request and examine the results. Use the Yelp documentation to make a different search.<jupyter_code># don't edit these lines
url = 'https://api.yelp.com/v3/businesses/search'
headers = {'Authorization': f'bearer {access_token}'}
# CHANGE THESE PARAMETERS to make the search
params = {'location': 'Houston',
'categories':'italian',
'limit':'50',
'sort_by':'rating',
'price':'1,2,3'
}
resp = requests.get(url=url, params=params, headers=headers)
# data is now a Python dictionary that contains all the data
data = resp.json()<jupyter_output><empty_output><jupyter_text>### Use the next few lines to examine the results
# Project: Find the best restaurants at the best price for each zip code
We will now walk through in class how to turn the JSON data into a pandas DataFrame in order to answer more interesting questions like finding the best restaurants at the best price for each zip code.<jupyter_code>with open('zip_codes.txt', 'r') as f:
zip_codes = [int(line.strip()) for line in f.readlines()]
# this will take a long time. You might want to use fewer zip codes
all_restaurants = []
for zip_code in zip_codes:
params = {'location': f'{zip_code}',
'categories':'restaurants',
'offset':'0',
'limit':'50'
}
resp = requests.get(url=url, params=params, headers=headers)
cur_data = resp.json()['businesses']
all_restaurants.extend(cur_data)
rows = []
for restaurant in all_restaurants:
row = {}
row['Category'] = restaurant['categories'][0]['title']
row['Latitude'] = restaurant['coordinates']['latitude']
row['Longitude'] = restaurant['coordinates']['longitude']
row['Phone'] = restaurant['display_phone']
row['ID'] = restaurant['id']
row['image_url'] = restaurant['image_url']
row['Address'] = restaurant['location']['address1']
row['City'] = restaurant['location']['city']
row['State'] = restaurant['location']['state']
row['Zip Code'] = restaurant['location']['zip_code']
row['Name'] = restaurant['name']
row['Price'] = restaurant.get('price', None)
row['Rating'] = restaurant['rating']
row['Review Count'] = restaurant['review_count']
row['URL'] = restaurant['url']
rows.append(row)
df_restaurants = pd.DataFrame(rows)
df_restaurants = df_restaurants.drop_duplicates()
df_restaurants.head(10)<jupyter_output><empty_output>
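<jupyter_text>With the restaurants collected into `df_restaurants`, one hedged way to get at the stated goal -- the best-rated, lowest-priced restaurant for each zip code -- is a sort followed by a group-by (this sketch uses the column names built above and is not the only reasonable definition of "best"):<jupyter_code># keep rows with a known price and rank '$' (cheap) before '$$$$'
ranked = df_restaurants.dropna(subset=['Price']).copy()
ranked['Price Level'] = ranked['Price'].str.len()

# for each zip code: highest rating first, then cheapest price, then most reviews
best_per_zip = (ranked
                .sort_values(['Zip Code', 'Rating', 'Price Level', 'Review Count'],
                             ascending=[True, False, True, False])
                .groupby('Zip Code')
                .head(1))
best_per_zip[['Zip Code', 'Name', 'Category', 'Rating', 'Price', 'Review Count']].head(10)<jupyter_output><empty_output>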
|
no_license
|
/Yelp API Project.ipynb
|
fh4/Intro-Python-Programming
| 9 |
<jupyter_start><jupyter_text>#### Merging dataframes <jupyter_code>daily2 = pd.read_csv('/Users/plarkin/Documents/GA/projects/p5/tanveer/Data/oc_daily.csv')
daily2.head(1)
# to load file
daily = pd.read_csv('/Users/plarkin/Documents/GA/projects/p5/tanveer/clean_daily_oc.csv')
daily.head(1)
daily.shape,daily2.shape
daily = daily.join(daily2, how ='inner',lsuffix = '_')
daily.head(1)
daily.columns<jupyter_output><empty_output><jupyter_text>## Sentiment Analysis
#### Vader Sentiment Analysis
#### Textblob Sentiment Analysis
source : https://neptune.ai/blog/sentiment-analysis-python-textblob-vs-vader-vs-flair
https://textblob.readthedocs.io/en/dev/<jupyter_code>#Adding in Sentiment analysis with designated columns for each output (pos, neg, neu, compound)
analyzer = SentimentIntensityAnalyzer()
#daily['vader'] = daily['text'].map(lambda x:analyzer.polarity_scores(str(x)))
daily['vader_compound'] = [analyzer.polarity_scores(x)['compound'] for x in daily['text']]
# draft_df['vd_neg'] = [analyzer.polarity_scores(x)['neg'] for x in draft_df['alltext']]
# draft_df['vd_neu'] = [analyzer.polarity_scores(x)['neu'] for x in draft_df['alltext']]
# draft_df['vd_pos'] = [analyzer.polarity_scores(x)['pos'] for x in draft_df['alltext']]
%time
from textblob import TextBlob
#testimonial = TextBlob()
#draft_df['tb_polarity'] = [testimonial.polarity(x)['polarity'] for x in draft_df['alltext']]
#draft_df['tb_subj'] = [testimonial.sentiment(x)['subjectivity'] for x in draft_df['alltext']]
daily['textblob_polarity'] = daily['text'].map(lambda words: TextBlob(str(words)).polarity) #polarity is more applicable and comparable to vader compound. subjectivity is more about opinion vs fact
%time<jupyter_output>CPU times: user 2 µs, sys: 0 ns, total: 2 µs
Wall time: 5.25 µs
CPU times: user 2 µs, sys: 0 ns, total: 2 µs
Wall time: 5.01 µs
<jupyter_text>### Text Cleaning
eliminate the punctuation, URL, and @
#source: https://monkeylearn.com/blog/text-cleaning/<jupyter_code>#Use this to remove http, punctuation, URL, and @
daily['text'] = daily['text'].map(lambda x: re.sub(r"(@\[A-Za-z0-9]+)|([^0-9A-Za-z \t])|(\w+:\/\/\S+)|^rt|http.+?", "", str(x.lower())))
#convert price_direction to numerical and drop first row with NA value
daily.dropna(inplace=True)
daily['direction'] = daily['direction'].map({'down' : -1,'same' : 0 , 'up' : 1})<jupyter_output><empty_output><jupyter_text>tokenize and lemmatize
(no longer lemmatizing, results from gridsearch showed superior accuracy without lemmatizing)# Checking w/out tokenizing<jupyter_code>#for checking model with just sentiment
merged_df = daily
#setting date as index and dropping text column for modeling
merged_df.set_index('date', inplace=True)
merged_df.sort_index(inplace=True)
merged_df.drop(columns= ['text', 'day_of_week','date_'], inplace=True)
<jupyter_output><empty_output><jupyter_text>### Train Test Split<jupyter_code>X= merged_df.drop(columns=['Open_pct','Close_pct','Volume_diff','direction'])
y=merged_df['direction']
# X = merged_df.drop(columns= ['price_direction'])
# y = merged_df[['price_direction']].values
# yy = merged_df['price_direction']
y.value_counts(normalize=True)  # check the class balance of the target
#sticking with a test size of 0.20 to save 2 years of data to test on
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42, shuffle=False, test_size=0.2)
#need to one hot encode the multiclass target in order to process it in the nn
#note: with labels -1/0/1, to_categorical places -1 ("down") in the last column, 0 ("same") in the first and 1 ("up") in the second
y_train = to_categorical(y_train, 3)
y_test = to_categorical(y_test, 3)
#source https://stackoverflow.com/questions/61550026/valueerror-shapes-none-1-and-none-3-are-incompatible
# Scale for neural networks
ss = StandardScaler()
Xs_train = ss.fit_transform(X_train)
Xs_test = ss.transform(X_test)
# the length parameter dictates how many rows will constitute a sample
train_sequences = TimeseriesGenerator(Xs_train, y_train, length=90, batch_size=268) #increased batch sizes from 64 to 268
# test sequences
test_sequences = TimeseriesGenerator(Xs_test, y_test, length=90, batch_size=268)
train_sequences[0][0].shape
input_shape = train_sequences[0][0][0].shape
input_shape<jupyter_output><empty_output><jupyter_text># RNN
https://towardsdatascience.com/multi-class-text-classification-with-lstm-1590bee1bd17<jupyter_code># model network
rnn3 = Sequential()
rnn3.add(GRU(8,input_shape=input_shape, return_sequences=True))
rnn3.add(GRU(8,return_sequences=True))
rnn3.add(GRU(8,return_sequences=False)) # false if next layer dense
rnn3.add(Dense(10,activation='relu'))
rnn3.add(Dense(10,activation='relu'))
rnn3.add(Dense(3,activation='softmax'))
# compile model
rnn3.compile(optimizer='adam',loss='categorical_crossentropy',metrics=['acc'])
# show summary
rnn3.summary()
hist_rnn3 = rnn3.fit(train_sequences,validation_data=test_sequences,
epochs=100,verbose=0)
# helper function to plot train and val accuracy
def plot_acc(hist):
plt.plot(hist.history['acc'], label='Train accuracy')
plt.plot(hist.history['val_acc'], label='Test accuracy')
plt.legend();
plot_acc(hist_rnn3)
#modeling and layers
model = Sequential()
model.add(GRU(8, input_shape=input_shape, return_sequences= True))
model.add(GRU(8, return_sequences= False))
model.add(Dense(2, activation='relu'))
model.add(Dense(2, activation='relu'))
model.add(Dense(3, activation='softmax'))
#compile it
model.compile(optimizer='Adam', loss='CategoricalCrossentropy', metrics=['acc'])
#fit it
#adding early stopping as a regularization technique
early_stop = EarlyStopping(monitor='val_loss', min_delta=0, patience=5, verbose=0, mode='auto')
history = model.fit(train_sequences, validation_data=test_sequences, epochs=100, verbose=0, callbacks=[early_stop]) #increased epochs from 100 to 300
#plot our results
plt.plot(history.history['loss'], label='Train loss')
plt.plot(history.history['val_loss'], label='Test loss')
plt.legend();
plt.plot(history.history['acc'], label='Train accuracy')
plt.plot(history.history['val_acc'], label='Test accuracy')
plt.legend();
max(history.history['val_acc'])<jupyter_output><empty_output><jupyter_text>RNN Model history/specs
FIRST
max test accuracy 0.45553821325302124
#modeling and layers
model = Sequential()
model.add(GRU(8, input_shape=input_shape, return_sequences= True))
model.add(GRU(8, return_sequences= False))
model.add(Dense(2, activation='relu'))
model.add(Dense(2, activation='relu'))
model.add(Dense(3, activation='softmax'))
#compile it
model.compile(optimizer=Adam(learning_rate=.0005), loss='CategoricalCrossentropy', metrics=['acc'])
#fit it
history = model.fit(train_sequences, validation_data=test_sequences, epochs=100, verbose=0)
notes: maxing out after 15-20 epochs
SECOND
max test accuracy 0.4570982754230499
#modeling and layers
model = Sequential()
model.add(GRU(8, input_shape=input_shape, return_sequences= True))
model.add(GRU(8, return_sequences= False))
model.add(Dense(8, activation='relu'))#added and increased both hidden layers to 8
model.add(Dense(8, activation='relu'))
model.add(Dense(3, activation='softmax'))
#compile it
model.compile(optimizer=Adam(learning_rate=.0005), loss='CategoricalCrossentropy', metrics=['acc'])
#fit it
history = model.fit(train_sequences, validation_data=test_sequences, epochs=300, verbose=0) #increased epochs from 100 to 300
notes: maxing out after 20 epochs
THIRD
0.4586583375930786
#modeling and layers
model = Sequential()
model.add(GRU(8, input_shape=input_shape, return_sequences= True))
model.add(GRU(8, return_sequences= False))
model.add(Dense(32, activation='relu'))#added and increased both hidden layers to 32
model.add(Dense(32, activation='relu'))
model.add(Dense(3, activation='softmax'))
#compile it
model.compile(optimizer='Adam', loss='CategoricalCrossentropy', metrics=['acc'])
#fit it
history = model.fit(train_sequences, validation_data=test_sequences, epochs=20, verbose=0) #decreased epochs from 300 to 20
FOURTH
0.4602183997631073
added earlystopping
#modeling and layers
model = Sequential()
model.add(GRU(8, input_shape=input_shape, return_sequences= True))
model.add(GRU(8, return_sequences= False))
model.add(Dense(32, activation='relu'))#added and increased both hidden layers to 32
model.add(Dense(32, activation='relu'))
model.add(Dense(3, activation='softmax')) #softmax for multi-classification
#compile it
model.compile(optimizer='Adam', loss='CategoricalCrossentropy', metrics=['acc']) #categorical crossentropy for multi-classification
#fit it
#adding early stopping as a regularization technique
early_stop = EarlyStopping(monitor='val_loss', min_delta=0, patience=5, verbose=0, mode='auto')
history = model.fit(train_sequences, validation_data=test_sequences, epochs=20, verbose=0, callbacks=[early_stop])
FIFTH
0.4492979645729065
#modeling and layers
model = Sequential()
model.add(GRU(8, input_shape=input_shape, return_sequences= True))
model.add(GRU(8, return_sequences= False))
model.add(Dense(32, activation='relu'))#added and increased both hidden layers to 32
model.add(BatchNormalization()) #added to help regularize model
model.add(Dense(32, activation='relu'))
model.add(Dense(3, activation='softmax')) #softmax for multi-classification
#compile it
model.compile(optimizer='Adam', loss='CategoricalCrossentropy', metrics=['acc']) #categorical crossentropy for multi-classification
#fit it
#adding early stopping as a regularization technique
early_stop = EarlyStopping(monitor='val_loss', min_delta=0, patience=5, verbose=0, mode='auto')
history = model.fit(train_sequences, validation_data=test_sequences, epochs=20, verbose=0, callbacks=[early_stop]) #increased epochs from 100 to 300
SIXTH
0.4836193323135376
adding dropout 0.2
#modeling and layers
model = Sequential()
model.add(GRU(8, input_shape=input_shape, return_sequences= True))
model.add(GRU(8, return_sequences= False))
model.add(Dense(32, activation='relu'))#added and increased both hidden layers to 32
model.add(BatchNormalization()) #added to help regularize model
model.add(Dropout(0.2)) #added to help regularize model
model.add(Dense(32, activation='relu'))
model.add(Dropout(0.2)) #added to help regularize model
model.add(Dense(3, activation='softmax')) #softmax for multi-classification
#compile it
model.compile(optimizer='Adam', loss='CategoricalCrossentropy', metrics=['acc']) #categorical crossentropy for multi-classification
#fit it
#adding early stopping as a regularization technique
early_stop = EarlyStopping(monitor='val_loss', min_delta=0, patience=5, verbose=0, mode='auto')
history = model.fit(train_sequences, validation_data=test_sequences, epochs=20, verbose=0, callbacks=[early_stop]) #increased epochs from 100 to 300
SEVENTH
0.4492979645729065
adding kernel regularization l1
#modeling and layers
model = Sequential()
model.add(GRU(8, input_shape=input_shape, return_sequences= True))
model.add(GRU(8, return_sequences= False))
model.add(Dense(32, activation='relu'))#added and increased both hidden layers to 32
model.add(BatchNormalization()) #added to help regularize model
model.add(Dropout(0.2)) #added to help regularize model
model.add(Dense(32, activation='relu', kernel_regularizer='l1'))
model.add(Dropout(0.2)) #added to help regularize model
model.add(Dense(3, activation='softmax', kernel_regularizer='l1')) #softmax for multi-classification
#compile it
model.compile(optimizer='Adam', loss='CategoricalCrossentropy', metrics=['acc']) #categorical crossentropy for multi-classification
#fit it
#adding early stopping as a regularization technique
early_stop = EarlyStopping(monitor='val_loss', min_delta=0, patience=5, verbose=0, mode='auto')
history = model.fit(train_sequences, validation_data=test_sequences, epochs=20, verbose=0, callbacks=[early_stop]) #increased epochs from 100 to 300
# LSTM<jupyter_code># make df?
X_train_lstm= np.reshape(Xs_train,(Xs_train.shape[0],1,Xs_train.shape[1]))
X_test_lstm = np.reshape(Xs_test,(Xs_test.shape[0],1,Xs_test.shape[1]))
X_train_lstm.shape, X_test_lstm.shape
# model network
lstm = Sequential()
lstm.add(LSTM(64,input_shape=(1,16)))
lstm.add(Dense(8,activation='relu'))
lstm.add(Dense(3,activation='softmax'))
# compile model
lstm.compile(optimizer='adam',loss='categorical_crossentropy',metrics=['acc'])
lstm.summary()
# fit model
hist_lstm = lstm.fit(X_train_lstm,y_train, validation_data=(X_test_lstm,y_test), epochs=100,verbose=0)
plot_acc(hist_lstm)
max(hist_lstm.history['val_acc'])<jupyter_output><empty_output>
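<jupyter_text>To wrap up, a short added sketch that puts the three networks trained above side by side (assuming `hist_rnn3`, `history` and `hist_lstm` are still in memory from the earlier cells):<jupyter_code># best validation accuracy reached by each model
results = {
    'rnn3 (3-layer GRU)': max(hist_rnn3.history['val_acc']),
    'model (2-layer GRU)': max(history.history['val_acc']),
    'lstm': max(hist_lstm.history['val_acc']),
}
for name, acc in sorted(results.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{name}: best validation accuracy = {acc:.4f}")<jupyter_output><empty_output>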
|
no_license
|
/classification_with_text/RNN & LSTM Modeling-Only Sentiment.ipynb
|
plarkin13/stock_prediction_sentiment
| 7 |
<jupyter_start><jupyter_text># Predicting eye tracking features
Eye-tracking data from reading represent an important resource for both linguistics and natural language processing. The ability to accurately model gaze features is crucial to advance our understanding of language processing. On one hand, it can tell us how well our computational models align with human language processing mechanisms. On the other hand, it may provide more accurate models of reading to further psycholinguistic research.
In this tutorial we will first get acquainted with eye tracking data and the structure of the [Zurich Cognitive Language Processing Corpus (ZuCo)](https://www.nature.com/articles/sdata2018291), for which eye movements were recorded during natural reading of English sentences. We use the eye tracking data as is was provided for the [CMCL 2021 Shared Task](https://cmclorg.github.io/shared_task).
Then, we evaluate the ability of a contextualized language model to predict psycholinguistic data, specifically we use transformer models such as BERT and DistilBERT to predict eye tracking features recorded during reading (fixation duration, fixation proportion, etc.). The goal of the task is to predict 5 different token-level eye-tracking metrics from the ZuCo corpus. Finally, we compare the performance of a pre-trained model and a model fine-tuned on eye tracking data.## Prerequisites<jupyter_code># Install the required pip packages in the current Jupyter kernel
import sys
!{sys.executable} -m pip install numpy pandas seaborn matplotlib torch transformers spacy
# Import the required libraries
import numpy as np
import pandas as pd
import random
import torch
import transformers
import seaborn as sns
import matplotlib.pyplot as plt<jupyter_output><empty_output><jupyter_text>## Loading dataset<jupyter_code># If you are workin on Google Colab, you need to clone the repo to get access to the data files:
#!git clone https://github.com/beinborn/ESSLLI2021.git
# You still need to make sure that the files are under the correct paths!
# Load ZuCo corpus (train, test and gold data files)
training_data = pd.read_csv("data/training_data.csv")
gold_data = pd.read_csv("data/gold_data.csv")
print(len(training_data['sentence_id'].unique()), "training sentences including", len(training_data), "words.")
print(len(gold_data['sentence_id'].unique()), "test sentences including", len(gold_data), "words.")<jupyter_output>800 training sentences including 15736 words.
191 test sentences including 3554 words.
<jupyter_text>## Data analysis
Originally, the ZuCo corpus contains eye tracking data from 30 participants. Here, the data has already been averaged across readers and the feature values have been scaled to a range between 0 and 100. The eye-tracking feature values are scaled to facilitate evaluation via the mean absolute error. The features nFix and fixProp are scaled separately, while FFD, GPT and TRT are scaled together since these are all dependent and measured in milliseconds. The data was randomly shuffled before splitting into training and test data.<jupyter_code># Let's have a look at some sentences and their eye tracking features to get a better idea of the data.
training_data.loc[training_data['sentence_id'] == 444]
# Note: change the sentence_id to see other sentences (0 - 799)<jupyter_output><empty_output><jupyter_text>### Eye tracking features explained
For each word, there are five eye-tracking features:
- _nFix_ (number of fixations): total number of fixations on the current word.
- _FFD_ (first fixation duration): the duration of the first fixation on the prevailing word.
- _GPT_ (go-past time): the sum of all fixations prior to progressing to the right of the current word, including regressions to previous words that originated from the current word.
- _TRT_ (total reading time): the sum of all fixation durations on the current word, including regressions.
- _fixProp_ (fixation proportion): the proportion of participants that fixated the current word (as a proxy for how likely a word is to be fixated).#### A note on tokenization:
The tokens in the sentences are split in the same manner as they were presented to the participants during the eye tracking experiments. Hence, this does not necessarily follow a linguistically correct tokenization. For example, the sequences
''(except,'' and ''don't'' were presented as such to the reader and not split into ''('', ''except'', '','' and ''do'', ''n't'' as a tokenizer would do to ensure a more natural way of reading.
Sentence endings are marked with an \ symbol added to the last token.
To get a better idea of the data, let's plot some sentences and their corresponding eye tracking features...<jupyter_code># Visualization: fixation times
# Note: change the sentence_id to see other sentences (0 - 799)
example_sentence = training_data.loc[training_data['sentence_id'] == 444]
fig = plt.figure(figsize=(len(example_sentence)+2,3))
feature_to_draw = "TRT" # or "GPT" or "TRT"
words = [s.replace("<EOS>", "") for s in example_sentence["word"]]
ax = sns.scatterplot(data=example_sentence, x="word_id", y=0, size=feature_to_draw, hue=feature_to_draw, palette="flare", sizes=(200,5000), legend=False)
ax.set_frame_on(False)
ax.tick_params(axis=u'both', which=u'both',length=0)
plt.xticks(ticks=example_sentence["word_id"], labels=words, fontsize=14)
plt.yticks([0], [feature_to_draw], fontsize=14)
plt.ylim(-0.05,0.07)
plt.xlabel(None)
plt.tight_layout()
plt.show()
# Visualization: fixation proportion
# Note: change the sentence_id to see other sentences (0 - 799)
example_sentence = training_data.loc[training_data['sentence_id'] == 444]
fig = plt.figure(figsize=(len(example_sentence)+2,3))
feature_to_draw = "fixProp" # or nFix
words = [s.replace("<EOS>", "") for s in example_sentence["word"]]
ax = sns.barplot(data=example_sentence, x="word_id", y="fixProp", color="lightblue")
ax.set_frame_on(False)
ax.tick_params(axis=u'both', which=u'both',length=0)
plt.xticks(ticks=example_sentence["word_id"], labels=words, fontsize=14)
plt.xlabel(None)
plt.ylabel(feature_to_draw, fontsize=16)
plt.tight_layout()
plt.show()
# Now let's get a complete overview of the training data
c = sns.color_palette("hls", 10)
sns.set(font_scale = 1)
sns.set_style("whitegrid")
ax = sns.boxplot(data=training_data[["nFix","FFD","GPT","TRT","fixProp"]], palette=c[5:], color='grey', linewidth=1, fliersize=1)
medians = []
for f in ["nFix","FFD","GPT","TRT","fixProp"]:
median = training_data[f].median()
medians.append(median)
median_labels = [str(np.round(s, 2)) for s in medians]
pos = range(len(medians))
for tick,label in zip(pos,ax.get_xticklabels()):
ax.text(pos[tick], -4, median_labels[tick], #medians[tick] + offsets[tick]
horizontalalignment='center', size='small', color='black')#, weight='semibold')
plt.show()
<jupyter_output><empty_output><jupyter_text>## Training model
We compare two state-of-the-art language models trained for English: BERT and DistilBERT. BERT was the first widely successful transformer-based language model and remains
highly influential. DistilBERT is a variant of BERT that requires less training time due to a considerable reduction
of the training parameters while maintaining similar performance on benchmark datasets.
This allows us to analyze if the lighter architectures have an influence on the patterns of eye movements that the models learn.<jupyter_code># import source code for training and evaluating models
import model
import dataloader
import evaluation_metric<jupyter_output><empty_output><jupyter_text>Training the model can take a while. It's best to start with a low number of epochs. Alternatively, you can skip the next steps and use the provided models in the "models/" directory:
- A pre-trained DistilBERT model (i.e., fine-tuned for 0 epochs)
- A fine-tuned DistilBERT model (trained for 150 epochs on the eye-tracking data)<jupyter_code># Note:
# You could load any other BERT or DistilBERT model version available in the Huggingface library in the same manner.
# DistilBERT was chosen for this tutorial since it has less parameters.
random.seed(12345)
device = torch.device('cpu')
# define model
model_name = 'distilbert-base-uncased'
regr_model = model.TransformerRegressionModel(model_name).to(device)
train_data = dataloader.EyeTrackingCSV(training_data, model_name=model_name)
train_loader = torch.utils.data.DataLoader(train_data, batch_size=16, shuffle=True)
optimizer = torch.optim.AdamW(regr_model.parameters(), lr=5e-5)
loss = torch.nn.MSELoss()
# train model
# Note: start with 1 or 2 epochs, so find out how long this takes...
# set to 0 or empty list for no fine-tuning?
num_epochs = 5
# is this needed?
print("Starting training...")
for epoch in range(num_epochs):
model.train(regr_model, train_loader, optimizer, loss)
print("Epoch:", epoch)
print("Training done.")
# save model
output_model = './models/'+model_name+'_'+str(num_epochs)
torch.save(regr_model, output_model)
print("Model saved to", output_model)<jupyter_output>Some weights of the model checkpoint at distilbert-base-uncased were not used when initializing DistilBertModel: ['vocab_layer_norm.weight', 'vocab_transform.bias', 'vocab_projector.bias', 'vocab_layer_norm.bias', 'vocab_projector.weight', 'vocab_transform.weight']
- This IS expected if you are initializing DistilBertModel from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).
- This IS NOT expected if you are initializing DistilBertModel from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
<jupyter_text>Note: You can ignore the warning about the model checkpoint...
## Evaluate modelsLoad pre-trained model and predict eye tracking features
<jupyter_code># load pretrained model
pretrained_model_to_evaluate = "models/distilbert-base-uncased_0"
pretrained_model = torch.load(pretrained_model_to_evaluate)
predictions_pretrained = model.predict(pretrained_model, pretrained_model_to_evaluate, gold_data)
predictions_pretrained
# load model fine-tuned on eye-tracking data
finetuned_model_to_evaluate = "models/distilbert-base-uncased_150"
finetuned_model = torch.load(finetuned_model_to_evaluate)
predictions_finetuned = model.predict(finetuned_model, finetuned_model_to_evaluate, gold_data)
predictions_finetuned<jupyter_output><empty_output><jupyter_text>### Baseline
We use the mean central tendency as a baseline for this regression problem, i.e., we calculate the mean value for each feature from the training data and use it as a prediction for all words in the test data.<jupyter_code># compute mean values from training file
features = ["nFix","FFD","GPT","TRT","fixProp"]
mean_feat_values = {}
print("Mean feature values in training data:")
for feat in features:
avg = np.mean(training_data[feat])
mean_feat_values[feat] = avg
print (feat, avg)
# make dataframe of test data with mean values
mean_data = gold_data.copy()
for feat in features:
mean_data[feat] = mean_feat_values[feat]<jupyter_output>Mean feature values in training data:
nFix 15.1017782748365
FFD 3.1910228250548047
GPT 6.345511205701235
TRT 5.309609916601204
fixProp 67.05723394392223
<jupyter_text>### Evaluation metric: Mean absolute error
Now that we have the predictions of the mean baseline as well as a pre-trained and a fine-tuned model, we can calculate the results. The model performance is evaluated using the _mean absolute error_ (MAE) metric, meaning that we average over the error of all samples in the test set:
$MAE = \frac{\sum_{i=1}^{n} | y_{i} - x_{i}|}{n} $
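The evaluation below uses the provided `evaluation_metric` helper; purely for illustration, the formula itself is only a few lines of NumPy (a sketch, with `gold` the true values and `predicted` the model outputs for one feature):<jupyter_code># illustration of the MAE formula itself (the cells below use evaluation_metric.evaluate)
import numpy as np

def mae(gold, predicted):
    gold, predicted = np.asarray(gold, dtype=float), np.asarray(predicted, dtype=float)
    return np.mean(np.abs(gold - predicted))

# e.g. for a single feature: mae(gold_data["FFD"], predictions_finetuned["FFD"])<jupyter_output><empty_output>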
<jupyter_code># Calculate mean absolute error on the predictions
print("Results pre-trained model:")
mae_pretrained = evaluation_metric.evaluate(predictions_pretrained, gold_data)
print("Results fine-tuned model:")
mae_finetuned = evaluation_metric.evaluate(predictions_finetuned, gold_data)
print("Results mean baseline:")
mae_mean = evaluation_metric.evaluate(mean_data, gold_data)
data = pd.DataFrame(data={'results': ["mean", "pretrained", "finetuned"], 'mae': [mae_mean, mae_pretrained, mae_finetuned]})
ax = sns.barplot(x="results", y="mae", data=data, palette="flare")
ax.set_ylabel('MAE overall', fontsize=16)
plt.show()<jupyter_output>Results pre-trained model:
MAE for nFix: 15.087638004183914
MAE for FFD: 2.9985058935006013
MAE for GPT: 6.3720704862735795
MAE for TRT: 5.256835941134088
MAE for fixProp: 67.01796054929119
Overall MAE: 19.346602174876672
Results fine-tuned model:
MAE for nFix: 4.3378435708878325
MAE for FFD: 0.7026542748697112
MAE for GPT: 2.5579450948635043
MAE for TRT: 1.6773681869889472
MAE for fixProp: 11.880517177967716
Overall MAE: 4.2312656611155415
Results mean baseline:
MAE for nFix: 7.302784572412885
MAE for FFD: 1.1491256623904909
MAE for GPT: 3.781862596034746
MAE for TRT: 2.777713539662726
MAE for fixProp: 21.77547317917012
Overall MAE: 7.357391909934194
<jupyter_text>As the numbers and the plot show, the pre-trained model that is not trained on eye-tracking features but purely on text, does not beat the mean baseline. However, the model fine-tuned on eye-tracking data is substantially better (i.e., lower MAE) than the baseline.
Based on these results, we can conclude that the difficulty of predicting the individual eye-tracking features is analogous in all models: FFD is the most accurately predicted feature. This seems to suggest that the models are more capable to capture early processing stages of lexical access compared to late-stage semantic integration, represented by TRT and nFix.
Generally, the error for the three features representing reading times in milliseconds (FFD, GPT, and TRT), is much lower than for nFix and fixProp. The latter are the features with the most variance. The mean baseline results also reveal the same patterns. The features with lower variance achieve lower MAEs. The fixProp feature, representing how likely a word is to be fixated, might be more challenging to predict since it is more dependent on subject-specific characteristics.
### Visualization of predictions<jupyter_code># plot a sentence with real eye tracking data and the corresponding model predictions
sentence_to_plot = 953 #801, 901, 950, 953
feature_to_plot = "FFD"
true_sentence = gold_data.loc[gold_data['sentence_id'] == sentence_to_plot]
predicted_sentence_finetuned = predictions_finetuned.loc[predictions_finetuned['sentence_id'] == sentence_to_plot]
predicted_sentence_pretrained = predictions_pretrained.loc[predictions_pretrained['sentence_id'] == sentence_to_plot]
# the figure width scales with the sentence length
fig, ax = plt.subplots(1,1,figsize=(len(true_sentence)*1.5,5))
ax.plot(list(range(len(true_sentence))), true_sentence[feature_to_plot], label="correct", color="blue", linestyle="--")
ax.plot(list(range(len(predicted_sentence_finetuned))), predicted_sentence_finetuned[feature_to_plot], label="fine-tuned", color="red", marker='o', ms=10, linestyle="-.")
ax.plot(list(range(len(predicted_sentence_pretrained))), predicted_sentence_pretrained[feature_to_plot], label="pretrained", color="pink", marker='o', ms=10, linestyle="-.")
# todo: add mean as a task
ax.plot(list(range(len(predicted_sentence_pretrained))), [mean_feat_values[feature_to_plot]]*len(predicted_sentence_pretrained), label="mean", color="lightblue", marker='o', ms=10, linestyle="--")
ax.set_ylabel(feature_to_plot, fontsize=16)
ax.set_xticks(list(range(len(true_sentence))))
ax.set_xticklabels(true_sentence['word'], fontsize=16)
ax.legend(fontsize=13)
plt.show()<jupyter_output><empty_output><jupyter_text>### Part of Speech Analysis
Now we have both evaluated the aggregated performance of the models as well as looked at the predictions for specific sentences. Next, we want to analyze whether there are certain parts of speech (PoS) for which eye tracking features are more easily predictable than others.
Therefore, we first use the SpaCy part-of-speech tagger to get the PoS tags for the sentence in our test data: <jupyter_code># load English SpaCy model
import spacy
!python3 -m spacy download en_core_web_sm
nlp = spacy.load("en_core_web_sm")
# making sure Spacy doesn't mess with out tokenization
from spacy.tokens import Doc
def custom_tokenizer(wordlist):
"""replace spacy tokenizer with already tokenized list of words"""
return Doc(nlp.vocab, words=wordlist, spaces=None)
nlp.tokenizer = custom_tokenizer
# tag the data and add the tags to the dataframe
def pos_tag(data_to_tag):
words = [str(w).replace("<EOS>", ".") for w in data_to_tag['word']]
tagged_sents = nlp(words)
data_to_tag['pos_tags'] = [token.pos_ for token in tagged_sents]
return data_to_tag
tagged_data = pos_tag(gold_data)
tagged_predictions = pos_tag(predictions_finetuned)
feature_to_plot = "TRT"
#fig = plt.figure(figsize=(10,5))
fig, ax = plt.subplots(1,1,figsize=(10,5))
sns.barplot(x="pos_tags", y=feature_to_plot, data=tagged_data, color="green", label="true eye tracking data", alpha=0.4)
sns.barplot(x="pos_tags", y=feature_to_plot, data=tagged_predictions, color="red", label="predicted", alpha=0.4)
plt.legend()
plt.show()<jupyter_output><empty_output>
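<jupyter_text>As a last added sketch, the same part-of-speech view can be turned into numbers: computing the mean absolute error per PoS tag shows directly which word classes the fine-tuned model predicts well (this reuses `tagged_data` and `tagged_predictions` from the cells above and assumes their rows are aligned):<jupyter_code>import numpy as np
import pandas as pd

feature = "TRT"
abs_err = np.abs(tagged_data[feature].values - tagged_predictions[feature].values)
per_pos_mae = (pd.DataFrame({"pos_tags": tagged_data["pos_tags"].values, "abs_err": abs_err})
                 .groupby("pos_tags")["abs_err"]
                 .mean()
                 .sort_values())
per_pos_mae<jupyter_output><empty_output>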
|
no_license
|
/code/tutorial1/esslli_tutorial1.ipynb
|
yancong222/ESSLLI2021
| 11 |
<jupyter_start><jupyter_text>### Control Statements
- Conditional Statements
- if-else
- Iterative Statements
- for
- while
    - do-while (not available natively in Python; see the emulation sketch at the end of this notebook)<jupyter_code>year = int(input("Enter a year "))
if year%400 == 0 or (year% 100 !=0 and year%4 ==0):
print("Leap Year")
else:
print("Non-Leap Year")
n=int(input("Enter a number "))
i=1
while(i<=n):
print(i,end=" ")
i=i+1
n=int(input("Enter a number "))
i=1
s=0
while(i<=n):
if(i%2==0):
s=s+i
i=i+1
print("The Sum of even numbers from 1 to",n,"is",s)
n=int(input("Enter a number "))
i=0
while(n!=0):
i=n%10
print(i,end=" ")
    n=n//10 # // is floor (integer) division; plain / would turn n into a float and the digit extraction would not work as intended.<jupyter_output>Enter a number 23
3 2 <jupyter_text>### Functional Programming
- Simple
- Better Reusability
- Easy to Understand
- Division of Program into small blocks<jupyter_code>def fun1():
n=int(input("Enter a number "))
i=0
s=0
while(n!=0):
i=n%10
if(i%2==0):
s=s+i
n=n//10
return s
fun1()
def fun1():
n=int(input("Enter a number "))
i=0
d=0
while(n!=0):
i=n%10
if(i>d):
d=i
n=n//10
return d
fun1()
def fun1():
n=int(input("Enter a number "))
i=0
s=0
p=n
while(n!=0):
f=1
i=n%10
while(i!=0):
f=f*i
i=i-1
s=s+f
n=n//10
if(p==s):
print("Yes")
else:
print("No")
fun1()
def fun1():
n=int(input("Enter a number "))
d=n
p=0
r=0
while(n!=0):
r=n%10
p=p*10 + r
n=n//10
if(d==p):
print("Palindrome ")
else:
print("Not Palindrome")
fun1()<jupyter_output>Enter a number 123
Not Palindrome
<jupyter_text>### When to use which loop
- For a non-fixed number of iterations, use the while loop
- For a fixed number of iterations, use the for loop<jupyter_code>def printSeries(lb,ub):
    for x in range(lb,ub): # range(lb, ub) generates lb, lb+1, ..., ub-1 (the upper bound is excluded)
print(x,end=" ")
return
printSeries(11,25)
def printAlternate(lb,ub):
for x in range(lb,ub+1,2):
print(x,end=" ")
return
lb=int(input("Enter the lower bound "))
ub=int(input("Enter the upper bound "))
printAlternate(lb,ub)
def factors():
n=int(input("Enter a number "))
i=1
while(i<=n):
if(n%i==0):
print(i,end=" ")
i=i+1
factors()
def prime():
n=int(input("Enter a number "))
i=2
c=0
while(i<n):
if(n%i==0):
c=c+1
i=i+1
if(c==0):
print("True")
else:
print("False ")
prime()
def checkprime(i):
j=2
c=0
while(j<i):
if(i%j==0):
c=c+1
        j=j+1 # increment the divisor j (incrementing i here would make the loop run forever)
if(c==0):
return 1
else:
return 0
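# Editor's aside (alternative sketch, not part of the original exercise): a common, faster
# primality check only tries divisors up to the square root of n and stops at the first hit.
def checkprime_fast(n):
    if n < 2:
        return 0
    j = 2
    while j * j <= n:
        if n % j == 0:
            return 0
        j = j + 1
    return 1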
def primec():
n=int(input("Enter a number "))
i=2
pc=0
while(i<=n):
pc=pc+checkprime(i)
i=i+1
print(pc)
primec()<jupyter_output><empty_output>
|
no_license
|
/Workshop_File--Day 2.ipynb
|
Amogh19/Problem-Solving-and-Programming
| 3 |
<jupyter_start><jupyter_text># Exercise 2: Creating Redshift Cluster using the AWS python SDK
## An example of Infrastructure-as-code<jupyter_code>import pandas as pd
import boto3
import json<jupyter_output><empty_output><jupyter_text># STEP 0: Make sure you have an AWS secret and access key
- Create a new IAM user in your AWS account
- Give it `AdministratorAccess` from the `Attach existing policies directly` tab
- Take note of the access key and secret
- Edit the file `dwh.cfg` in the same folder as this notebook and fill in the following:
[AWS]
KEY= YOUR_AWS_KEY
SECRET= YOUR_AWS_SECRET
# Load DWH Params from a file<jupyter_code>import configparser
config = configparser.ConfigParser()
config.read_file(open('dwh.cfg'))
KEY = config.get('AWS','KEY')
SECRET = config.get('AWS','SECRET')
DWH_CLUSTER_TYPE = config.get("DWH","DWH_CLUSTER_TYPE")
DWH_NUM_NODES = config.get("DWH","DWH_NUM_NODES")
DWH_NODE_TYPE = config.get("DWH","DWH_NODE_TYPE")
DWH_CLUSTER_IDENTIFIER = config.get("DWH","DWH_CLUSTER_IDENTIFIER")
DWH_DB = config.get("DWH","DWH_DB")
DWH_DB_USER = config.get("DWH","DWH_DB_USER")
DWH_DB_PASSWORD = config.get("DWH","DWH_DB_PASSWORD")
DWH_PORT = config.get("DWH","DWH_PORT")
DWH_IAM_ROLE_NAME = config.get("DWH", "DWH_IAM_ROLE_NAME")
(DWH_DB_USER, DWH_DB_PASSWORD, DWH_DB)
pd.DataFrame({"Param":
["DWH_CLUSTER_TYPE", "DWH_NUM_NODES", "DWH_NODE_TYPE", "DWH_CLUSTER_IDENTIFIER", "DWH_DB", "DWH_DB_USER", "DWH_DB_PASSWORD", "DWH_PORT", "DWH_IAM_ROLE_NAME"],
"Value":
[DWH_CLUSTER_TYPE, DWH_NUM_NODES, DWH_NODE_TYPE, DWH_CLUSTER_IDENTIFIER, DWH_DB, DWH_DB_USER, DWH_DB_PASSWORD, DWH_PORT, DWH_IAM_ROLE_NAME]
})<jupyter_output><empty_output><jupyter_text>## Create clients for EC2, S3, IAM, and Redshift<jupyter_code>ec2 = boto3.client(
"ec2",
region_name="us-west-2",
aws_access_key_id=KEY,
aws_secret_access_key=SECRET
)
s3 = boto3.client(
"s3",
region_name="us-west-2",
aws_access_key_id=KEY,
aws_secret_access_key=SECRET
)
iam = boto3.client(
"iam",
region_name="us-west-2",
aws_access_key_id=KEY,
aws_secret_access_key=SECRET
)
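# Editor's aside (sketch, not part of the original exercise): the clients in this section
# repeat the same credentials; a boto3 Session can hold them once and hand out clients,
# e.g. session.client("redshift").
session = boto3.session.Session(
    region_name="us-west-2",
    aws_access_key_id=KEY,
    aws_secret_access_key=SECRET
)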
redshift = boto3.client(
"redshift",
region_name="us-west-2",
aws_access_key_id=KEY,
aws_secret_access_key=SECRET
)<jupyter_output><empty_output><jupyter_text>## Check out the sample data sources on S3<jupyter_code>for obj in s3.list_objects(Bucket="awssampledbuswest2", Prefix="ssbgz/")["Contents"]:
print(obj["Key"])<jupyter_output>ssbgz/
ssbgz/customer0002_part_00.gz
ssbgz/dwdate.tbl.gz
ssbgz/lineorder0000_part_00.gz
ssbgz/lineorder0001_part_00.gz
ssbgz/lineorder0002_part_00.gz
ssbgz/lineorder0003_part_00.gz
ssbgz/lineorder0004_part_00.gz
ssbgz/lineorder0005_part_00.gz
ssbgz/lineorder0006_part_00.gz
ssbgz/lineorder0007_part_00.gz
ssbgz/part0000_part_00.gz
ssbgz/part0001_part_00.gz
ssbgz/part0002_part_00.gz
ssbgz/part0003_part_00.gz
ssbgz/supplier.tbl_0000_part_00.gz
ssbgz/supplier0001_part_00.gz
ssbgz/supplier0002_part_00.gz
ssbgz/supplier0003_part_00.gz
<jupyter_text>## STEP 1: IAM ROLE
- Create an IAM Role that allows Redshift to access the S3 bucket (read-only)<jupyter_code>assume_role_doc = json.dumps({
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"Service": "redshift.amazonaws.com"
},
"Action": "sts:AssumeRole"
}
]
})
dwhRole = iam.create_role(
RoleName=DWH_IAM_ROLE_NAME,
Description="Allow redshift to read from S3 Bucket",
AssumeRolePolicyDocument=assume_role_doc
)
iam.attach_role_policy(
PolicyArn="arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess",
RoleName=DWH_IAM_ROLE_NAME
)
# Copied from AWS Console
## https://console.aws.amazon.com/iam/home?region=us-west-1#/policies/arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess$serviceLevelSummary
## arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess
roleArn = iam.get_role(
RoleName=DWH_IAM_ROLE_NAME
)["Role"]["Arn"]<jupyter_output><empty_output><jupyter_text>## STEP 2: Redshift Cluster
- Create a RedShift Cluster
- For complete arguments to `create_cluster`, see [docs](https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/redshift.html#Redshift.Client.create_cluster)<jupyter_code>response = redshift.create_cluster(
# add parameters for hardware
DBName=DWH_DB,
ClusterIdentifier=DWH_CLUSTER_IDENTIFIER,
ClusterType=DWH_CLUSTER_TYPE,
NodeType=DWH_NODE_TYPE,
NumberOfNodes=int(DWH_NUM_NODES),
# add parameters for identifiers & credentials
MasterUsername=DWH_DB_USER,
MasterUserPassword=DWH_DB_PASSWORD,
# add parameter for role (to allow s3 access)
IamRoles=[
roleArn
]
)<jupyter_output><empty_output><jupyter_text>## 2.1 *Describe* the cluster to see its status
- run this block several times until the cluster status becomes `Available`<jupyter_code>metadata_of_interest = [
"ClusterIdentifier",
"NodeType",
"ClusterStatus",
"MasterUsername",
"DBName",
"Endpoint",
"NumberOfNodes",
"VpcId"
]
describe_dict = redshift.describe_clusters(
ClusterIdentifier=DWH_CLUSTER_IDENTIFIER
)["Clusters"][0]
{k: v for (k, v) in describe_dict.items() if k in metadata_of_interest}
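# Editor's sketch (not in the original notebook): instead of re-running this cell by hand,
# the status can be polled until it becomes "available"; only describe_clusters (used above)
# and time.sleep are needed.
import time

def wait_for_cluster_available(redshift_client, cluster_identifier, poll_seconds=30, max_polls=40):
    """Return the cluster properties once ClusterStatus is 'available'."""
    for _ in range(max_polls):
        props = redshift_client.describe_clusters(
            ClusterIdentifier=cluster_identifier
        )["Clusters"][0]
        if props["ClusterStatus"] == "available":
            return props
        time.sleep(poll_seconds)
    raise TimeoutError("Cluster did not become available in time")

# describe_dict = wait_for_cluster_available(redshift, DWH_CLUSTER_IDENTIFIER)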
print("DWH_ENDPOINT: ", describe_dict["Endpoint"]["Address"])
print("DWH_ROLE_ARN: ", describe_dict["IamRoles"][0]["IamRoleArn"])
print("Role ARN: ", roleArn)
DWH_ENDPOINT = describe_dict["Endpoint"]["Address"]
DWH_ROLE_ARN = describe_dict["IamRoles"][0]["IamRoleArn"]<jupyter_output>DWH_ENDPOINT: dwhcluster.ctcwpuiwdyvj.us-west-2.redshift.amazonaws.com
DWH_ROLE_ARN: arn:aws:iam::702717750718:role/dwhRole
Role ARN: arn:aws:iam::702717750718:role/dwhRole
<jupyter_text> 2.2 Take note of the cluster endpoint and role ARN. DO NOT RUN THIS unless the cluster status becomes "Available". ## STEP 3: Open an incoming TCP port to access the cluster endpoint<jupyter_code>defaultSg = ec2.describe_security_groups(
GroupNames=["default"]
)["SecurityGroups"][0]["GroupId"]
## https://us-west-1.console.aws.amazon.com/ec2/home?region=us-west-1#SecurityGroup:group-id=sg-2d57a950
responseSg = ec2.authorize_security_group_ingress(
IpProtocol="TCP",
CidrIp="0.0.0.0/0",
FromPort=int(DWH_PORT),
ToPort=int(DWH_PORT),
GroupId=defaultSg
# GroupName="defaultSg"
)<jupyter_output><empty_output><jupyter_text>## STEP 4: Make sure you can connect to the cluster
Connect to the cluster<jupyter_code>%load_ext sql
conn_string="postgresql://{}:{}@{}:{}/{}".format(DWH_DB_USER, DWH_DB_PASSWORD, DWH_ENDPOINT, DWH_PORT,DWH_DB)
print(conn_string)
%sql $conn_string
%%sql
SELECT *
FROM information_schema.tables
WHERE table_schema = 'public'<jupyter_output> * postgresql://dwhuser:***@dwhcluster.ctcwpuiwdyvj.us-west-2.redshift.amazonaws.com:5439/dwh
7 rows affected.
<jupyter_text>## STEP 5: Clean up your resources
DO NOT RUN THIS UNLESS YOU ARE SURE
We will be using these resources in the next exercises<jupyter_code>#### CAREFUL!!
#-- Uncomment & run to delete the created resources
redshift.delete_cluster( ClusterIdentifier=DWH_CLUSTER_IDENTIFIER, SkipFinalClusterSnapshot=True)
#### CAREFUL!!<jupyter_output><empty_output><jupyter_text>- run this block several times until the cluster is really deleted<jupyter_code>{k: v for (k, v) in describe_dict.items() if k in metadata_of_interest}
#### CAREFUL!!
#-- Uncomment & run to delete the created resources
iam.detach_role_policy(RoleName=DWH_IAM_ROLE_NAME, PolicyArn="arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess")
iam.delete_role(RoleName=DWH_IAM_ROLE_NAME)
#### CAREFUL!!<jupyter_output><empty_output>
|
no_license
|
/cloud_data_warehouses/.ipynb_checkpoints/L3 Exercise 2-checkpoint.ipynb
|
aaharrison/udacity-data-eng
| 11 |
<jupyter_start><jupyter_text># 50 Years of Music Trends
## Objective
* Analyze lyrics from billboard top 100 songs over 50 years to identify trends
* Statement: Have the sentiments of popular lyrics changed over time?
## Hypothesis
* Ha = the sentiments of popular lyrics have become more negative over time
* Ho = no change in the sentiments of popular lyrics over time
## Sources
* musixmatch source: https://developer.musixmatch.com/documentation/api-reference/track-lyrics-get
* musixmatch python: https://github.com/hudsonbrendon/python-musixmatch
* billboard python: https://github.com/guoguo12/billboard-charts<jupyter_code># Dependency library
import numpy as np
import pandas as pd
import random
from datetime import datetime, timedelta
import matplotlib.pyplot as plt
from scipy.stats import linregress
# API Calls
import billboard
from musixmatch import Musixmatch
# API Keys
from musixmatch_api import api_key
# Import and Initialize Sentiment Analyzer
from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer
analyzer = SentimentIntensityAnalyzer()
# Generate a (pseudo) random list of (almost all) dates in string format to fit musixmatch parameter
date_list = []
min_year = 1968
error_counter = 0
for i in range(50):
try:
# RANDOM date generation
month_rand = str(random.randint(1,12))
if len(month_rand) ==1:
# PAD single digit numbers with a leading 0
month_rand = month_rand.zfill(2)
day_rand = str(random.randint(1,28))
if len(day_rand) == 1:
day_rand = day_rand.zfill(2)
# STRINGIFY the result for the musixmatch parameter
date_rand = str(f'{min_year}-{month_rand}-{day_rand}')
# APPEND result to date_list
date_list.append(date_rand)
#INCREMENT the year
min_year = min_year + 1
except ValueError:
error_counter = error_counter + 1
# VIEW date_list object
date_list
# Return top 100 billboard songs for each date in random list generated above
# Note: Running this code takes approximately 2 minutes
billboard_list = 'hot-100'
col_names = ['Song','Artist','Date']
chart_df = pd.DataFrame(columns = col_names)
for date in date_list:
chart = billboard.ChartData(billboard_list,date)
for row in chart:
# EMPTY the list placeholder used to create the dataframe
chart_list = []
# CLEAN and convert the billboard object
chart_list.append(str(row).replace("'","",1))
temp_df = pd.DataFrame(chart_list)
temp_df = temp_df[0].str.split("' by ",expand=True)
temp_df = temp_df.rename(columns={0:"Song",1:"Artist"})
temp_df['Date'] = date
# APPEND the temp_df in the current loop location to the chart_df
chart_df = chart_df.append(temp_df)
# REMOVE duplicates and RESET index from the resulting dataframe
chart_df = chart_df.drop_duplicates().reset_index(drop=True)
# VIEW dataframe head
print(len(chart_df))
chart_df.head()
# SPLIT the date values in the dataframe for plotting and analysis purposes
chart_df[['Year', 'Month', 'Day']] = chart_df['Date'].str.split('-', expand=True)  # expand=True avoids the deprecated .str unpacking
# VIEW dataframe head
chart_df.head()
# Retrieve lyrics from MusixMatch API based on song and artist in above dataframe
# Running this code takes approximately 5 - 7 minutes
musixmatch = Musixmatch(api_key)
lyrics_list = []
error_counter = 0
# LOOP through the data frame and use song title and artist name to search for lyrics in musixmatch
for x in range(len(chart_df)):
# ERROR HANDLING in case a song queries returns 'null' from musixmatch
try:
# GRAB the lyrics based on location (iloc) in chart_df
song_search = chart_df.iloc[x,0]
artist_search = chart_df.iloc[x,1]
lyrics = musixmatch.matcher_lyrics_get(q_artist=artist_search,
q_track=song_search)['message']['body']['lyrics']['lyrics_body']
# FORMATTING to truncate the nonsense at the end of the lyrics from MusixMatch
song_length = len(lyrics)
endpoint = len("******* This Lyrics is NOT for Commercial use *******\n(1409617829201)")
lyrics = lyrics.replace("\n", " ")
lyrics = str(lyrics[:song_length-endpoint])
# APPEND lyrics to lyrics_list
lyrics_list.append(lyrics)
except:
error_counter = error_counter + 1
lyrics_list.append('MUSIXMATCH_NA')
# CREATE new column in chart_df
chart_df['Lyrics'] = lyrics_list
# VIEW dataframe head
chart_df.head()
# REMOVE blanks and errors from the dataframe
clean_chart_df = chart_df[(chart_df['Lyrics'] != "MUSIXMATCH_NA") & (chart_df['Lyrics'] != "")].reset_index(drop=True)
# VIEW dataframe head
clean_chart_df.head()
# clean_chart_df.count()
# Vader Sentiment Analysis conducted on each song in the dataframe
# INITIALIZE a list to hold the sentiments
lyrics_sentiments = []
# ANALYZE the list
for y in range(len(clean_chart_df)):
results = analyzer.polarity_scores(clean_chart_df.iloc[y,6])
compound = results["compound"]
pos = results["pos"]
neu = results["neu"]
neg = results["neg"]
lyrics_sentiments.append({"Compound": compound,
"Positive": pos,
"Negative": neg,
"Neutral": neu})
# CREATE a dataframe of sentiment analysis that will be appended to the chart_df
lyrics_sentiments_df = pd.DataFrame(lyrics_sentiments)
# APPEND new columns containing the sentiment analysis
clean_chart_df['Compound'] = lyrics_sentiments_df['Compound']
clean_chart_df['Positive'] = lyrics_sentiments_df['Positive']
clean_chart_df['Negative'] = lyrics_sentiments_df['Negative']
clean_chart_df['Neutral'] = lyrics_sentiments_df['Neutral']
# SAVE to a .csv output
clean_chart_df.to_csv('billboard_analysis.csv')
# VIEW resulting dataframe head
clean_chart_df.head()
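# Editor's sketch (not part of the original notebook): the same VADER scores can be produced
# without the manual index loop by applying the analyzer to the Lyrics column and expanding
# the resulting dicts into columns.
vader_scores = (
    clean_chart_df['Lyrics']
    .apply(analyzer.polarity_scores)   # one dict of compound/pos/neu/neg per song
    .apply(pd.Series)                  # expand the dicts into separate columns
)
vader_scores.head()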
# Create the pandas dataframe group in order to calculate averages by year
chart_group = clean_chart_df.groupby(["Year"]).mean()
chart_group_df = pd.DataFrame(chart_group).reset_index(drop=False)
chart_group_df
# TEST HISTOGRAM of negative sentiment results
plt.figure(figsize=(20,20))
plt.subplot(2, 1, 2)
plt.hist(chart_group_df['Negative'], 10, density=True, alpha=0.7, label="population1")
x,labels=plt.xticks()
labels=[0.03, 0.04, 0.05, 0.06, 0.07, 0.08, 0.09, 0.1 , 0.11]
plt.xticks(x,labels,fontsize=16)
plt.xticks(label=labels)
plt.yticks(fontsize=16)
plt.title("Distribution of Songs by Negative Sentiment",fontsize=20)
plt.savefig("histogram.png")
# Create a local x-axis
x_axis = np.arange(1968,2018)
print(len(x_axis))
print(len(chart_group_df['Negative']))
# Scatterplot of negative sentiment analysis
# Creates the regression line
(slope, intercept, r_value, p_value, std_err) = linregress(x_axis, chart_group_df['Negative'])
fit = slope * x_axis + intercept
r2 = r_value ** 2
# Sets up plot
fig, ax = plt.subplots(figsize=(20,10))
ax.set_xlabel("Years",fontsize=18)
ax.set_ylabel("Sentiment Analysis: Negative",fontsize=18)
ax.tick_params(labelsize=16)
ax.set_title(label="Negative Sentiment over Time",fontsize=24)
# Plots the data
ax.plot(x_axis, chart_group_df['Negative'], marker='o', color=('red'), linewidth=0.5)
ax.plot(x_axis, fit, 'b--')
plt.savefig("negative.png")
plt.show()  # note: plt.show without parentheses does not actually call the function
print(f'r = {r_value}')
print(f'r^2 = {r2}')
print(f'std err = {std_err}')
print(f'p-value = {p_value}')
print('49% of the variation in the dependent variable (negative sentiment score) is accounted for by the variation \
in the independent variable (time in years). We feel comfortable using R^2, because the data is approximately \
normally distributed based on the shape of the histogram (slightly skewed right). Additionally, the p-value \
is 0+ which is < 0.05. \
Conclusion: We reject the null hypothesis (Ho) in favor of the alternative. These two variables are strongly related')
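# Editor's sketch (not in the original notebook): the four sentiment plots in this section
# repeat the same regression-and-plot code; a small helper would remove the duplication.
def plot_sentiment_trend(years, series, label, color, outfile):
    """Fit a linear trend to a yearly sentiment series, plot it, and print the fit stats."""
    slope, intercept, r_value, p_value, std_err = linregress(years, series)
    fig, ax = plt.subplots(figsize=(20, 10))
    ax.set_xlabel("Years", fontsize=18)
    ax.set_ylabel(f"Sentiment Analysis: {label}", fontsize=18)
    ax.tick_params(labelsize=16)
    ax.set_title(label=f"{label} Sentiment over Time", fontsize=24)
    ax.plot(years, series, marker='o', color=color, linewidth=0.5)
    ax.plot(years, slope * years + intercept, 'b--')
    plt.savefig(outfile)
    plt.show()
    print(f'r = {r_value}, r^2 = {r_value ** 2}, std err = {std_err}, p-value = {p_value}')

# e.g. plot_sentiment_trend(x_axis, chart_group_df['Positive'], "Positive", "green", "positive.png")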
# Scatterplot of positive sentiment analysis
# Creates the regression line
(slope, intercept, r_value, p_value, std_err) = linregress(x_axis, chart_group_df['Positive'])
fit = slope * x_axis + intercept
r2 = r_value ** 2
# Sets up plot
fig, ax = plt.subplots(figsize=(20,10))
ax.set_xlabel("Years",fontsize=18)
ax.set_ylabel("Sentiment Analysis: Positive",fontsize=18)
ax.tick_params(labelsize=16)
ax.set_title(label="Positive Sentiment over Time",fontsize=24)
# Plots the data
ax.plot(x_axis, chart_group_df['Positive'], marker='o', color=('green'), linewidth=0.5)
ax.plot(x_axis, fit, 'b--')
plt.savefig("positive.png")
plt.show()
print(f'r = {r_value}')
print(f'r^2 = {r2}')
print(f'std err = {std_err}')
print(f'p-value = {p_value}')
# Scatterplot of neutral sentiment analysis
# Creates the regression line
(slope, intercept, r_value, p_value, std_err) = linregress(x_axis, chart_group_df['Neutral'])
fit = slope * x_axis + intercept
r2 = r_value ** 2
# Sets up plot
fig, ax = plt.subplots(figsize=(20,10))
ax.set_xlabel("Years",fontsize=18)
ax.set_ylabel("Sentiment Analysis: Neutral",fontsize=18)
ax.tick_params(labelsize=16)
ax.set_title(label="Neutral Sentiment over Time",fontsize=24)
# Plots the data
ax.plot(x_axis, chart_group_df['Neutral'], marker='o', color=('blue'), linewidth=0.5)
ax.plot(x_axis, fit, 'b--')
plt.savefig("neutral.png")
plt.show()
print(f'r = {r_value}')
print(f'r^2 = {r2}')
print(f'std err = {std_err}')
print(f'p-value = {p_value}')
# Scatterplot of compound sentiment analysis
# Creates the regression line
(slope, intercept, r_value, p_value, std_err) = linregress(x_axis, chart_group_df['Compound'])
fit = slope * x_axis + intercept
r2 = r_value ** 2
# Sets up plot
fig, ax = plt.subplots(figsize=(20,10))
ax.set_xlabel("Years",fontsize=18)
ax.set_ylabel("Sentiment Analysis: Compound",fontsize=18)
ax.tick_params(labelsize=16)
ax.set_title(label="Compound Sentiment over Time",fontsize=24)
# Plots the data
ax.plot(x_axis, chart_group_df['Compound'], marker='x', color=('black'), linewidth=0.5)
ax.plot(x_axis, fit, 'b--')
plt.savefig("compound.png")
plt.show()
print(f'r = {r_value}')
print(f'r^2 = {r2}')
print(f'std err = {std_err}')
print(f'p-value = {p_value}')<jupyter_output><empty_output>
|
no_license
|
/projects/Lyrics-Analyzer/Output/Test Samples/JB_Sample/.ipynb_checkpoints/Project_Notebook_FINAL_JB-checkpoint.ipynb
|
jeffreybox/GTATL201805DATA3
| 1 |
<jupyter_start><jupyter_text>### Test-train split<jupyter_code>feature_columns = 'average_stars review_count compliment_cool compliment_cute \
compliment_funny compliment_hot compliment_list compliment_more \
compliment_note compliment_photos compliment_plain cool fans \
funny rev_length rev_stars rev_use friend_count friend_label \
rev_count_label buss_star buss_review alpha_length length_lemmatize \
NN NNP NNS PDT POS PRP PRP$ RB RBR RBS RP VB \
VBD VBG VBN VBP VBZ WDT WP WRB'.split()
X = cleaned_data[feature_columns].copy()  # copy so the in-place replace/fillna below don't trigger SettingWithCopyWarning
X.replace(to_replace= 'NaN', value=np.nan, inplace=True)
X.fillna(value= 0, inplace=True)
y= cleaned_data['Faker']
X_train=X.values[:160,:]
X_test=X.values[160:,]
y_train =y.values[:160,]
y_test =y.values[160:,]
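# Editor's sketch (assumes a shuffled split is acceptable for this data): the slices above
# keep the first 160 rows for training; sklearn's train_test_split would shuffle and can
# stratify on the label so both sets keep the same class balance.
from sklearn.model_selection import train_test_split
X_train_alt, X_test_alt, y_train_alt, y_test_alt = train_test_split(
    X.values, y.values, test_size=0.2, stratify=y.values, random_state=40
)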
<jupyter_output>/home/titli/anaconda3/lib/python3.6/site-packages/pandas/core/frame.py:3795: SettingWithCopyWarning:
A value is trying to be set on a copy of a slice from a DataFrame
See the caveats in the documentation: http://pandas.pydata.org/pandas-docs/stable/indexing.html#indexing-view-versus-copy
method=method)
/home/titli/anaconda3/lib/python3.6/site-packages/pandas/core/frame.py:3787: SettingWithCopyWarning:
A value is trying to be set on a copy of a slice from a DataFrame
See the caveats in the documentation: http://pandas.pydata.org/pandas-docs/stable/indexing.html#indexing-view-versus-copy
downcast=downcast, **kwargs)
<jupyter_text>### Feature selection<jupyter_code>from sklearn.ensemble import RandomForestClassifier
RFcw = RandomForestClassifier(random_state=40, n_estimators = 50, min_samples_leaf=5, class_weight="balanced")
RFcw.fit(X_train, y_train)
# Feature importance
#forest = RFcw.best_estimator_.steps[-1][1]
imps = RFcw.feature_importances_
# Std dev due to all trees in the forest
std = np.std([tree.feature_importances_ for tree in RFcw.estimators_], axis=0)
# Index that sorts the features with most important first
indices = np.argsort(imps)[::-1]
for f in range(len(feature_columns)):
print(f+1, feature_columns[indices[f]], imps[indices[f]])
selected_feature = []
for i in range(20):
selected_feature.append(feature_columns[indices[i]])
<jupyter_output>1 PRP$ 0.09078578784594421
2 average_stars 0.0775395134909361
3 NN 0.06063926493135878
4 buss_review 0.05987820737856189
5 VB 0.05971976446595563
6 review_count 0.05706733732029779
7 rev_length 0.05095355081116822
8 VBD 0.05039938581656459
9 VBZ 0.04727061028771006
10 RB 0.039802843884653445
11 length_lemmatize 0.03722743532116322
12 PRP 0.037204700636598254
13 rev_stars 0.03366753528498414
14 NNS 0.03223270609026408
15 alpha_length 0.03171755626693837
16 VBG 0.03158601665967478
17 WRB 0.031040453129827304
18 VBN 0.01970257065924371
19 buss_star 0.019024365506568573
20 friend_count 0.016763321124830893
21 RP 0.01572484918826479
22 rev_use 0.015040555502168356
23 VBP 0.01351162025111022
24 rev_count_label 0.009282615290685692
25 compliment_funny 0.009172669308572233
26 cool 0.008406717354263342
27 friend_label 0.007913955599181727
28 WDT 0.0070361235106931016
29 POS 0.005554062762930463
30 funny 0.005427997432628426
31 compliment_plain 0.0051176937910191135
32 fans 0.0034094091042426644[...]<jupyter_text>### New test-train split based on selected features<jupyter_code>X_new = (cleaned_data[selected_feature])
X_new_train=X_new.values[:160,:]
X_new_test=X_new.values[160:,]
from sklearn.naive_bayes import GaussianNB
from sklearn.naive_bayes import MultinomialNB
from imblearn.pipeline import Pipeline as pl
from imblearn.under_sampling import RandomUnderSampler
steps = [("rus", RandomUnderSampler(random_state=30)),
("model", MultinomialNB(alpha=100))]
GNBrus1 = pl(steps)
GNBrus1.fit(X_new_train, y_train)
y_pred =GNBrus1.predict(X_new_test)
y_pred_train =GNBrus1.predict(X_new_train)
def confusion(predictions, actuals):
actuals=actuals.values[:,0] if isinstance(actuals,pd.DataFrame) else actuals
true_pos= (predictions==1) & (actuals==1)
true_pos.sum()
true_neg= (predictions==0) & (actuals==0)
true_neg.sum()
false_pos= (predictions==1) & (actuals==0)
false_pos.sum()
false_neg= (predictions==0) & (actuals==1)
false_neg.sum()
prec=true_pos.sum()/(true_pos.sum()+false_pos.sum())
accur=(true_pos.sum()+true_neg.sum())/(true_pos.sum()+false_pos.sum()+ \
true_neg.sum()+ false_neg.sum())
recall = true_pos.sum()/(true_pos.sum()+false_neg.sum())
F1=2*(prec*recall/(prec+recall))
return(true_pos.sum(), false_pos.sum(),false_neg.sum(),true_neg.sum(), accur,recall, prec, F1)
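# Editor's sketch (not in the original notebook): the hand-rolled counts above can be
# cross-checked against sklearn's built-in metrics.
from sklearn.metrics import confusion_matrix, precision_recall_fscore_support

def confusion_sklearn(predictions, actuals):
    """Return (tp, fp, fn, tn, accuracy, recall, precision, F1) computed with sklearn."""
    tn, fp, fn, tp = confusion_matrix(actuals, predictions, labels=[0, 1]).ravel()
    precision, recall, f1, _ = precision_recall_fscore_support(actuals, predictions, average="binary")
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    return tp, fp, fn, tn, accuracy, recall, precision, f1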
np.array((confusion(y_pred_train, y_train.reshape((1, 160)))))
confusion(y_pred, y_test.reshape((1, 40)))
from sklearn.ensemble import ExtraTreesClassifier
from sklearn.preprocessing import MinMaxScaler
from sklearn.model_selection import GridSearchCV
from imblearn.pipeline import Pipeline as pl
from imblearn.under_sampling import RandomUnderSampler
from sklearn.metrics import average_precision_score
PRAUC = []
for trees in [10, 20, 30, 40, 50, 60, 70, 80, 90, 100]:
ETcwiter = ExtraTreesClassifier(random_state=40, n_estimators = trees, min_samples_leaf=4, class_weight="balanced")
ETcwiter.fit(X_new_train, y_train)
PRAUC.append(average_precision_score(y_test, ETcwiter.predict_proba(X_new_test)[:,1]))
plt.scatter(x= [10, 20, 30, 40, 50, 60, 70, 80, 90, 100], y= PRAUC, marker='^', color='red')
plt.xlabel('No. of trees', fontsize=15)
plt.ylabel('average_precision_score',fontsize=15)
plt.title('Best estimator for no. of trees in extra tress',fontsize=15)
ETcwiter = ExtraTreesClassifier(random_state=40, n_estimators = 100, min_samples_leaf=4, class_weight="balanced")
ETcwiter.fit(X_new_train, y_train)
y_pred = ETcwiter.predict(X_new_test)
y_pred_train = ETcwiter.predict(X_new_train)
np.array((confusion(y_pred, y_test.reshape((1, 40)))))
np.array((confusion(y_pred_train, y_train.reshape((1, 160)))))<jupyter_output><empty_output>
|
no_license
|
/Capstone_project_fake_reviewers/Machine_learning/Naivebayes_extratree.ipynb
|
tannisthamaiti/Fake-reviewers-in-Yelp
| 3 |
<jupyter_start><jupyter_text>---
title: "Point-in-time joins with PySpark"
date: 2021-09-09
type: technical_note
draft: false
---
# Point-in-Time (PIT) joins in Hopsworks Feature Store
In order to create a training dataset, data scientists usually have to generate information about the future by putting themselves back in time, or travelling back in time.
Let's take the case of churn prediction: We want to predict which of our users will churn within the next few months. To train such a model we need to construct a training dataset containing rows for each of our users, one column indicating whether a user churned and X additional columns with features about the user, such as his historical purchases or interactions with the product.
Since we don't know the future yet (it's what we want to predict), to generate such prediction targets, we have to go back in time and determine which users churned in the last couple of months.
### Snapshot based time travel
If you have looked at our other time travel notebooks, the simplest solution to this problem would be to choose a single cutoff time for all our users. The time travel capabilities of the Feature Store with Apache Hudi make it easy enough to fetch data from a single point in time. This can work if features aren't frequently updated; however, when the prediction target events of different users lie far apart and features are updated frequently, you might leave a lot of information unused, hurting the accuracy of your model.
### Individual point-in-time correct cutoffs
A better approach uses all information up to the time the prediction target event happened. That means we still go back in time, say a maximum of 6 months, but we remember the prediction target event time stamp on a user level (row level) and find the latest values of our prediction features before this point in time. The following diagram illustrates this approach for one user:

A problem often occurring at this stage is the possibility of leaking information from the future (light red signals) into the training dataset. This means using feature signals which happened after the prediction target event has to be strictly avoided. What we want to retrieve from the feature store are the green, and the green only, feature signals.
In this case it is not as simple as performing a time travel query with a single time stamp. This challenge is solved by a point-in-time correct join instead.
**Point-in-time joins** prevent feature leakage by recreating the state of the world at a single point in time for every entity or primary key value (user in our case).
**Hopsworks Feature Store** abstracts this complexity away by simply telling it where to find the relevant event time stamps for feature groups. We will go through the process in the rest of the notebook.
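(Editor's aside, not from the original notebook: conceptually this is an "as-of" join. The short pandas sketch below shows the same backward-looking lookup with `pd.merge_asof`, using a few of the mock values that appear later in this notebook.)<jupyter_code># For every label row, take the latest feature row with ts <= the label ts, per customer:
# the "green" signals from the diagram above, never the future ones.
import pandas as pd

labels = pd.DataFrame({"customer_id": [1, 1], "ts": [10010, 10356]})
features = pd.DataFrame({"customer_id": [1, 1, 1],
                         "ts": [10010, 10352, 10753],
                         "coupon_14d": [1, 5, 0]})

pit_joined = pd.merge_asof(
    labels.sort_values("ts"),       # label (prediction target) events
    features.sort_values("ts"),     # feature updates over time
    on="ts",
    by="customer_id",
    direction="backward"            # only look into the past
)
print(pit_joined)<jupyter_output><empty_output><jupyter_text>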
## Event-time enabled Feature Groups
For the Feature Store to be able to perform a PIT join, we need to tell it where to find the event time stamp within each feature group. Event-time is a timestamp indicating the instant in time when an event happened at the source of the event, so this is *not* an ingestion time stamp or the like, but instead should originate from your source systems.
To "event-time enable" a feature group, you set the `event_time` argument at feature group creation. We are using simple Integers to indicate the timestamps, for better readability.
### For simplicity we will create three feature groups
* `marketing` with customer id, complaints in the last 7 days, outbound activities in the last 7 days and coupons received in the last 14 days;
* `contract` with customer id, contract_type;
* `churn` which will contain labels indicating whether the contract was cancelled (`1`) or not (`0`), with a timestamp of when the contract was created or cancelled; note that this feature group has a composite primary key of customer id and contract id, referring to the id in the contract feature group.<jupyter_code>import hsfs
from pyspark.sql import DataFrame, Row
from pyspark.sql.types import *
connection = hsfs.connection()
# get a reference to the feature store, you can access also shared feature stores by providing the feature store name
fs = connection.get_feature_store()
marketing_schema = StructType([
StructField("customer_id", IntegerType(), True),
StructField("ts", LongType(), True),
StructField("complaints_7d", IntegerType(), True),
StructField("outbound_7d", IntegerType(), True),
StructField("coupon_14d", IntegerType(), True)
])
contract_schema = StructType([
StructField("contract_id", IntegerType(), True),
StructField("ts", LongType(), True),
StructField("contract_type", StringType(), True)
])
churn_schema = StructType([
StructField("customer_id", IntegerType(), True),
StructField("contract_id", IntegerType(), True),
StructField("ts", LongType(), True),
StructField("contract_churn", IntegerType(), True)
])<jupyter_output><empty_output><jupyter_text>We will first load the Churn Feature Group with the initial contracts. We can assume there is a job running inserting new rows for every new contract into this feature group.<jupyter_code>new_contracts = [
Row(1, 100, 10010, 0),
Row(2, 101, 10017, 0),
Row(3, 102, 10035, 0),
Row(4, 103, 10023, 0),
Row(5, 104, 10546, 0),
Row(6, 105, 10213, 0),
Row(7, 106, 10056, 0),
Row(8, 107, 10012, 0)
]
new_contracts_df = spark.createDataFrame(new_contracts, churn_schema)<jupyter_output><empty_output><jupyter_text>At the same time some contracts will be cancelled and inserted into the feature group over time. We will perform this insertion with a bit of time difference, so that we can later demonstrate the capabilities of PIT joins together with time-travel queries.<jupyter_code>churned_contracts = [
Row(1, 100, 10356, 1),
Row(5, 104, 10692, 1),
Row(6, 105, 10375, 1),
Row(8, 107, 10023, 1)
]
churned_contracts_df = spark.createDataFrame(churned_contracts, churn_schema)<jupyter_output><empty_output><jupyter_text>Now let's create some mock data for the secondary feature groups, containing the actual explanatory features used to predict the churn.
The contract feature group gets new contracts appended with information about the contract itself only, such as the type of contract. Hence all timestamps are the same as the initial contract creation in the churn feature group.<jupyter_code>contracts = [
Row(100, 10010, "Long-term"),
Row(101, 10017, "Short-term"),
Row(102, 10035, "Trial"),
Row(103, 10023, "Short-term"),
Row(104, 10546, "Long-term"),
Row(105, 10213, "Trial"),
Row(106, 10056, "Long-term"),
Row(107, 10012, "Short-term")
]
contracts_df = spark.createDataFrame(contracts, contract_schema)<jupyter_output><empty_output><jupyter_text>The marketing activities feature group contains features related to outbound and inbound contacts to the customer. You can imagine these to be computed by a streaming application, updating the features every time new events arrive.
In the point in time join we want to get the latest of these updates just before or at the same time as the prediction event happened. Contracts can be in the training dataset twice, once when they haven't churned yet and then when they churned. The rows which should be picked for either of the target events are marked with a comment.
At the same time, always the latest state of the customer's marketing profile should be available in the online feature store.<jupyter_code>marketing = [
Row(1, 10010, 0, 0, 1), # this one
Row(1, 10174, 3, 0, 4),
Row(1, 10257, 7, 0, 3),
Row(1, 10352, 3, 0, 5), # this one
Row(1, 10753, 0, 0, 0),
Row(1, 10826, 0, 0, 1), # online feature store
Row(2, 10017, 0, 1, 1), # this one
Row(2, 10021, 0, 1, 1),
Row(2, 10034, 0, 1, 2), # online feature store
Row(3, 10035, 1, 3, 0), # this one
Row(3, 10275, 1, 2, 0),
Row(5, 10546, 0, 1, 0), # this one
Row(5, 10598, 2, 2, 1), # this one
Row(5, 13567, 0, 1, 0),
Row(5, 16245, 0, 1, 0), # online feature store
Row(6, 10213, 0, 0, 1), # this one
Row(6, 10234, 0, 5, 0), # this one
Row(6, 10436, 0, 0, 1), # online feature store
Row(7, 10056, 0, 0, 0), # this one
Row(7, 10056, 0, 1, 0),
Row(7, 10056, 0, 2, 1),
Row(7, 10056, 0, 3, 0), # online feature store
Row(8, 10012, 0, 0, 1), # this one
Row(8, 10023, 10, 0, 1), # this one
Row(8, 10033, 0, 0, 1), # online feature store
]
marketing_df = spark.createDataFrame(marketing, marketing_schema)<jupyter_output><empty_output><jupyter_text>### Create the feature groupsWe are now ready to create our three feature groups. Note the additional argument we are passing, in order to tell the Feature Store which column should be used as `event_time`.<jupyter_code>marketing_fg = fs.create_feature_group("marketing",
version=1,
description="Features about inbound/outbound communication with customers",
online_enabled=True,
statistics_config=False,
primary_key=["customer_id"],
event_time="ts")
marketing_fg.save(marketing_df)
contracts_fg = fs.create_feature_group("contracts",
version=1,
description="Contract information features",
online_enabled=True,
statistics_config=False,
primary_key=["contract_id"],
event_time="ts")
contracts_fg.save(contracts_df)
churn_fg = fs.create_feature_group("churn",
version=1,
description="Customer/contract information about activity of contract",
online_enabled=True,
statistics_config=False,
primary_key=["customer_id", "contract_id"],
event_time="ts")
churn_fg.save(new_contracts_df)<jupyter_output><empty_output><jupyter_text>We insert the churned contracts, in a separate upsert step.<jupyter_code>churn_fg.insert(churned_contracts_df)<jupyter_output>StorageWarning: The statistics are not enabled of feature group `churn`, with version `1`. No statistics computed.<jupyter_text>### Constructing a Point-in-Time Join
The Feature Store HSFS API comes with a `Query` abstraction. Operations such as `.join` or `.select` on a feature group return such `Query` objects. Queries are solely based on metadata up until they are saved as training datasets or read into Dataframes.
There are **two requirements** to construct a point in time join:
1. All feature groups have to be event time enabled. If there is no event timestamp available, you can fall back on creating an ingestion timestamp in your feature engineering pipeline.
2. The label (feature) group is the Feature Group that contains the column that is defined as the "label" column when creating the training dataset. The **left-most feature group has to be the label group**, meaning this is the feature group of which the timestamps will be used as reference timestamp against which the explaining features are joined (dark red dots in the diagram above). In our case this is the churn feature group.
So let's construct the join:<jupyter_code>query = churn_fg.select_all().join(contracts_fg.select(["contract_type"]), on=["contract_id"]) \
.join(marketing_fg.select(["complaints_7d", "outbound_7d", "coupon_14d"]), on=["customer_id"])<jupyter_output><empty_output><jupyter_text>You can print and look at the constructed SQL join query. However, there is no need to go into this complexity, as it will all be handled by the feature store.<jupyter_code>print(query.to_string())<jupyter_output>WITH right_fg0 AS (SELECT *
FROM (SELECT `fg2`.`customer_id`, `fg2`.`contract_id`, `fg2`.`ts`, `fg2`.`contract_churn`, `fg0`.`contract_type`, RANK() OVER (PARTITION BY `fg2`.`contract_id`, `fg2`.`ts` ORDER BY `fg0`.`ts` DESC) pit_rank_hopsworks
FROM `fg2` `fg2`
INNER JOIN `fg0` `fg0` ON `fg2`.`contract_id` = `fg0`.`contract_id` AND `fg2`.`ts` >= `fg0`.`ts`) NA
WHERE `pit_rank_hopsworks` = 1), right_fg1 AS (SELECT *
FROM (SELECT `fg2`.`customer_id`, `fg2`.`contract_id`, `fg2`.`ts`, `fg2`.`contract_churn`, `fg1`.`complaints_7d`, `fg1`.`outbound_7d`, `fg1`.`coupon_14d`, RANK() OVER (PARTITION BY `fg2`.`customer_id`, `fg2`.`ts` ORDER BY `fg1`.`ts` DESC) pit_rank_hopsworks
FROM `fg2` `fg2`
INNER JOIN `fg1` `fg1` ON `fg2`.`customer_id` = `fg1`.`customer_id` AND `fg2`.`ts` >= `fg1`.`ts`) NA
WHERE `pit_rank_hopsworks` = 1) (SELECT `right_fg0`.`customer_id`, `right_fg0`.`contract_id`, `right_fg0`.`ts`, `right_fg0`.`contract_churn`, `right_fg0`.`contract_type`, `right_fg1`.`complaints_7d`, `righ[...]<jupyter_text>As explained above, the Query itself does not perform any join yet until you call an action such as `.read` on it:<jupyter_code>query.read().show(20)<jupyter_output>+-----------+-----------+-----+--------------+-------------+-------------+-----------+----------+
|customer_id|contract_id| ts|contract_churn|contract_type|complaints_7d|outbound_7d|coupon_14d|
+-----------+-----------+-----+--------------+-------------+-------------+-----------+----------+
| 6| 105|10375| 1| Trial| 0| 5| 0|
| 5| 104|10692| 1| Long-term| 2| 2| 1|
| 5| 104|10546| 0| Long-term| 0| 1| 0|
| 2| 101|10017| 0| Short-term| 0| 1| 1|
| 7| 106|10056| 0| Long-term| 0| 0| 0|
| 8| 107|10023| 1| Short-term| 10| 0| 1|
| 1| 100|10356| 1| Long-term| 3| 0| 5|
| 1| [...]<jupyter_text>#### Filters on point-in-time queries
As with any other query, it is possible to apply filters to it.
For example, if you are planning to train only a model for a certain contract type:<jupyter_code>churn_fg.select_all().join(
contracts_fg.select(["contract_type"])
.filter(contracts_fg.contract_type == "Long-term")
, on=["contract_id"]) \
.join(marketing_fg.select(["complaints_7d", "outbound_7d", "coupon_14d"]), on=["customer_id"]).read().show(20)<jupyter_output>+-----------+-----------+-----+--------------+-------------+-------------+-----------+----------+
|customer_id|contract_id| ts|contract_churn|contract_type|complaints_7d|outbound_7d|coupon_14d|
+-----------+-----------+-----+--------------+-------------+-------------+-----------+----------+
| 5| 104|10692| 1| Long-term| 2| 2| 1|
| 5| 104|10546| 0| Long-term| 0| 1| 0|
| 7| 106|10056| 0| Long-term| 0| 0| 0|
| 1| 100|10356| 1| Long-term| 3| 0| 5|
| 1| 100|10010| 0| Long-term| 0| 0| 1|
+-----------+-----------+-----+--------------+-------------+-------------+-----------+----------+<jupyter_text>#### Combining time-travel and point-in-time join in one query
We performed a separate upsert on our churn feature group in order to be able to demonstrate time-travel, so let's look at the commit information:<jupyter_code>commit_details = churn_fg.commit_details()
print(commit_details)<jupyter_output>{1631088747000: {'committedOn': '20210908081227', 'rowsUpdated': 0, 'rowsInserted': 4, 'rowsDeleted': 0}, 1631088722000: {'committedOn': '20210908081202', 'rowsUpdated': 0, 'rowsInserted': 8, 'rowsDeleted': 0}}<jupyter_text>As you can see, there are two commits: (1) the initial commit, which inserted the eight contracts. And (2) the upsert commit with the additinal four churned contracts.
Now we would like to query the state of the Feature Store at the time before the churned contracts were ingested:<jupyter_code># getting the correct timestamp, we sort by the commit time and get the string representation of the commit time
committed_on = commit_details[sorted(commit_details)[0]]["committedOn"]
committed_on
contracts_fg.commit_details()
churn_fg.select_all().as_of(committed_on).read().show(20)<jupyter_output>+-----------+-----------+-----+--------------+
|customer_id|contract_id| ts|contract_churn|
+-----------+-----------+-----+--------------+
| 2| 101|10017| 0|
| 5| 104|10546| 0|
| 8| 107|10012| 0|
| 3| 102|10035| 0|
| 6| 105|10213| 0|
| 7| 106|10056| 0|
| 4| 103|10023| 0|
| 1| 100|10010| 0|
+-----------+-----------+-----+--------------+<jupyter_text>And combining this with our point-in-time join:<jupyter_code>query.as_of(committed_on).read().show(20)<jupyter_output>+-----------+-----------+-----+--------------+-------------+-------------+-----------+----------+
|customer_id|contract_id| ts|contract_churn|contract_type|complaints_7d|outbound_7d|coupon_14d|
+-----------+-----------+-----+--------------+-------------+-------------+-----------+----------+
| 5| 104|10546| 0| Long-term| 0| 1| 0|
| 2| 101|10017| 0| Short-term| 0| 1| 1|
| 7| 106|10056| 0| Long-term| 0| 0| 0|
| 1| 100|10010| 0| Long-term| 0| 0| 1|
| 6| 105|10213| 0| Trial| 0| 0| 1|
| 8| 107|10012| 0| Short-term| 0| 0| 1|
| 3| 102|10035| 0| Trial| 1| 3| 0|
+-----------+-------[...]<jupyter_text>### Creating the training dataset
Creating the training dataset is now as simple as initializing the metadata with `.create_training_dataset()` and subsequently persisting it and materializing the query with `.save()`.<jupyter_code>td = fs.create_training_dataset("churn_model",
version=1,
data_format="csv",
label=["contract_churn"])
td.save(query)<jupyter_output><empty_output><jupyter_text>#### Querying the online feature store
We can reproduce the query and get the latest feature vector for each contract from the online feature store now.
**NOTE:**
- Any filter applied at the time of creating the training dataset is not applied at serving time. It is the responsibility of the application performing the lookup to only provide contract ids which belong, for example, to a certain category. The reason is that we always want to get a vector back, whereas applying a filter might lead to no results.<jupyter_code>td.get_serving_vector({"customer_id": 1, "contract_id": 100})
td.get_serving_vector({"customer_id": 2, "contract_id": 101})<jupyter_output>[2, 101, 10017, 'Short-term', 0, 1, 2]
|
no_license
|
/notebooks/featurestore/hsfs/time_travel/point_in_time_join_python.ipynb
|
robzor92/hops-examples
| 16 |
<jupyter_start><jupyter_text># Example: Taxi Fare Prediction
https://www.kaggle.com/c/new-york-city-taxi-fare-prediction# [Assignment Objective]
- Imitate the example and practice handling the time column with the taxi fare prediction competition data# [Assignment Focus]
- Add the day of week and week of year features and observe their effect (In[4], Out[4], In[5], Out[5])
- Additionally add the yearly-cycle and weekly-cycle features (see the course material) and observe their effect (In[8], Out[8], In[9], Out[9]) <jupyter_code># All preparation before feature engineering
import pandas as pd
import numpy as np
import datetime
from sklearn.preprocessing import MinMaxScaler
from sklearn.model_selection import cross_val_score
from sklearn.linear_model import LinearRegression
from sklearn.ensemble import GradientBoostingRegressor
data_path = 'data/'
df = pd.read_csv(data_path + 'taxi_data1.csv')
train_Y = df['fare_amount']
df = df.drop(['fare_amount'] , axis=1)
df.head()
# Decompose the time feature using datetime
df['pickup_datetime'] = df['pickup_datetime'].apply(lambda x: datetime.datetime.strptime(x, '%Y-%m-%d %H:%M:%S UTC'))
df['pickup_year'] = df['pickup_datetime'].apply(lambda x: datetime.datetime.strftime(x, '%Y')).astype('int64')
df['pickup_month'] = df['pickup_datetime'].apply(lambda x: datetime.datetime.strftime(x, '%m')).astype('int64')
df['pickup_day'] = df['pickup_datetime'].apply(lambda x: datetime.datetime.strftime(x, '%d')).astype('int64')
df['pickup_hour'] = df['pickup_datetime'].apply(lambda x: datetime.datetime.strftime(x, '%H')).astype('int64')
df['pickup_minute'] = df['pickup_datetime'].apply(lambda x: datetime.datetime.strftime(x, '%M')).astype('int64')
df['pickup_second'] = df['pickup_datetime'].apply(lambda x: datetime.datetime.strftime(x, '%S')).astype('int64')
df.head()
# Evaluate the result with linear regression / gradient boosting trees
df_temp = df.drop(['pickup_datetime'] , axis=1)
scaler = MinMaxScaler()
train_X = scaler.fit_transform(df_temp)
Linear = LinearRegression()
print(f'Linear Reg Score : {cross_val_score(Linear, train_X, train_Y, cv=5).mean()}')
GDBT = GradientBoostingRegressor()
print(f'Gradient Boosting Reg Score : {cross_val_score(GDBT, train_X, train_Y, cv=5).mean()}')<jupyter_output>Linear Reg Score : 0.026876871475640173
Gradient Boosting Reg Score : 0.7115447416761242
<jupyter_text># Exercise 1
* Following the example, try adding the day of week and week of year features,
and see whether the result is better or worse than using only the decomposed time features.
> After adding day-of-week and week-of-year, the result looks slightly worse, which may suggest that week-of-year provides little information.
However, combined with the daily-cycle feature the score improves, possibly because weekday/weekend (day of week) routines together with the daily cycle correlate with the fare.<jupyter_code># For the syntax, see https://docs.python.org/3/library/datetime.html
df['pickup_dow'] = df['pickup_datetime'].apply(lambda x: datetime.datetime.strftime(x, '%w')).astype('int64')
df['pickup_woy'] = df['pickup_datetime'].apply(lambda x: datetime.datetime.strftime(x, '%W')).astype('int64')
df.head()
# Evaluate the result with linear regression / gradient boosting trees
df_temp = df.drop(['pickup_datetime'] , axis=1)
train_X = scaler.fit_transform(df_temp)
print(f'Linear Reg Score : {cross_val_score(Linear, train_X, train_Y, cv=5).mean()}')
print(f'Gradient Boosting Reg Score : {cross_val_score(GDBT, train_X, train_Y, cv=5).mean()}')
# Add the "daily cycle" feature (see the lecture notes on cyclical features)
import math
df['day_cycle'] = df['pickup_hour']/12 + df['pickup_minute']/720 + df['pickup_second']/43200
df['day_cycle'] = df['day_cycle'].map(lambda x:math.sin(x*math.pi))
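# Editor's aside (sketch, not part of the original assignment): a single sine maps two
# different times of day to the same value; encoding the cycle as a (sin, cos) pair keeps
# every position on the 24-hour clock distinguishable.
day_fraction = df['pickup_hour']/24 + df['pickup_minute']/1440 + df['pickup_second']/86400
df['day_cycle_sin'] = (2 * math.pi * day_fraction).map(math.sin)
df['day_cycle_cos'] = (2 * math.pi * day_fraction).map(math.cos)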
df.head()
# Evaluate the result with linear regression / gradient boosting trees
df_temp = df.drop(['pickup_datetime'] , axis=1)
train_X = scaler.fit_transform(df_temp)
print(f'Linear Reg Score : {cross_val_score(Linear, train_X, train_Y, cv=5).mean()}')
print(f'Gradient Boosting Reg Score : {cross_val_score(GDBT, train_X, train_Y, cv=5).mean()}')<jupyter_output>Linear Reg Score : 0.02827400966881246
Gradient Boosting Reg Score : 0.7126037331189518
<jupyter_text># Exercise 2
* Following the daily-cycle example, try building the yearly-cycle and weekly-cycle features from the slides (or with any approach of your own),
and see whether the result is better or worse than the example's.
> Judging from the results, adding the yearly and weekly cycles improves the accuracy noticeably,
which may indicate that seasonal temperature is indeed related to the fare as well (more taxi rides in hot weather?)<jupyter_code># Add the "yearly cycle" and "weekly cycle" features
df['year_cycle'] = df['pickup_month']/6 + df['pickup_day']/180
df['year_cycle'] = df['year_cycle'].map(lambda x:math.cos(x*math.pi))
df['week_cycle'] = df['pickup_dow']/3.5 + df['pickup_hour']/84
df['week_cycle'] = df['week_cycle'].map(lambda x:math.sin(x*math.pi))
df.head()
# Evaluate the result with linear regression / gradient boosting trees
df_temp = df.drop(['pickup_datetime'] , axis=1)
train_X = scaler.fit_transform(df_temp)
print(f'Linear Reg Score : {cross_val_score(Linear, train_X, train_Y, cv=5).mean()}')
print(f'Gradient Boosting Reg Score : {cross_val_score(GDBT, train_X, train_Y, cv=5).mean()}')<jupyter_output>Linear Reg Score : 0.028326599866693947
Gradient Boosting Reg Score : 0.7154288505120123
|
no_license
|
/answers/Day_025_DayTime_Features_Ans.ipynb
|
Julian-Chu/2nd-ML100Days
| 3 |