content (stringlengths 73–1.12M) | license (stringclasses 3 values) | path (stringlengths 9–197) | repo_name (stringlengths 7–106) | chain_length (int64 1–144) |
---|---|---|---|---|
<jupyter_start><jupyter_text>
# Rotating 3D wireframe plot
A very simple 'animation' of a 3D plot. See also rotate_axes3d_demo.
<jupyter_code>from __future__ import print_function
from mpl_toolkits.mplot3d import axes3d
import matplotlib.pyplot as plt
import numpy as np
import time
def generate(X, Y, phi):
'''
Generates Z data for the points in the X, Y meshgrid and parameter phi.
'''
R = 1 - np.sqrt(X**2 + Y**2)
return np.cos(2 * np.pi * X + phi) * R
fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
# Make the X, Y meshgrid.
xs = np.linspace(-1, 1, 50)
ys = np.linspace(-1, 1, 50)
X, Y = np.meshgrid(xs, ys)
# Set the z axis limits so they aren't recalculated each frame.
ax.set_zlim(-1, 1)
# Begin plotting.
wframe = None
tstart = time.time()
for phi in np.linspace(0, 180. / np.pi, 100):
# If a line collection is already present, remove it before drawing.
if wframe:
ax.collections.remove(wframe)
# Plot the new wireframe and pause briefly before continuing.
Z = generate(X, Y, phi)
wframe = ax.plot_wireframe(X, Y, Z, rstride=2, cstride=2)
plt.pause(.001)
print('Average FPS: %f' % (100 / (time.time() - tstart)))<jupyter_output><empty_output>
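As a hedged aside (not part of the downloaded example), the same rotation could also be driven by `matplotlib.animation.FuncAnimation` instead of the manual pause loop; this is only a sketch:

```python
from mpl_toolkits.mplot3d import axes3d  # registers the 3d projection
import matplotlib.pyplot as plt
import numpy as np
from matplotlib.animation import FuncAnimation

def generate(X, Y, phi):
    R = 1 - np.sqrt(X**2 + Y**2)
    return np.cos(2 * np.pi * X + phi) * R

X, Y = np.meshgrid(np.linspace(-1, 1, 50), np.linspace(-1, 1, 50))
fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')

def update(phi):
    # Clear and redraw the wireframe for the current phase value.
    ax.clear()
    ax.set_zlim(-1, 1)
    ax.plot_wireframe(X, Y, generate(X, Y, phi), rstride=2, cstride=2)

ani = FuncAnimation(fig, update, frames=np.linspace(0, 180. / np.pi, 100), interval=10)
plt.show()
```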
| no_license | /2.2.3/_downloads/wire3d_animation.ipynb | matplotlib/matplotlib.github.com | 1 |
<jupyter_start><jupyter_text># Multiclass Text Classification Using BERT and Keras
In this example, we will use ***ktrain*** ([a lightweight wrapper around Keras](https://github.com/amaiya/ktrain)) to build a model using the dataset employed in the **scikit-learn** tutorial: [Working with Text Data](https://scikit-learn.org/stable/tutorial/text_analytics/working_with_text_data.html). As in the tutorial, we will sample 4 newsgroups to create a relatively small multiclass text classification dataset. The objective is to accurately classify each document into one of these four newsgroup topic categories. This will give us an opportunity to see **BERT** in action on a relatively small training set. Let's fetch the [20newsgroups dataset](http://qwone.com/~jason/20Newsgroups/) using scikit-learn.<jupyter_code># fetch the dataset using scikit-learn
categories = ['alt.atheism', 'soc.religion.christian',
'comp.graphics', 'sci.med']
from sklearn.datasets import fetch_20newsgroups
train_b = fetch_20newsgroups(subset='train',
categories=categories, shuffle=True, random_state=42)
test_b = fetch_20newsgroups(subset='test',
categories=categories, shuffle=True, random_state=42)
print('size of training set: %s' % (len(train_b['data'])))
print('size of validation set: %s' % (len(test_b['data'])))
print('classes: %s' % (train_b.target_names))
x_train = train_b.data
y_train = train_b.target
x_test = test_b.data
y_test = test_b.target<jupyter_output>Downloading 20news dataset. This may take a few minutes.
Downloading dataset from https://ndownloader.figshare.com/files/5975967 (14 MB)
<jupyter_text>## STEP 1: Load and Preprocess the Data
Preprocess the data using the `texts_from_array` function (since the data resides in an array).
If your documents are stored in folders or a CSV file, you can use the `texts_from_folder` or `texts_from_csv` functions, respectively.<jupyter_code># ktrain and its text module must be imported first
import ktrain
from ktrain import text
(x_train, y_train), (x_test, y_test), preproc = text.texts_from_array(x_train=x_train, y_train=y_train,
x_test=x_test, y_test=y_test,
class_names=train_b.target_names,
preprocess_mode='bert',
maxlen=350,
max_features=35000)<jupyter_output>task: text classification
downloading pretrained BERT model (uncased_L-12_H-768_A-12.zip)...
[██████████████████████████████████████████████████]
extracting pretrained BERT model...
done.
cleanup downloaded zip...
done.
preprocessing train...
language: en
<jupyter_text>## STEP 2: Load the BERT Model and Instantiate a Learner object<jupyter_code># you can disregard the deprecation warnings arising from using Keras 2.2.4 with TensorFlow 1.14.
model = text.text_classifier('bert', train_data=(x_train, y_train), preproc=preproc)
learner = ktrain.get_learner(model, train_data=(x_train, y_train), batch_size=6)<jupyter_output>Is Multi-Label? False
maxlen is 350
done.
<jupyter_text>## STEP 3: Train the Model
We train using one of the three learning rates recommended in the BERT paper: *5e-5*, *3e-5*, or *2e-5*.
Alternatively, the ktrain Learning Rate Finder can be used to find a good learning rate by invoking `learner.lr_find()` and `learner.lr_plot()`, prior to training.
The `learner.fit_onecycle` method employs a [1cycle learning rate policy](https://arxiv.org/pdf/1803.09820.pdf).
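A minimal sketch (not executed in this notebook) of how the Learning Rate Finder mentioned above can be called on the `learner` object before training:

```python
# Sketch only: run the LR range test, inspect the plot, then pick a rate
# just before the loss starts to diverge (methods of the ktrain Learner API).
learner.lr_find()   # briefly trains while increasing the learning rate
learner.lr_plot()   # plots loss vs. learning rate
```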
<jupyter_code>learner.fit_onecycle(2e-5, 4)<jupyter_output>
begin training using onecycle policy with max lr of 2e-05...
Train on 2257 samples
Epoch 1/4
2257/2257 [==============================] - 486s 215ms/sample - loss: 0.6799 - accuracy: 0.7288
Epoch 2/4
2257/2257 [==============================] - 462s 205ms/sample - loss: 0.1436 - accuracy: 0.9539
Epoch 3/4
2257/2257 [==============================] - 462s 205ms/sample - loss: 0.0482 - accuracy: 0.9858
Epoch 4/4
2257/2257 [==============================] - 463s 205ms/sample - loss: 0.0101 - accuracy: 0.9982
<jupyter_text>We can use the `learner.validate` method to test our model against the validation set.
As we can see, BERT achieves a **96%** accuracy, which is quite a bit higher than the 91% accuracy achieved by SVM in the [scikit-learn tutorial](https://scikit-learn.org/stable/tutorial/text_analytics/working_with_text_data.html).<jupyter_code>learner.validate(val_data=(x_test, y_test), class_names=train_b.target_names)<jupyter_output> precision recall f1-score support
alt.atheism 0.95 0.92 0.93 319
comp.graphics 0.98 0.98 0.98 389
sci.med 0.98 0.96 0.97 396
soc.religion.christian 0.94 0.98 0.96 398
accuracy 0.96 1502
macro avg 0.96 0.96 0.96 1502
weighted avg 0.96 0.96 0.96 1502
<jupyter_text>## How to Use Our Trained BERT Model
We can call the `learner.get_predictor` method to obtain a Predictor object capable of making predictions on new raw data.<jupyter_code>predictor = ktrain.get_predictor(learner.model, preproc)
predictor.get_classes()
predictor.predict(test_b.data[0:1])
# we can visually verify that our prediction of 'sci.med' for this document is correct
print(test_b.data[0])
# we predicted the correct label
print(test_b.target_names[test_b.target[0]])<jupyter_output>sci.med
<jupyter_text>The `predictor.save` and `ktrain.load_predictor` methods can be used to save the Predictor object to disk and reload it at a later time to make predictions on new data.<jupyter_code># let's save the predictor for later use
predictor.save('/tmp/my_predictor')
# reload the predictor
reloaded_predictor = ktrain.load_predictor('/tmp/my_predictor')
# make a prediction on the same document to verify it still works
reloaded_predictor.predict(test_b.data[0:1])<jupyter_output><empty_output>
| no_license | /p15/김규열_bert.ipynb | Cerebro114/2020BigData | 7 |
<jupyter_start><jupyter_text># Visualizing your data
```python
# Histograms
import matplotlib.pyplot as plt
dog_pack["height_cm"].hist(bins=20)
# Bar plots
avg_weight_by_breed = dog_pack.groupby("breed")["weight_kg"].mean()
avg_weight_by_breed.plot(kind="bar", title="Mean Weight by Dog Breed")
# Line plots
sully.head()
sully.plot(x="date", y="weight_kg", kind="line")
# Rotating axis labels
sully.plot(x="date", y="weight_kg", kind="line", rot=45)
# Scatter plots
dog_pack.plot(x="height_cm", y="weight_kg", kind="scatter")
# Layering plots
dog_pack[dog_pack["sex"]=="F"]["height_cm"].hist()
dog_pack[dog_pack["sex"]=="M"]["height_cm"].hist()
# Add a legend
plt.legend(["F", "M"])
# Transparency
dog_pack[dog_pack["sex"]=="F"]["height_cm"].hist(alpha=0.7)
dog_pack[dog_pack["sex"]=="M"]["height_cm"].hist(alpha=0.7)
plt.legend(["F", "M"])
```[Which avocado size is most popular? | Python](https://campus.datacamp.com/courses/data-manipulation-with-pandas/creating-and-visualizing-dataframes?ex=2)
> ## Which avocado size is most popular?
>
> Avocados are increasingly popular and delicious in guacamole and on toast. The Hass Avocado Board keeps track of avocado supply and demand across the USA, including the sales of three different sizes of avocado. In this exercise, you'll use a bar plot to figure out which size is the most popular.
>
> Bar plots are great for revealing relationships between categorical (size) and numeric (number sold) variables, but you'll often have to manipulate your data first in order to get the numbers you need for plotting.
>
> `pandas` has been imported as `pd`, and `avocados` is available.### init<jupyter_code>###################
##### Dataframe
###################
#upload and download
from downloadfromFileIO import saveFromFileIO
""" à executer sur datacamp: (apres copie du code uploadfromdatacamp.py)
uploadToFileIO(avocados)
"""
tobedownloaded="""
{pandas.core.frame.DataFrame: {'avocados.csv': 'https://file.io/5TNo5ROfaGwW'}}
"""
prefixToc='1.1'
prefix = saveFromFileIO(tobedownloaded, prefixToc=prefixToc, proxy="")
#initialisation
import pandas as pd
avocados = pd.read_csv(prefix+'avocados.csv',index_col=0)<jupyter_output>Téléchargements à lancer
{'pandas.core.frame.DataFrame': {'avocados.csv': 'https://file.io/5TNo5ROfaGwW'}}
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 54270 0 54270 0 0 73238 0 --:--:-- --:--:-- --:--:-- 73337
<jupyter_text>### code> - Print the head of the `avocados` dataset. _What columns are available?_
> - For each avocado size group, calculate the total number sold, storing as `nb_sold_by_size`.
> - Create a bar plot of the number of avocados sold by size.
> - Show the plot.<jupyter_code># Import matplotlib.pyplot with alias plt
import matplotlib.pyplot as plt
# Look at the first few rows of data
print(avocados.head())
# Get the total number of avocados sold of each size
nb_sold_by_size = avocados.groupby('size')['nb_sold'].sum()
# Create a bar plot of the number of avocados sold by size
nb_sold_by_size.plot(kind='bar')
# Show the plot
plt.show()<jupyter_output> date type year avg_price size nb_sold
0 2015-12-27 conventional 2015 0.95 small 9626901.09
1 2015-12-20 conventional 2015 0.98 small 8710021.76
2 2015-12-13 conventional 2015 0.93 small 9855053.66
3 2015-12-06 conventional 2015 0.89 small 9405464.36
4 2015-11-29 conventional 2015 0.99 small 8094803.56
<jupyter_text>[Changes in sales over time | Python](https://campus.datacamp.com/courses/data-manipulation-with-pandas/creating-and-visualizing-dataframes?ex=3)
> ## Changes in sales over time
>
> Line plots are designed to visualize the relationship between two numeric variables, where each data value is connected to the next one. They are especially useful for visualizing the change in a number over time since each time point is naturally connected to the next time point. In this exercise, you'll visualize the change in avocado sales over three years.
>
> `pandas` has been imported as `pd`.> - Get the total number of avocados sold on each date. _The DataFrame has two rows for each date -- one for organic, and one for conventional_. Save this as `nb_sold_by_date`.
> - Create a line plot of the number of avocados sold.
> - Show the plot.<jupyter_code># Import matplotlib.pyplot with alias plt
import matplotlib.pyplot as plt
# Get the total number of avocados sold on each date
nb_sold_by_date = avocados.groupby('date')['nb_sold'].sum()
# Create a line plot of the number of avocados sold by date
nb_sold_by_date.plot(kind='line')
# Show the plot
plt.show()<jupyter_output><empty_output><jupyter_text>[Avocado supply and demand | Python](https://campus.datacamp.com/courses/data-manipulation-with-pandas/creating-and-visualizing-dataframes?ex=4)
> ## Avocado supply and demand
>
> Scatter plots are ideal for visualizing relationships between numerical variables. In this exercise, you'll compare the number of avocados sold to average price and see if they're at all related. If they're related, you may be able to use one number to predict the other.
>
> `matplotlib.pyplot` has been imported as `plt` and `pandas` has been imported as `pd`.> - Create a scatter plot with `nb_sold` on the x-axis and `avg_price` on the y-axis. Title it `"Number of avocados sold vs. average price"`.
> - Show the plot.<jupyter_code>plt.rcParams['figure.figsize'] = [11, 7]
plt.style.use('fivethirtyeight')
# Scatter plot of nb_sold vs avg_price with title
avocados.plot(kind='scatter', x='nb_sold', y='avg_price', title='Number of avocados sold vs. average price')
# Show the plot
plt.show()<jupyter_output><empty_output><jupyter_text>[Price of conventional vs. organic avocados | Python](https://campus.datacamp.com/courses/data-manipulation-with-pandas/creating-and-visualizing-dataframes?ex=5)
> ## Price of conventional vs. organic avocados
>
> Creating multiple plots for different subsets of data allows you to compare groups. In this exercise, you'll create multiple histograms to compare the prices of conventional and organic avocados.
>
> `matplotlib.pyplot` has been imported as `plt` and `pandas` has been imported as `pd`.> - Subset `avocados` for the conventional type, and the average price column. Create a histogram.
> - Create a histogram of `avg_price` for organic type avocados.
> - Add a legend to your plot, with the names "conventional" and "organic".
> - Show your plot.<jupyter_code># Histogram of conventional avg_price
avocados[avocados.type=='conventional']['avg_price'].hist()
# Histogram of organic avg_price
avocados[avocados.type=='organic']['avg_price'].hist()
# Add a legend
plt.legend(['conventional', 'organic'])
# Show the plot
plt.show()<jupyter_output><empty_output><jupyter_text>> Modify your code to adjust the transparency of both histograms to `0.5` to see how much overlap there is between the two distributions.<jupyter_code># Histogram of conventional avg_price
avocados[avocados.type=='conventional']['avg_price'].hist(alpha=0.5)
# Histogram of organic avg_price
avocados[avocados.type=='organic']['avg_price'].hist(alpha=0.5)
# Add a legend
plt.legend(['conventional', 'organic'])
# Show the plot
plt.show()<jupyter_output><empty_output><jupyter_text>> Modify your code to use 20 bins in both histograms.<jupyter_code># Histogram of conventional avg_price
avocados[avocados.type=='conventional']['avg_price'].hist(bins=20, alpha=0.5)
# Histogram of organic avg_price
avocados[avocados.type=='organic']['avg_price'].hist(bins=20, alpha=0.5)
# Add a legend
plt.legend(['conventional', 'organic'])
# Show the plot
plt.show()<jupyter_output><empty_output><jupyter_text># Missing values
```python
# Detecting missing values
dogs.isna()
# Detecting any missing values
dogs.isna().any()
# Counting missing values
dogs.isna().sum()
# Plotting missing values
import matplotlib.pyplot as plt
dogs.isna().sum().plot(kind="bar")
plt.show()
# Removing rows containing missing values
dogs.dropna()
# Replacing missing values
dogs.fillna(0)
```[Finding missing values | Python](https://campus.datacamp.com/courses/data-manipulation-with-pandas/creating-and-visualizing-dataframes?ex=7)
> ## Finding missing values
>
> Missing values are everywhere, and you don't want them interfering with your work. Some functions ignore missing data by default, but that's not always the behavior you might want. Some functions can't handle missing values at all, so these values need to be taken care of before you can use them. If you don't know where your missing values are, or if they exist, you could make mistakes in your analysis. In this exercise, you'll determine if there are missing values in the dataset, and if so, how many.
>
> `pandas` has been imported as `pd` and `avocados_2016`, a subset of `avocados` that contains only sales from 2016, is available.### init<jupyter_code>avocados_2016 = avocados[avocados.year == 2016].reset_index(drop=True)
###################
##### Dataframe
###################
#upload and download
from downloadfromFileIO import saveFromFileIO
""" à executer sur datacamp: (apres copie du code uploadfromdatacamp.py)
uploadToFileIO(avocados_2016)
"""
tobedownloaded="""
{pandas.core.frame.DataFrame: {'avocados_2016.csv': 'https://file.io/4PnDNYnECG9L'}}
"""
prefixToc='2.1'
prefix = saveFromFileIO(tobedownloaded, prefixToc=prefixToc, proxy="")
#initialisation
import pandas as pd
avocados_2016 = pd.read_csv(prefix+'avocados_2016.csv',index_col=0)<jupyter_output>Téléchargements à lancer
{'pandas.core.frame.DataFrame': {'avocados_2016.csv': 'https://file.io/4PnDNYnECG9L'}}
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 5509 0 5509 0 0 9120 0 --:--:-- --:--:-- --:--:-- 9105
<jupyter_text>> - Print a DataFrame that shows whether each value in `avocados_2016` is missing or not.
> - Print a summary that shows whether _any_ value in each column is missing or not.
> - Create a bar plot of the total number of missing values in each column.<jupyter_code># Import matplotlib.pyplot with alias plt
import matplotlib.pyplot as plt
# Check individual values for missing values
print(avocados_2016.isna())
# Check each column for missing values
print(avocados_2016.isna().any())
# Bar plot of missing values by variable
avocados_2016.isna().sum().plot(kind="bar")
# Show plot
plt.show()<jupyter_output> date avg_price total_sold small_sold large_sold xl_sold \
0 False False False False False False
1 False False False False False False
2 False False False False True False
3 False False False False False False
4 False False False False False True
5 False False False True False False
6 False False False False False False
7 False False False False True False
8 False False False False False False
9 False False False False False False
10 False False False False True False
11 False False False False False False
12 False False False False False False
13 False False False [...]<jupyter_text>[Removing missing values | Python](https://campus.datacamp.com/courses/data-manipulation-with-pandas/creating-and-visualizing-dataframes?ex=8)
> ## Removing missing values
>
> Now that you know there are some missing values in your DataFrame, you have a few options to deal with them. One way is to remove them from the dataset completely. In this exercise, you'll remove missing values by removing all rows that contain missing values.
>
> `pandas` has been imported as `pd` and `avocados_2016` is available.> - Remove the rows of `avocados_2016` that contain missing values and store the remaining rows in `avocados_complete`.
> - Verify that all missing values have been removed from `avocados_complete`. Calculate each column that has NAs and print.<jupyter_code># Remove rows with missing values
avocados_complete = avocados_2016.dropna()
# Check if any columns contain missing values
print(avocados_complete.isna().any())<jupyter_output>date False
avg_price False
total_sold False
small_sold False
large_sold False
xl_sold False
total_bags_sold False
small_bags_sold False
large_bags_sold False
xl_bags_sold False
dtype: bool
<jupyter_text>[Replacing missing values | Python](https://campus.datacamp.com/courses/data-manipulation-with-pandas/creating-and-visualizing-dataframes?ex=9)
> ## Replacing missing values
>
> Another way of handling missing values is to replace them all with the same value. For numerical variables, one option is to replace values with 0— you'll do this here. However, when you replace missing values, you make assumptions about what a missing value means. In this case, you will assume that a missing number sold means that no sales for that avocado type were made that week.
>
> In this exercise, you'll see how replacing missing values can affect the distribution of a variable using histograms. You can plot histograms for multiple variables at a time as follows:
>
> dogs[["height_cm", "weight_kg"]].hist()
>
>
> `pandas` has been imported as `pd` and `matplotlib.pyplot` has been imported as `plt`. The `avocados_2016` dataset is available.> - A list has been created, `cols_with_missing`, containing the names of columns with missing values: `"small_sold"`, `"large_sold"`, and `"xl_sold"`.
> - Create a histogram of those columns.
> - Show the plot.<jupyter_code># List the columns with missing values
cols_with_missing = ["small_sold", "large_sold", "xl_sold"]
# Create histograms showing the distributions cols_with_missing
avocados_2016[cols_with_missing].hist()
# Show the plot
plt.show()<jupyter_output><empty_output><jupyter_text>> - Replace the missing values of `avocados_2016` with `0`s and store the result as `avocados_filled`.
> - Create a histogram of the `cols_with_missing` columns of `avocados_filled`.<jupyter_code># Fill in missing values with 0
avocados_filled = avocados_2016.fillna(0)
# Create histograms of the filled columns
avocados_filled[cols_with_missing].hist()
# Show the plot
plt.show()<jupyter_output><empty_output><jupyter_text># Creating DataFrames[List of dictionaries | Python](https://campus.datacamp.com/courses/data-manipulation-with-pandas/creating-and-visualizing-dataframes?ex=11)
> ## List of dictionaries
>
> You recently got some new avocado data from 2019 that you'd like to put in a DataFrame using the list of dictionaries method. Remember that with this method, you go through the data row by row.
>
> 
>
> `pandas` as `pd` is imported.> - Create a list of dictionaries with the new data called `avocados_list`.
> - Convert the list into a DataFrame called `avocados_2019`.
> - Print your new DataFrame.<jupyter_code># Create a list of dictionaries with new data
avocados_list = [
{'date': '2019-11-03', 'small_sold': 10376832, 'large_sold': 7835071},
{'date': '2019-11-10', 'small_sold': 10717154, 'large_sold': 8561348},
]
# Convert list into DataFrame
avocados_2019 = pd.DataFrame(avocados_list)
# Print the new DataFrame
print(avocados_2019)<jupyter_output> date small_sold large_sold
0 2019-11-03 10376832 7835071
1 2019-11-10 10717154 8561348
<jupyter_text>[Dictionary of lists | Python](https://campus.datacamp.com/courses/data-manipulation-with-pandas/creating-and-visualizing-dataframes?ex=12)
> ## Dictionary of lists
>
> Some more data just came in! This time, you'll use the dictionary of lists method, parsing the data column by column.
>
> 
>
> `pandas` as `pd` is imported.> - Create a dictionary of lists with the new data called `avocados_dict`.
> - Convert the dictionary to a DataFrame called `avocados_2019`.
> - Print your new DataFrame.<jupyter_code># Create a dictionary of lists with new data
avocados_dict = {
"date": ['2019-11-17', '2019-12-01'],
"small_sold": [10859987, 9291631],
"large_sold": [7674135, 6238096]
}
# Convert dictionary into DataFrame
avocados_2019 = pd.DataFrame(avocados_dict)
# Print the new DataFrame
print(avocados_2019)<jupyter_output> date small_sold large_sold
0 2019-11-17 10859987 7674135
1 2019-12-01 9291631 6238096
<jupyter_text># Reading and writing CSVs
```python
# CSV to DataFrame
import pandas as pd
new_dogs = pd.read_csv("new_dogs.csv")
# DataFrame to CSV
new_dogs.to_csv("new_dogs_with_bmi.csv")
```[CSV to DataFrame | Python](https://campus.datacamp.com/courses/data-manipulation-with-pandas/creating-and-visualizing-dataframes?ex=14)
> ## CSV to DataFrame
>
> You work for an airline, and your manager has asked you to do a competitive analysis and see how often passengers flying on other airlines are involuntarily bumped from their flights. You got a CSV file (`airline_bumping.csv`) from the Department of Transportation containing data on passengers that were involuntarily denied boarding in 2016 and 2017, but it doesn't have the exact numbers you want. In order to figure this out, you'll need to get the CSV into a pandas DataFrame and do some manipulation!
>
> `pandas` is imported for you as `pd`. `"airline_bumping.csv"` is in your working directory.### init<jupyter_code>###################
##### file
###################
#upload and download
from downloadfromFileIO import saveFromFileIO
""" à executer sur datacamp: (apres copie du code uploadfromdatacamp.py)
uploadToFileIO_pushto_fileio('airline_bumping.csv')
"""
tobedownloaded="""
{numpy.ndarray: {'airline_bumping.csv': 'https://file.io/t5h50W4stebe'}}
"""
prefixToc = '4.1'
prefix = saveFromFileIO(tobedownloaded, prefixToc=prefixToc, proxy="")
<jupyter_output>Téléchargements à lancer
{'numpy.ndarray': {'airline_bumping.csv': 'https://file.io/t5h50W4stebe'}}
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 890 0 890 0 0 1769 0 --:--:-- --:--:-- --:--:-- 1765
<jupyter_text>### code> - Read the CSV file `"airline_bumping.csv"` and store it as a DataFrame called `airline_bumping`.
> - Print the first few rows of `airline_bumping`.<jupyter_code># Read CSV as DataFrame called airline_bumping
airline_bumping = pd.read_csv('airline_bumping.csv')
# Take a look at the DataFrame
print(airline_bumping.head())<jupyter_output> airline year nb_bumped total_passengers
0 DELTA AIR LINES 2017 679 99796155
1 VIRGIN AMERICA 2017 165 6090029
2 JETBLUE AIRWAYS 2017 1475 27255038
3 UNITED AIRLINES 2017 2067 70030765
4 HAWAIIAN AIRLINES 2017 92 8422734
<jupyter_text>> For each airline group, select the `nb_bumped`, and `total_passengers` columns, and calculate the sum (for both years). Store this as `airline_totals`.<jupyter_code># For each airline, select nb_bumped and total_passengers and sum
airline_totals = airline_bumping.groupby('airline')[['nb_bumped', 'total_passengers']].sum()
airline_totals<jupyter_output><empty_output><jupyter_text>> Create a new column of `airline_totals` called `bumps_per_10k`, which is the number of passengers bumped per 10,000 passengers in 2016 and 2017.<jupyter_code># Create new col, bumps_per_10k: no. of bumps per 10k passengers for each airline
airline_totals["bumps_per_10k"] = airline_totals['nb_bumped'] / airline_totals['total_passengers'] * 10000
airline_totals<jupyter_output><empty_output><jupyter_text>> Print `airline_totals` to see the results of your manipulations.<jupyter_code># Print airline_totals
print(airline_totals)<jupyter_output> nb_bumped total_passengers bumps_per_10k
airline
ALASKA AIRLINES 1392 36543121 0.380920
AMERICAN AIRLINES 11115 197365225 0.563169
DELTA AIR LINES 1591 197033215 0.080748
EXPRESSJET AIRLINES 3326 27858678 1.193883
FRONTIER AIRLINES 1228 22954995 0.534960
HAWAIIAN AIRLINES 122 16577572 0.073593
JETBLUE AIRWAYS 3615 53245866 0.678926
SKYWEST AIRLINES 3094 47091737 0.657015
SOUTHWEST AIRLINES 18585 228142036 0.814624
SPIRIT AIRLINES 2920 32304571 0.903897
UNITED AIRLINES 4941 134468897 0.367446
VIRGIN AMERICA 242 12017967 0.201365
<jupyter_text>[DataFrame to CSV | Python](https://campus.datacamp.com/courses/data-manipulation-with-pandas/creating-and-visualizing-dataframes?ex=15)
> ## DataFrame to CSV
>
> You're almost there! To make things easier to read, you'll need to sort the data and export it to CSV so that your colleagues can read it.
>
> `pandas` as `pd` has been imported for you.> - Sort `airline_totals` by the values of `bumps_per_10k` from highest to lowest, storing as `airline_totals_sorted`.
> - Print your sorted DataFrame.
> - Save the sorted DataFrame as a CSV called `"airline_totals_sorted.csv"`.<jupyter_code># Create airline_totals_sorted
airline_totals_sorted = airline_totals.sort_values('bumps_per_10k', ascending=False)
# Print airline_totals_sorted
print(airline_totals_sorted)
# Save as airline_totals_sorted.csv
airline_totals_sorted.to_csv('airline_totals_sorted.csv')<jupyter_output> nb_bumped total_passengers bumps_per_10k
airline
EXPRESSJET AIRLINES 3326 27858678 1.193883
SPIRIT AIRLINES 2920 32304571 0.903897
SOUTHWEST AIRLINES 18585 228142036 0.814624
JETBLUE AIRWAYS 3615 53245866 0.678926
SKYWEST AIRLINES 3094 47091737 0.657015
AMERICAN AIRLINES 11115 197365225 0.563169
FRONTIER AIRLINES 1228 22954995 0.534960
ALASKA AIRLINES 1392 36543121 0.380920
UNITED AIRLINES 4941 134468897 0.367446
VIRGIN AMERICA 242 12017967 0.201365
DELTA AIR LINES 1591 197033215 0.080748
HAWAIIAN AIRLINES 122 16577572 0.073593
| no_license | /python-sandbox/data-manipulation-with-pandas/chapter4 - Creating and Visualizing DataFrames.ipynb | Ahmad20/data-scientist-skills | 20 |
<jupyter_start><jupyter_text><jupyter_code>#python code to find the compound interest for the given p,n,r
p = float(input("Enter the principal amount: "))
n = float(input("Enter the time period: "))
r = float(input("Enter the rate of interest: "))
CI = p*(1 + r/100)**n - p  # compound interest = compounded amount minus principal
print(CI)
#python code to convert centigrade to Fahrenheit
c = float(input("Enter the temperature in centigrade: "))
f = 9/5*c + 32
print(f)
#python code to find greater between 2 numbers
a = float(input("Enter the value a: "))
b = float(input("Enter the value b: "))
if(a>b):
print("a is greater than b")
elif (b>a):
print("b is greater than a")
else:
print("a is equal to b")
#python code to find surface areas of cylinder and cone using function
import math
def s_area_cy(r,h):
return (2*math.pi*r*(r+h))  # total surface area of a cylinder: 2*pi*r*(r + h)
def s_area_co(r,h):
return (math.pi*r*(r + (r*r + h*h)**0.5))  # surface area of a cone: pi*r*(r + slant height)
r = float(input("Enter the radius value: "))
h = float(input("Enter the height value: "))
print(s_area_cy(r,h))
print(s_area_co(r,h))
#python code to find the greatest of four numbers (using the 'and' operator) with a function.
def greatest(a,b,c,d):
if (a>=b and a>=c and a>=d):
print ("a is greatest")
elif (b>=a and b>=c and b>=d):
print ("b is greatest")
elif (c>=a and c>=b and c>=d):
print ("c is greatest")
else:
print ("d is greatest")
a = float(input("Enter the value a: "))
b = float(input("Enter the value b: "))
c = float(input("Enter the value c: "))
d = float(input("Enter the value d: "))
greatest(a,b,c,d)
#python code to perform the operations ( ODDorEven, Factorial, ODDNoUptoN, PrimeUptoN ) using functions with menu choice
def oddoreven(N):
if N%2 == 0:
return "even"
else:
return "odd"
def fact(N):
f = 1
for i in range (1,N+1):
f = f*i
return (f)
def oddnums(N):
return [i for i in range(1,N+1,2)]
def primenums(N):
li = []
if N>1:
li.append(2)
for i in range (1,N+1):
if i>1:
for j in range (2,i//2+2):
if(i%j == 0):
break
else:
if j == i//2+1:
li.append(i)
return li
loop = 1
while loop == 1:
# Print what options you have
print ("your options are:")
print (" ")
print("1) Odd or Even")
print("2) Factorial")
print("3) Odd numbers upto N")
print("4) Prime numbers upto N")
print("5) Quit ")
print(" ")
try:
choice = int(input("Choose your option: "))
except:
print('please enter a valid number for option')
print(" ")
print(" ")
if choice == 1:
N = int(input("Enter the number: "))
print("The answer is ",oddoreven(N))
elif choice == 2:
N = int(input("Enter the number: "))
print("The answer is ",fact(N))
elif choice == 3:
N = int(input("Enter the number: "))
print("The answer is ",oddnums(N))
elif choice == 4:
N = int(input("Enter the number: "))
print("The answer is ",primenums(N))
elif choice == 5:
loop = 0
else:
print("please choice a valid option from 1 to 5")
choice=0
print ("Arigato")
<jupyter_output>your options are:
1) Odd or Even
2) Factorial
3) Odd numbers upto N
4) Prime numbers upto N
5) Quit
Choose your option: 1
Enter the number: 5
The answer is odd
your options are:
1) Odd or Even
2) Factorial
3) Odd numbers upto N
4) Prime numbers upto N
5) Quit
Choose your option: 2
Enter the number: 6
The answer is 720
your options are:
1) Odd or Even
2) Factorial
3) Odd numbers upto N
4) Prime numbers upto N
5) Quit
Choose your option: 3
Enter the number: 22
The answer is [1, 3, 5, 7, 9, 11, 13, 15, 17, 19, 21]
your options are:
1) Odd or Even
2) Factorial
3) Odd numbers upto N
4) Prime numbers upto N
5) Quit
Choose your option: 4
Enter the number: 30
The answer is [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
your options are:
1) Odd or Even
2) Factorial
3) Odd numbers upto N
4) Prime numbers upto N
5) Quit
Choose your option: 5
Arigato
| no_license | /Day1_Session1_exercise.ipynb | alvas-education-foundation/nishanthvr | 1 |
<jupyter_start><jupyter_text># Numerical methods for biomedical engineering
## Motivation
Why study this?
* Problem-solving power
* Understanding off-the-shelf software.
* Custom software
* Deeper understanding of computer programming
* Deeper understanding of mathematics.
What you'll be able to do after this course is solve real engineering problems with the help of a computer.## What are numerical methods
_Mathematical methods that are used to approximate the solution of complicated problems so that the solution consists of only addition, subtraction and multiplication operations._
## Modelling in engineering
A basic mathematical model consists of a formulation like this:
$$ Dependent\ variable = f\left(independent\ variables, parameters, forcing\ functions\right)$$
* Dependent variable: state
* Independent variables: dimensions along which we study the system
* Forcing functions: external influences affecting the system.### The falling parachutist
A parachutist is falling through the atmosphere (before opening the parachute).
Dependent variable: velocity
Independent variables: time
Forcing functions: gravity, air resistance.$$F = ma$$
$$F = F_D + F_U$$
$$F_D = mg$$
$$F_U = -cv$$If we combine all of this:
$$\frac{d v}{dt} = \frac{mg -cv}{m} $$
What kind of equation is this? A differential equation.### Analytical solution
$$v(t) = \frac{gm}{c} (1 - e^{-(c/m)t})$$<jupyter_code>import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
m = 66
v_0 = 0
g = 9.8
c = 20
t = 5
v_t5 = g * m / c * (1 - np.exp(-t * c / m) )
v_t5
def v_analytical(time):
return (g * m / c) * (1 - np.exp(-time * c / m) )
ts_a = np.linspace(0,7)
vs_a = v_analytical(ts_a)
plt.plot(ts_a, vs_a)<jupyter_output><empty_output><jupyter_text>### Euler's method
Finite differences:
$$\frac{dv}{dt} \approx \frac{\Delta v}{\Delta t} = \frac{v(t_{i+1}) - v(t_i)}{t_{i+1}- t_i} $$
Which can be rearranged like this:
$$ v(t_{i+1} ) = v(t_i ) + \left[g − \frac{c}{m} v(t_i)\right] (t_{i+1} − t_i )$$<jupyter_code>v_0 = 0
t_0 = 0
timedelta = 1
t_1 = t_0 + timedelta
v_1 = v_0 + (g - (c / m) * v_0) * (t_1 - t_0)
v_1<jupyter_output><empty_output><jupyter_text>If we do it several times in succession and plot it:<jupyter_code>v_0 = 0
t_0 = 0
timedelta = 1
vs_n = []
ts_n = np.arange(0, 7, timedelta)
for t in ts_n:
t_1 = t_0 + timedelta
v_1 = v_0 + (g - (c / m) * v_0) * (t_1 - t_0)
v_0 = v_1
vs_n.append(v_0)
plt.plot(ts_a, vs_a)
plt.plot(ts_n +timedelta, vs_n);<jupyter_output><empty_output><jupyter_text>If we increase the number of steps, we get improved precision at the cost of computing power.<jupyter_code>v_0 = 0
t_0 = 0
timedelta = .2
vs_n = []
ts_n = np.arange(0, 7, timedelta)
for t in ts_n:
t_1 = t_0 + timedelta
v_1 = v_0 + (g - (c / m) * v_0) * (t_1 - t_0)
v_0 = v_1
vs_n.append(v_0)
plt.plot(ts_a, vs_a)
plt.plot(ts_n +timedelta, vs_n);<jupyter_output><empty_output><jupyter_text># The notebook
This is a _Markdown_ cell. I can write __formatted__ text.
Or formulas, with a different markup language called $\LaTeX$.
$$ v(t_{i+1} ) = v(t_i ) + \left[g − \frac{c}{m} v(t_i)\right] (t_{i+1} − t_i )$$<jupyter_code>x = 2
x * 2<jupyter_output><empty_output><jupyter_text># Python programming## Basic data types### Numeric types
There are two: `int` and `float`<jupyter_code>2
type(2)
x = 42
y = 2.3
type(y * 10)<jupyter_output><empty_output><jupyter_text>### Basic arithmetic<jupyter_code>x + y
x - y
x * y
# we can divide them:
x / y<jupyter_output><empty_output><jupyter_text>`//` represents integer division<jupyter_code>78 // 10
78 % 10<jupyter_output><empty_output><jupyter_text>### Strings
Delimited with single or double quotes. Can contain any Unicode character.<jupyter_code>title = "Canterbury's tales"
print(title)
beginning = "Muchos años después, frente al pelotón de fusilamiento, el coronel..."
print(beginning)
beginning = 'Muchos años después, frente al pelotón de fusilamiento, el coronel...'
print(beginning)<jupyter_output>Muchos años después, frente al pelotón de fusilamiento, el coronel...
<jupyter_text>Can only span one line...<jupyter_code>"this is a line
and this is the next one"<jupyter_output><empty_output><jupyter_text>...unless we use triple-double or triple-single quotes:<jupyter_code>multiline_string = """this is a line
and this is the next one"""
multiline_string
multiline_string = '''this is a line
and this is the next one'''
multiline_string
print(multiline_string)<jupyter_output>this is a line
and this is the next one
<jupyter_text>### Booleans
Two values: `True` and `False`<jupyter_code>True
False<jupyter_output><empty_output><jupyter_text>Combine them with the usual Boolean operators:<jupyter_code>True and False
True or False
not True<jupyter_output><empty_output><jupyter_text>## Data structures### Lists
A sequential, mutable data structure. Can contain data of any type, in any combination. Designed for either random or sequential access.
Create with square brackets or the `list()` function.<jupyter_code>my_first_list = [8, 12, 41]
type(my_first_list)<jupyter_output><empty_output><jupyter_text>#### List methods
Concerned mostly with adding or removing elements, either by position or by identity.
<jupyter_code>my_first_list.append('some string')
my_first_list
my_first_list.count(8)
ages = [19, 22, 21, 20, 19, 21, 22, 28]
ages.count(22)
ages.index(20)
ages.index(99)<jupyter_output><empty_output><jupyter_text>Careful with copying!<jupyter_code>a = ['x', 'y', 'z']
b = a
b
b[1] = 'Holi'
b
a
a = ['x', 'y', 'z']
b = a.copy()
b[1] = 'Holi'
b
a<jupyter_output><empty_output><jupyter_text>#### Adding and removing elements<jupyter_code>my_first_list
my_first_list.insert(2, 'new element')
my_first_list
my_first_list.remove(12)
my_first_list
my_first_list.pop(1)
my_first_list<jupyter_output><empty_output><jupyter_text>### Tuples
Similar to lists, but immutable.<jupyter_code>my_first_tuple = ('mantis', 'spider', 'lion')
type(my_first_tuple)
my_first_tuple.index('spider')
my_first_tuple[0] = 'human'<jupyter_output><empty_output><jupyter_text>Careful! Tuples are immutable, but their contents might not be.
What tuple immutability actually means is that a tuple's elements can not be reassigned.<jupyter_code>another_tuple = ('Daniel', 'Mateos', ['nummethods', 'algorithms'])
another_tuple[2] = None
another_tuple[2].append('Analysis of genetic data')
another_tuple<jupyter_output><empty_output><jupyter_text>### Unpacking
Consists of taking a tuple or list and extracting its components into individual variables.<jupyter_code>name, surname, subjects = another_tuple
name
subjects
name, surname = another_tuple
name, surname, something, orother = another_tuple
name, *rest = another_tuple
name
rest
long_list = [1, 2, 3, 4, 5, 6, 7]
first, second, *rest = long_list
rest<jupyter_output><empty_output><jupyter_text>### Indexing and slicing
Indexing consists of extracting a single element from a string, list, or tuple, while slicing consists of extracting several.
This is a handy diagram to keep in mind:
<jupyter_code>another_tuple[0]
another_tuple[1]
len(another_tuple)
another_tuple[10]
another_tuple[-1]
letters = list('abcdef')
letters
letters[-1]
letters[2:4]
letters[-5:-3]
letters[:-3]
letters[4:]<jupyter_output><empty_output><jupyter_text>### Dictionaries
HashMap or associative array. Consists of pairs `key : value`. We use a `key` in order to get back the corresponding `value`.<jupyter_code>spanish_english = {'comida' : 'food', 'oso' : 'bear', 'universidad' : 'university'}
spanish_english
spanish_english['oso']
spanish_english['cerveza']
spanish_english['cerveza'] = 'beer'
spanish_english
spanish_english['universidad'] = 'Uni'
spanish_english<jupyter_output><empty_output><jupyter_text>## Branching
One `if` clause, followed by zero to any `elif`s, followed by zero or one `else`.<jupyter_code>monday = True
if monday == True:
print('Oohhhhh I have to work')
else:
print('Yipeee!')
today = 'Saturday'
if today == 'Monday':
print('Oohhhhh I have to work')
elif today == 'Tuesday':
print('Its getting better')
else:
print('Yipeee!')<jupyter_output>Yipeee!
<jupyter_text>## Loops
Like any other language, mostly.### `for` loops
For loops in Python don't need mandatorily a numeric control variable.<jupyter_code>for number in [0, 1, 2, 3]:
print(number ** 2)
courses = another_tuple[-1].copy()
courses
for course in courses:
print('Ufff')
print('This semester Im teaching', course, 'again')
print('you know?')<jupyter_output>Ufff
This semester Im teaching nummethods again
Ufff
This semester Im teaching algorithms again
Ufff
This semester Im teaching Analysis of genetic data again
you know?
<jupyter_text>### `while` loops<jupyter_code>today = 0
while today < 5:
print('Lets go to work')
today = today + 1
today = 0
while today < 5:
print('Lets go to work')
today += 1
today = 0
while True:
if today == 5:
break
print('Lets go to work')
today += 1
<jupyter_output>Lets go to work
Lets go to work
Lets go to work
Lets go to work
Lets go to work
<jupyter_text>### `enumerate`, `zip`
`enumerate` takes one sequence and returns a sequence of tuples `(index, element)`.
`zip` takes m sequences of n elements and returns n sequences of m elements. A bit like a matrix transpose.<jupyter_code>for element in enumerate(courses):
print(element)
for i, course in enumerate(courses):
print(i)
print(course)
for i, course in enumerate(courses):
if i % 2 == 0:
print(course)
instructors = ['Daniel', 'Sergio', 'Fatima']
for course, instructor in zip(courses, instructors):
print('The course', course, 'is taught by', instructor)<jupyter_output>The course nummethods is taught by Daniel
The course algorithms is taught by Sergio
The course Analysis of genetic data is taught by Fatima
<jupyter_text>## Functions
Every function in Python returns a single value. If there is no `return`, it will return `None`. If there are several values, they will be packed in a tuple.<jupyter_code>def square(n):
result = n ** 2
return result
square(9)<jupyter_output><empty_output><jupyter_text>Default values for optional arguments are specified like this:<jupyter_code>def power(n, exponent=2):
return n ** exponent
power(5, 3)
power(5)<jupyter_output><empty_output><jupyter_text>### Global and local scope
A name in a local scope, that, is, inside a function, will _shadow_ the same name in the global scope, but will __not__ overwrite it.<jupyter_code>n = 42
def square(n):
result = n ** 2
return result
square(9)<jupyter_output><empty_output><jupyter_text>It is a bad idea to use global variables from inside a function, like this:<jupyter_code>n = 42
def square():
# Bad idea!!!
return n ** 2
square()<jupyter_output><empty_output><jupyter_text>## Error handling
We catch errors with `try`/`except` clauses
We generate errors with `raise`.<jupyter_code>3 / 0
try:
3 / 0
except:
print('An error has occurred but the program goes on')
raise Exception('Error!!')
raise ValueError('Error!!')<jupyter_output><empty_output><jupyter_text>## Object oriented programming in Python
Very simple.<jupyter_code>class Vehicle():
def move(self):
print('Im moving!')
mybicycle = Vehicle()
type(mybicycle)
mybicycle.move()<jupyter_output>Im moving!
<jupyter_text>`dir()` lists the methods and attributes of an object:<jupyter_code>dir(mybicycle)<jupyter_output><empty_output><jupyter_text>The [dunder](https://rszalski.github.io/magicmethods/) or magic methods start and end with double underscores. They have special uses. The most important one is `__init__`, which is called when an object is created and serves to initialize its attributes.<jupyter_code>class Vehicle():
def __init__(self, owner):
self.owner = owner
def move(self):
print('Im moving!')
mybicycle = Vehicle('Dani')
type(mybicycle)
mybicycle.owner<jupyter_output><empty_output><jupyter_text># The Python scientific stack: SciPy
Python Main Data Libraries:
* NumPy: Base N-dimensional array package
* SciPy library: Fundamental library for scientific computing
* Matplotlib: Comprehensive 2D Plotting
* IPython: Enhanced Interactive Console
* Sympy: Symbolic mathematics
* pandas: Data structures & analysis## `matplotlib`<jupyter_code>import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
plt.plot([0,1,2], [5,2,3]);
plt.scatter([0,1,2], [5,2,3]);<jupyter_output><empty_output><jupyter_text> ## `numpy`<jupyter_code>arr = np.linspace(0,9)
arr ** 2
plt.plot(arr, arr ** 2);
arr.dtype
arr.ndim
arr.shape
arr_2 = np.array([[1,2], [3,4], [5,6]])
arr_2
arr_2.shape
print(np.zeros(10))
print(np.ones((2,3)))
print(np.ones_like(arr_2))
print(np.eye(4))
print(np.empty((2,3,2)))<jupyter_output>[0. 0. 0. 0. 0. 0. 0. 0. 0. 0.]
[[1. 1. 1.]
[1. 1. 1.]]
[[1 1]
[1 1]
[1 1]]
[[1. 0. 0. 0.]
[0. 1. 0. 0.]
[0. 0. 1. 0.]
[0. 0. 0. 1.]]
[[[ 29. 256.94 ]
[256.40845169 241.82666761]
[243.55636364 253.44 ]]
[[253.44 36. ]
[253.44 29. ]
[256.40845169 253.44 ]]]
<jupyter_text>### Indexing<jupyter_code>arr[4]<jupyter_output><empty_output><jupyter_text>### Slicing<jupyter_code>arr[:5]
arr_2[1:,1:]
arr_2
arr_2 % 2 == 0
arr_2[arr_2 % 2 == 0]<jupyter_output><empty_output><jupyter_text>### Careful with copying!
Same as above.<jupyter_code>arr_3 = arr_2
arr_3[1] = [98, 101]
arr_3
arr_2<jupyter_output><empty_output><jupyter_text>### Element wise operations<jupyter_code>arr_2 + 2
arr_2 / 2
arr_2 // 2
1/arr_2
arr_2 - arr_2
arr_2 % 2<jupyter_output><empty_output><jupyter_text>Apart from operators, there are many vectorized operations as functions in `numpy`.<jupyter_code>np.exp(arr)<jupyter_output><empty_output><jupyter_text>Regular Python functions don't know how to do this:<jupyter_code>import math
math.exp(arr)<jupyter_output><empty_output><jupyter_text>#### Exercise
Plot the sigmoid function
$$ f(x) = \frac{1}{1 + e^{-x}}$$
between -5 and 5<jupyter_code>xs = np.linspace(-5, 5)
ys = 1 / (1 + np.exp(-xs))
plt.plot(xs, ys);<jupyter_output><empty_output><jupyter_text>### Matrix operations<jupyter_code>arr.dot(arr)
arr @ arr
one_dimensional_vector = np.arange(10)
one_dimensional_vector
one_dimensional_vector.reshape(2,5)
one_dimensional_vector
arr_3 = one_dimensional_vector.reshape(2, 5)
arr_3
arr_2.dot(arr_3)
one_dimensional_vector.cumsum()
one_dimensional_vector.cumprod()
arr_2
arr_2.sum(axis=0)
arr_2.sum(axis=1)
arr_2.transpose()<jupyter_output><empty_output><jupyter_text>### Linear Algebra
http://docs.scipy.org/doc/numpy-1.10.0/reference/routines.linalg.html<jupyter_code>from numpy import linalg as la
help(la)<jupyter_output>Help on package numpy.linalg in numpy:
NAME
numpy.linalg
DESCRIPTION
Core Linear Algebra Tools
-------------------------
Linear algebra basics:
- norm Vector or matrix norm
- inv Inverse of a square matrix
- solve Solve a linear system of equations
- det Determinant of a square matrix
- lstsq Solve linear least-squares problem
- pinv Pseudo-inverse (Moore-Penrose) calculated using a singular
value decomposition
- matrix_power Integer power of a square matrix
Eigenvalues and decompositions:
- eig Eigenvalues and vectors of a square matrix
- eigh Eigenvalues and eigenvectors of a Hermitian matrix
- eigvals Eigenvalues of a square matrix
- eigvalsh Eigenvalues of a Hermitian matrix
- qr QR decomposition of a matrix
- svd Singular value decomposition [...]<jupyter_text>### Trace, determinant and inverse<jupyter_code>np.trace(arr_2)
arr_2.transpose
arr_2.transpose()
square = arr_2.dot(arr_2.transpose())
square[0,1] = 7
la.det(square)
la.inv(square)<jupyter_output><empty_output><jupyter_text>#### Exercise
In a chicken and rabbit farm, there are 35 heads and 94 legs. How many chickens and how many rabbits do we have?
Remember:
$$A \cdot X = B$$
$$A^{-1} \cdot A \cdot X = I \cdot X = A^{-1} \cdot B$$
$$X = A^{-1} \cdot B$$<jupyter_code>A = np.array([[1,1],[4,2]])
A = np.array([1,1,4,2]).reshape(2,2)
B = np.array([35, 94])
X = la.inv(A).dot(B)
X
A.dot(la.inv(A))<jupyter_output><empty_output><jupyter_text>### Norm of a vector and a matrix<jupyter_code>la.norm(arr)
la.norm(arr, ord=1)<jupyter_output><empty_output><jupyter_text>### Matrix factorization: eigen-decomposition and SVD#### Eigenvectors and eigenvalues
$$A \cdot \vec{v} = \lambda \cdot \vec{v} $$
$$ \det(\lambda \cdot I - A) = 0 $$
<jupyter_code>square
eigval = la.eigvals(square)
eigval
eigvec = la.eig(square)[1]
eigvec
eigvec.dot(np.diag(eigval)).dot(la.inv(eigvec))<jupyter_output><empty_output><jupyter_text>### Comparison with python: execution times<jupyter_code>import time
size_of_vec = 1000
def pure_python_version():
t1 = time.time()
X = range(size_of_vec)
Y = range(size_of_vec)
Z = []
for i in range(len(X)):
Z.append(X[i] + Y[i])
return time.time() - t1
def numpy_version():
t1 = time.time()
X = np.arange(size_of_vec)
Y = np.arange(size_of_vec)
Z = X + Y
return time.time() - t1
t1 = pure_python_version()
t2 = numpy_version()
print(t1/t2)<jupyter_output>0.39771359807460893
| non_permissive | /edition_19/section-00.inclass.ipynb | danimateos/numerical_methods | 50 |
<jupyter_start><jupyter_text># Data visualization<jupyter_code>%matplotlib inline<jupyter_output><empty_output><jupyter_text>This instruction makes the plots appear inside the notebook. If they do not, run it again. The next two lines let you check where matplotlib intends to display its results. For a notebook, it should be ``'nbAgg'`` or ``'module://ipykernel.pylab.backend_inline'``.<jupyter_code>import matplotlib
matplotlib.get_backend()
import matplotlib.pyplot as plt
plt.style.use('ggplot')
import matplotlib
matplotlib.get_backend()
from jyquickhelper import add_notebook_menu
add_notebook_menu()<jupyter_output><empty_output><jupyter_text>## Matplotlib, pandas### Retrieving the data
We use the data available on the INSEE website: [Naissance, décès, mariages 2012](http://www.insee.fr/fr/themes/detail.asp?ref_id=fd-etatcivil2012&page=fichiers_detail/etatcivil2012/doc/documentation.htm). The goal is to retrieve the list of marriages for the year 2012 and to plot the number of marriages as a function of the age gap between the spouses.<jupyter_code>from urllib.error import URLError
import pyensae
from pyensae.datasource import dBase2df, DownloadDataException
files = ["etatcivil2012_nais2012_dbase.zip",
"etatcivil2012_dec2012_dbase.zip",
"etatcivil2012_mar2012_dbase.zip" ]
try:
pyensae.download_data(files[-1],
website='http://telechargement.insee.fr/fichiersdetail/etatcivil2012/dbase/')
except (DownloadDataException, URLError, TimeoutError):
# backup plan
pyensae.download_data(files[-1], website="xd")
df = dBase2df("mar2012.dbf")
df.shape, df.columns
df.head()<jupyter_output><empty_output><jupyter_text>We retrieve the meaning of the variables in the same way:<jupyter_code>from pyensae.datasource import dBase2df
vardf = dBase2df("varlist_mariages.dbf")
vardf.shape, vardf.columns
vardf<jupyter_output><empty_output><jupyter_text>### Exercise 1: age gap between the spouses
1. By adding a column and using the [group by](http://pandas.pydata.org/pandas-docs/stable/groupby.html) operation, we want to obtain the distribution of the number of marriages as a function of the age gap between the spouses. If needed, change the type of one or two columns.
2. We want to draw a scatter plot with the husband's age on the x axis and the wife's age on the y axis. You may need to take a look at the documentation of the [plot](http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.plot.html) method.<jupyter_code>df["colonne"] = df.apply(lambda r: int(r["colonne"]), axis=1)  # to change the type of a column
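# A possible sketch, left commented out (not the official solution): it assumes the
# INSEE file stores the spouses' birth years in ANAISH and ANAISF (check vardf above).
# df["difference"] = df["ANAISH"].astype(int) - df["ANAISF"].astype(int)
# df.groupby("difference").size().plot(kind="bar")   # distribution of the age gap
# df.plot(x="ANAISH", y="ANAISF", kind="scatter")    # husband's vs wife's birth year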
df["difference"] = ...<jupyter_output><empty_output><jupyter_text>### Exercice 2 : graphe de la distribution avec pandas
The ``pandas`` module offers a set of standard plots that are easy to produce. We want to display the distribution as a histogram. Pick the most suitable chart from the [Visualization](http://pandas.pydata.org/pandas-docs/stable/visualization.html) page.<jupyter_code>df.plot (...)<jupyter_output><empty_output><jupyter_text>### matplotlib
[matplotlib](http://matplotlib.org/) is the module that [pandas](http://pandas.pydata.org/) relies on. The [plot](http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.plot.html) method therefore returns an [Axes](http://matplotlib.org/api/axes_api.html#module-matplotlib.axes) object that can then be modified through the [following methods](http://matplotlib.org/api/pyplot_summary.html). You can add a title with [set_title](http://matplotlib.org/api/axes_api.html#matplotlib.axes.Axes.set_title) or a grid with [grid](http://matplotlib.org/api/axes_api.html#matplotlib.axes.Axes.grid). You can also overlay [two curves on the same figure](http://stackoverflow.com/questions/19941685/how-to-show-a-bar-and-line-graph-on-the-same-plot) or [change the font size](http://stackoverflow.com/questions/12444716/how-do-i-set-figure-title-and-axes-labels-font-size-in-matplotlib). The following code plots the number of marriages per département.<jupyter_code>df["nb"] = 1
dep = df[["DEPMAR","nb"]].groupby("DEPMAR", as_index=False).sum().sort_values("nb",ascending=False)
ax = dep.plot(kind = "bar", figsize=(14,6))
ax.set_xlabel("départements", fontsize=16)
ax.set_title("nombre de mariages par départements", fontsize=16)
ax.legend().set_visible(False)  # remove the legend
# change the font size of some tick labels
for i,tick in enumerate(ax.xaxis.get_major_ticks()):
if i > 10 :
tick.label.set_fontsize(8)<jupyter_output><empty_output><jupyter_text>When you do not know how to do something, the simplest approach is to use a search engine with a query such as ``matplotlib + query``. To create a chart, the most common way is to pick the most similar-looking plot from a [chart gallery](http://matplotlib.org/gallery.html) and then adapt it to your data.### Exercise 3: distribution of marriages per day
We want a figure that contains the histogram of the number of marriages per day of the week, plus a second curve, on a second axis, showing the cumulative distribution.## Other alternatives
### ggplot
The [ggplot](https://github.com/yhat/ggplot) module is inspired by the [ggplot2](http://ggplot2.org/) module for [R](http://www.r-project.org/). It uses the same visual style and a similar syntax. The resulting charts are more readable and easier to build.<jupyter_code># ggplot does not work with matplotlib 2.0 (yet)
from ggplot import aes, geom_point, ggtitle, xlab, ggplot
df["nb"] = 1
dep = df[["JSEMAINE","nb"]].groupby("JSEMAINE", as_index=False).sum().sort_values("JSEMAINE",ascending=False)
ggplot(aes(x='JSEMAINE', y='nb'), data=dep) + \
geom_point(color='red') + \
xlab("jour de la semaine") + \
ggtitle("nombre de mariages par jour de la semaine")<jupyter_output><empty_output><jupyter_text>### Exercice 4 : même graphique avec ggplot
We want to build a similar figure containing the histogram of the distribution of the number of marriages per day of the week and the cumulative distribution, on two separate charts, but with [ggplot](https://github.com/yhat/ggplot). You will find some examples on the pages [ggplot for python](http://blog.yhathq.com/posts/ggplot-for-python.html) and [examples with version 0.4](http://blog.yhathq.com/posts/ggplot-0.4-released.html). This module is less complete than the [R.ggplot2](http://ggplot2.org/) version but it uses almost the same syntax: [ggplot2 documentation](http://docs.ggplot2.org/current/). You will need the [facet_wrap](http://docs.ggplot2.org/current/facet_wrap.html) function and maybe also the example [How do I create a bar chart in python ggplot?](http://stackoverflow.com/questions/22599521/how-do-i-create-a-bar-chart-in-python-ggplot).This exercise is mostly a way to discover a new module. The simplest solution nevertheless consists in switching the [matplotlib style to the ggplot one](http://matplotlib.org/users/style_sheets.html).## Networks, graphs
### networkx
The [networkx](https://networkx.github.io/) module can represent a small network or graph (< 500 nodes). A graph is defined by a set of nodes (*vertices*) connected by *edges*. The [gallery](http://networkx.github.io/documentation/latest/gallery.html) will give you an idea of what the module can do.<jupyter_code>import random
import networkx as nx
G=nx.Graph()
for i in range(15) :
G.add_edge ( random.randint(0,5), random.randint(0,5) )
import matplotlib.pyplot as plt
f, ax = plt.subplots(figsize=(8,4))
nx.draw(G, ax = ax)<jupyter_output><empty_output><jupyter_text>### Graphviz
[Graphviz](http://www.graphviz.org/) is a tool that has been developed for many years and can render larger graphs (> 500 nodes). It offers a richer choice of layouts: [gallery](http://www.graphviz.org/Gallery.php). It can be used through the [graphviz](https://pypi.python.org/pypi/graphviz) module. Installing the module also requires installing the [Graphviz](http://www.graphviz.org/) tool itself, which is not included. The difference between the two modules lies in the algorithm used to assign coordinates to each node of the graph so that its edges cross as little as possible. Beyond a certain size, the drawing of a graph is no longer readable and requires some trial and error. One option is to cluster the graph (see the [Louvain method](http://perso.uclouvain.be/vincent.blondel/research/louvain.html)) in order to color close nodes or even to group them.<jupyter_code>import random, os
from graphviz import Digraph
from IPython.display import Image
from pyquickhelper.helpgen import find_graphviz_dot
bin = os.path.dirname(find_graphviz_dot())
if bin not in os.environ["PATH"]:
os.environ["PATH"] = os.environ["PATH"] + ";" + bin
dot = Digraph(comment='random graph', format="png")
for i in range(15) :
dot.edge ( str(random.randint(0,5)), str(random.randint(0,5)) )
img = dot.render('t_random_graph.gv')
Image(img)<jupyter_output><empty_output><jupyter_text>### Exercise 5: drawing a graph with networkx
Build a random graph whose 20 edges are obtained by drawing two integers between 1 and 10, 20 times. Each edge must have a random thickness. Have a look at the functions [spring_layout](https://networkx.github.io/documentation/latest/reference/generated/networkx.drawing.layout.spring_layout.html?highlight=spring_layout#networkx.drawing.layout.spring_layout), [draw_networkx_nodes](https://networkx.github.io/documentation/latest/reference/generated/networkx.drawing.nx_pylab.draw_networkx_nodes.html?highlight=draw_networkx_nodes#networkx.drawing.nx_pylab.draw_networkx_nodes) and [draw_networkx_edges](https://networkx.github.io/documentation/latest/reference/generated/networkx.drawing.nx_pylab.draw_networkx_edges.html?highlight=draw_networkx_edges#networkx.drawing.nx_pylab.draw_networkx_edges). The [gallery](https://networkx.github.io/documentation/latest/gallery.html) can help as well.
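One possible sketch for this exercise (not the official solution); it only assumes networkx and matplotlib, which are already used above.<jupyter_code># sketch: random graph with 20 edges and random edge widths
import random
import networkx as nx
import matplotlib.pyplot as plt

G = nx.Graph()
for i in range(20):
    G.add_edge(random.randint(1, 10), random.randint(1, 10), weight=random.uniform(0.5, 5.0))

pos = nx.spring_layout(G)                        # compute 2D coordinates for every node
widths = [d["weight"] for u, v, d in G.edges(data=True)]
fig, ax = plt.subplots(figsize=(8, 4))
nx.draw_networkx_nodes(G, pos, ax=ax)
nx.draw_networkx_edges(G, pos, width=widths, ax=ax)
nx.draw_networkx_labels(G, pos, ax=ax)
ax.axis("off")<jupyter_output><empty_output><jupyter_text>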
## Cartography
Drawing a map is not hard in itself. The whole difficulty comes from the fact that reading a map requires reference points: streets for a city; borders for a department, a region or a country; rivers and mountains for a map presenting demographic data. This information matters because it situates whatever the figure represents. If you do not need any of that, the following formulas will be enough:
- [Spherical coordinates](http://fr.wikipedia.org/wiki/Coordonn%C3%A9es_sph%C3%A9riques)
- [Converting latitude/longitude to X/Y](http://www.movable-type.co.uk/scripts/latlong.html)
- [Haversine distance](http://en.wikipedia.org/wiki/Haversine_formula)
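As a quick illustration of the last formula, here is a small haversine sketch in plain Python (geopy offers equivalent helpers):<jupyter_code># haversine: great-circle distance between two (longitude, latitude) points given in degrees
from math import radians, sin, cos, asin, sqrt

def haversine_km(lon1, lat1, lon2, lat2):
    lon1, lat1, lon2, lat2 = map(radians, (lon1, lat1, lon2, lat2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))   # 6371 km: mean Earth radius

haversine_km(2.35, 48.85, -0.57, 44.84)   # Paris -> Bordeaux, roughly 500 km<jupyter_output><empty_output><jupyter_text>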
These features are available through the [geopy](https://github.com/geopy/geopy) module. Otherwise, here are a few directions:
- [basemap](http://matplotlib.org/basemap/index.html): the documentation examples are fairly long; the module can overlay a lot of information on the same map (borders, wind or current directions). Downloading the module takes a while (100 MB) because of all that data.
- [shapely](https://github.com/Toblerity/Shapely): this module is useful to draw areas on maps. On Windows it should be installed from [Unofficial Windows Binaries for Python Extension Packages](http://www.lfd.uci.edu/~gohlke/pythonlibs/) because that build includes the ``geos_c.dll`` DLL coming from [GEOS](http://trac.osgeo.org/osgeo4w/). Otherwise you have to install [GEOS](http://trac.osgeo.org/osgeo4w/) yourself, which takes quite some time. It is used by [cartopy](http://scitools.org.uk/cartopy/).
Others exist, but their installation hides a few difficulties that I have not always had the patience to work around:
- [cartopy](http://scitools.org.uk/cartopy/): the examples are shorter, but it downloads the data when it needs it; this one should work.
- [mapnik](http://mapnik.org/): installation on Windows is for experts only, too complicated there.
The following example uses [basemap](http://matplotlib.org/basemap/index.html). It comes from the page [BaseMap](http://nbviewer.ipython.org/github/rdhyee/working-open-data/blob/master/notebooks/Day_14_basemap_redux.ipynb).<jupyter_code>from mpl_toolkits.basemap import Basemap
import matplotlib.pyplot as plt
m = Basemap(width=12000000,height=9000000,projection='lcc',
resolution='c',lat_1=44,lat_2=52,lat_0=48,lon_0=2.34)
m.bluemarble()<jupyter_output><empty_output><jupyter_text>The next one shows Europe and its countries with a different projection ([Setting up the map](http://matplotlib.org/basemap/users/mapsetup.html)). A few internet searches quickly lead to tutorials such as this one: [Visualization: Mapping Global Earthquake Activity](http://introtopython.org/visualization_earthquakes.html).<jupyter_code>from mpl_toolkits.basemap import Basemap
import numpy as np
import matplotlib.pyplot as plt
m = Basemap(llcrnrlon=-5,llcrnrlat=40,urcrnrlon=20,urcrnrlat=56,
resolution='i',projection='cass',lon_0=2.34,lat_0=48)
m.drawcoastlines()
m.drawcountries()
m.fillcontinents(color='coral',lake_color='aqua')
m.drawparallels(np.arange(-40,61.,2.))
m.drawmeridians(np.arange(-20.,21.,2.))
m.drawmapboundary(fill_color='aqua')
# add Paris to the map
lon = 2.3488000
lat = 48.853410
x,y = m(lon, lat)
m.plot(x, y, 'bo', markersize=6)
plt.text(x, y, "Paris")<jupyter_output>c:\python35_x64\lib\site-packages\mpl_toolkits\basemap\__init__.py:1775: MatplotlibDeprecationWarning: The get_axis_bgcolor function was deprecated in version 2.0. Use get_facecolor instead.
axisbgc = ax.get_axis_bgcolor()
<jupyter_text>### Maps with the French departments
To draw shapes on a map, you need to know their coordinates. The article [Matplotlib Basemap tutorial 10: Shapefiles Unleached, continued](http://www.geophysique.be/2013/02/12/matplotlib-basemap-tutorial-10-shapefiles-unleached-continued/) shows how to draw the Belgian departments; we will build on it to draw the French ones. The first thing to do is to get geographic data. A simple way to find it is to query a search engine with the keyword **shapefile**, which is the name of the file format: *shapefile france* already returns a few sources. Here are some others:
* [GADM](http://www.gadm.org/): database of Global Administrative Areas
* [OpenData.gouv commune](https://www.data.gouv.fr/fr/datasets/geofla-communes/): database on data.gouv.fr
* [The National Map Small-Scale Collection](http://nationalmap.gov/small_scale/#chpbound): United States
* [ArcGIS](https://developers.arcgis.com/javascript/jsapi/esri.basemaps-amd.html): JavaScript API
* [Natural Earth](http://www.naturalearthdata.com/): Natural Earth is a public domain map dataset available at 1:10m, 1:50m, and 1:110 million scales. Featuring tightly integrated vector and raster data, with Natural Earth you can make a variety of visually pleasing, well-crafted maps with cartography or GIS software.
* [thematicmapping](http://thematicmapping.org/downloads/world_borders.php): World Borders Dataset
* [OpenStreetMap Data Extracts](http://download.geofabrik.de/): OpenStreetMap data
* [OpenStreetMapData](http://openstreetmapdata.com/): OpenStreetMap data
* [Shapefiles on the OpenStreetMap wiki](http://wiki.openstreetmap.org/wiki/Shapefiles): contains various links to data sources
The first thing to check is the licence attached to the data: you cannot do whatever you want with it. For this example I had first picked the first data source, GADM. Its licence is not stated explicitly (*happy to share* appears on the site, while the [GADM](https://en.wikipedia.org/wiki/GADM) Wikipedia page states: *GADM is not freely available for commercial use. The GADM project created the spatial data for many countries from spatial databases provided by national governments, NGO, and/or from maps and lists of names available on the Internet (e.g. from Wikipedia).*). That was the choice I made in 2015, but access to those databases has probably changed since it is now restricted. I therefore switched to the databases available from [data.gouv.fr](https://www.data.gouv.fr/fr/datasets/geofla-departements-30383060/). Their only drawback is that the coordinates are expressed in a [Lambert 93](https://fr.wikipedia.org/wiki/Projection_conique_conforme_de_Lambert) projection, which requires a conversion.<jupyter_code>from pyensae import download_data
download_data("GEOFLA_2-1_DEPARTEMENT_SHP_LAMB93_FXX_2015-12-01.7z",
website="https://wxs-telechargement.ign.fr/oikr5jryiph0iwhw36053ptm/telechargement/inspire/" + \
"GEOFLA_THEME-DEPARTEMENTS_2015_2$GEOFLA_2-1_DEPARTEMENT_SHP_LAMB93_FXX_2015-12-01/file/")
from pyquickhelper.filehelper import un7zip_files
un7zip_files("GEOFLA_2-1_DEPARTEMENT_SHP_LAMB93_FXX_2015-12-01.7z", where_to="shapefiles")<jupyter_output><empty_output><jupyter_text>The licence ships with the data: *this product can be downloaded and used free of charge under the [Etalab](https://www.etalab.gouv.fr/licence-ouverte-open-licence) open licence*. For commercial use, pay attention to the licence attached to the data. The only drawback of the *GEOFLA* data is that some of it is expressed in the [Lambert 93](https://fr.wikipedia.org/wiki/Projection_conique_conforme_de_Lambert) coordinate system (see also [Cartographie avec R](https://www.sylvaindurand.org/cartographie-avec-R/)).<jupyter_code>shp = 'shapefiles\\GEOFLA_2-1_DEPARTEMENT_SHP_LAMB93_FXX_2015-12-01\\GEOFLA\\1_DONNEES_LIVRAISON_2015\\' + \
'GEOFLA_2-1_SHP_LAMB93_FR-ED152\\DEPARTEMENT\\DEPARTEMENT.shp'
import shapefile
r = shapefile.Reader(shp)
shapes = r.shapes()
records = r.records()
len(shapes), len(records)
r.measure
r.bbox
r.shapeType<jupyter_output><empty_output><jupyter_text>Let's look at one area in particular, while reducing the amount of data displayed:<jupyter_code>d = shapes[0].__dict__.copy()
d["points"] = d["points"][:10]
d
records[0]<jupyter_output><empty_output><jupyter_text>I use a coordinate-conversion function found on the [Internet](https://gist.github.com/blemoine/e6045ed93b3d90a52891).<jupyter_code>import math
def lambert932WGPS(lambertE, lambertN):
class constantes:
GRS80E = 0.081819191042816
LONG_0 = 3
XS = 700000
YS = 12655612.0499
n = 0.7256077650532670
C = 11754255.4261
delX = lambertE - constantes.XS
delY = lambertN - constantes.YS
gamma = math.atan(-delX / delY)
R = math.sqrt(delX * delX + delY * delY)
latiso = math.log(constantes.C / R) / constantes.n
sinPhiit0 = math.tanh(latiso + constantes.GRS80E * math.atanh(constantes.GRS80E * math.sin(1)))
sinPhiit1 = math.tanh(latiso + constantes.GRS80E * math.atanh(constantes.GRS80E * sinPhiit0))
sinPhiit2 = math.tanh(latiso + constantes.GRS80E * math.atanh(constantes.GRS80E * sinPhiit1))
sinPhiit3 = math.tanh(latiso + constantes.GRS80E * math.atanh(constantes.GRS80E * sinPhiit2))
sinPhiit4 = math.tanh(latiso + constantes.GRS80E * math.atanh(constantes.GRS80E * sinPhiit3))
sinPhiit5 = math.tanh(latiso + constantes.GRS80E * math.atanh(constantes.GRS80E * sinPhiit4))
sinPhiit6 = math.tanh(latiso + constantes.GRS80E * math.atanh(constantes.GRS80E * sinPhiit5))
longRad = math.asin(sinPhiit6)
latRad = gamma / constantes.n + constantes.LONG_0 / 180 * math.pi
longitude = latRad / math.pi * 180
latitude = longRad / math.pi * 180
return longitude, latitude
lambert932WGPS(99217.1, 6049646.300000001), lambert932WGPS(1242417.2, 7110480.100000001)<jupyter_output><empty_output><jupyter_text>Then I take the final code (still from [Matplotlib Basemap tutorial 10: Shapefiles Unleached, continued](http://www.geophysique.be/2013/02/12/matplotlib-basemap-tutorial-10-shapefiles-unleached-continued/)) and adapt it for France. And since nothing ever works on the first try, I ran into an error:
*ValueError: All values in the dash list must be positive*
After removing the lines one by one, it turns out that drawing the meridians and parallels is what causes the problem. I comment them out and recentre the map a little.<jupyter_code>import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.basemap import Basemap
fig = plt.figure(figsize=(20,10))
#Custom adjust of the subplots
#plt.subplots_adjust(left=0.05,right=0.95,top=0.90,bottom=0.05,wspace=0.15,hspace=0.05)
ax = plt.subplot(111)
#Let's create a basemap of Europe
x1 = -5.0
x2 = 12.
y1 = 40.
y2 = 54.
m = Basemap(resolution='i',projection='merc', llcrnrlat=y1,urcrnrlat=y2,llcrnrlon=x1,urcrnrlon=x2,lat_ts=(x1+x2)/2)
m.drawcountries(linewidth=0.5)
m.drawcoastlines(linewidth=0.5)
if False:
    # triggers the error
    # ValueError: All values in the dash list must be positive
m.drawparallels(np.arange(y1,y2,2.),labels=[1,0,0,0],color='black',
dashes=[1,0],labelstyle='+/-',linewidth=0.2) # draw parallels
m.drawmeridians(np.arange(x1,x2,2.),labels=[0,0,0,1],color='black',
dashes=[1,0],labelstyle='+/-',linewidth=0.2) # draw meridians
from matplotlib.collections import LineCollection
from matplotlib import cm
import shapefile
shp = 'shapefiles\\GEOFLA_2-1_DEPARTEMENT_SHP_LAMB93_FXX_2015-12-01\\GEOFLA\\1_DONNEES_LIVRAISON_2015\\' + \
'GEOFLA_2-1_SHP_LAMB93_FR-ED152\\DEPARTEMENT\\DEPARTEMENT.shp'
r = shapefile.Reader(shp)
shapes = r.shapes()
records = r.records()
for record, shape in zip(records,shapes):
    # the coordinates are in Lambert 93
geo_points = [lambert932WGPS(x,y) for x, y in shape.points]
lons = [_[0] for _ in geo_points]
lats = [_[1] for _ in geo_points]
data = np.array(m(lons, lats)).T
if len(shape.parts) == 1:
segs = [data,]
else:
segs = []
for i in range(1,len(shape.parts)):
index = shape.parts[i-1]
index2 = shape.parts[i]
segs.append(data[index:index2])
segs.append(data[index2:])
lines = LineCollection(segs,antialiaseds=(1,))
    # to change the colours, this is the place; use the records field
    # to set them according to the department name
lines.set_facecolors(cm.jet(np.random.rand(1)))
lines.set_edgecolors('k')
lines.set_linewidth(0.1)
    ax.add_collection(lines)<jupyter_output><empty_output><jupyter_text>### Interactive maps
The video [Spatial data and web mapping with Python](http://www.youtube.com/watch?v=qmgh14LUOjQ&feature=youtu.be) will tell you a bit more about cartography. When drawing a map with the streets of a city, you want to zoom in and out easily to get either an overview or a detailed view. In that case you need an external service such as the [Gmap API](https://developers.google.com/maps/?hl=FR), the [Bing Map API](http://www.microsoft.com/maps/choose-your-bing-maps-API.aspx), the [Yahoo Map API](https://developer.yahoo.com/maps/simple/V1/) or [OpenStreetMap](https://openstreetmap.fr/), which is an open-source alternative. In every case, be careful with the data you handle, since it goes through an external service. The article [Busy areas in Paris](http://www.xavierdupre.fr/blog/2013-09-26_nojs.html) is an example of using [OpenStreetMap](https://openstreetmap.fr/). What we are after, above all, is an [interactive graphic](#inter). Some modules let you use these services directly from a Python notebook. [smopy](https://github.com/rossant/smopy) creates a non-interactive map:<jupyter_code>import smopy
map = smopy.Map((42., -1., 55., 3.), z=5)
map.show_ipython()<jupyter_output><empty_output><jupyter_text>The [folium](https://github.com/wrobstory/folium) module inserts JavaScript into the notebook itself. Here is an example built with it: [Creating Interactive Election Maps Using folium and IPython Notebooks](http://blog.ouseful.info/2015/04/17/creating-interactive-election-maps-using-folium-and-ipython-notebooks/). It is meant to work as follows. First, an initialisation step:<jupyter_code>import folium<jupyter_output><empty_output><jupyter_text>And if that works (one day, maybe), the next step should be:<jupyter_code>map_osm = folium.Map(location=[48.85, 2.34])
map_osm<jupyter_output><empty_output><jupyter_text>So we take a shortcut and, while we are at it, add a triangle at the ENSAE location:<jupyter_code>import folium
map_osm = folium.Map(location=[48.85, 2.34])
from pyensae.notebook_helper import folium_html_map
map_osm.add_child(folium.RegularPolygonMarker(location=[48.824338, 2.302641], popup='ENSAE',
fill_color='#132b5e', radius=10))
from IPython.display import HTML
#HTML(folium_html_map(map_osm))
map_osm<jupyter_output><empty_output><jupyter_text>A search engine will quickly give you a few examples of maps built with this module:
[Folium Polygon Markers](http://bl.ocks.org/wrobstory/5609786), [Easy interactive maps with folium](https://ocefpaf.github.io/python4oceanographers/blog/2014/05/05/folium/), [Plotting shapefiles with cartopy and folium](https://ocefpaf.github.io/python4oceanographers/blog/2015/02/02/cartopy_folium_shapefile/).
Interactive graphics
This topic goes beyond the scope of this session. An interactive graphic reacts to events such as the mouse cursor passing over it, a click or a zoom. An overloaded chart is hard to read, which is why making it interactive is a way to convey more information without hurting readability. Here are a few scenarios:
- You want to represent one of the dimensions of the problem by animating the chart. This is frequent in 3D, where one of the axes is time; a 2D chart evolving over time is usually preferable.
- When there are too many curves to draw, the reader can toggle some of them on or off in order to compare them.
- You can let the reader change the scale (logarithmic, or re-base to 100 at different points).
- You want to give an overview and, at the same time, a finer level of detail when the reader asks for it.
These animations took off with the internet and the [javascript](http://fr.wikipedia.org/wiki/JavaScript) language. Designing an animated chart takes more time because you have to anticipate what must happen when the reader acts (mouse, key, ...). The Python modules that create them actually build JavaScript code that is then executed in a browser (or directly in a notebook like this one). The JavaScript library that changed the way they are designed is [d3.js](http://d3js.org/); many other libraries, such as [nvd3](http://nvd3.org/), are layers on top of it.
The [plotly](https://plot.ly/python/) module is interesting, but not all of its features are free and you need to create an account to use it. The [mpld3](http://mpld3.github.io/) module is interesting in that it converts a chart created with matplotlib into a JavaScript chart built with [d3.js](http://d3js.org). You first have to enable the notebook output (see [D3 Plugins: Truly Interactive Matplotlib In Your Browser](http://jakevdp.github.io/blog/2014/01/10/d3-plugins-truly-interactive/)).<jupyter_code>import mpld3
mpld3.enable_notebook()<jupyter_output><empty_output><jupyter_text>From then on, charts produced with matplotlib are rendered in JavaScript (if that does not work, do not hesitate to restart the kernel). Click on the chart to make it zoomable.<jupyter_code>import matplotlib.pyplot as plt
import numpy as np
fig, ax = plt.subplots(subplot_kw=dict(axisbg='#EEEEEE'))
ax.grid(color='white', linestyle='solid')
N = 50
scatter = ax.scatter(np.random.normal(size=N),
np.random.normal(size=N),
c=np.random.random(size=N),
s = 1000 * np.random.random(size=N),
alpha=0.3,
cmap=plt.cm.jet)
ax.set_title("D3 Scatter Plot", size=18)<jupyter_output>c:\python35_x64\lib\site-packages\matplotlib\cbook.py:137: MatplotlibDeprecationWarning: The axisbg attribute was deprecated in version 2.0. Use facecolor instead.
warnings.warn(message, mplDeprecation, stacklevel=1)
<jupyter_text>There is no must-have module yet. One that could become a reference is [bokeh](http://bokeh.pydata.org/). It does not use [d3.js](http://d3js.org/), but the principle is the same.<jupyter_code>import bokeh, bokeh.io as bio
bio.output_notebook()<jupyter_output><empty_output><jupyter_text>Once the module is initialised, the chart can be displayed.<jupyter_code>import bokeh.plotting as bplt
p = bplt.figure(title = "Exemple")
p.xaxis.axis_label = 'X'
p.yaxis.axis_label = 'Y'
p.circle([1,2,3,4],[4,5,6,5.5], fill_color="red", color="red", size=12)
bplt.show(p)
from pyquickhelper.helpgen import NbImage  # second image
NbImage("pngbokeh.png")  # for the conversion of the notebooks to HTML<jupyter_output><empty_output><jupyter_text>To save the chart as an HTML file:<jupyter_code>import os
bplt.output_file("example_bokeh.html")
bplt.save(p)
print([ _ for _ in os.listdir(".") if "html" in _ ] )<jupyter_output>INFO:bokeh.core.state:Session output file 'example_bokeh.html' already exists, will be overwritten.
| permissive | /_doc/notebooks/td1a/td1a_cenonce_session_12.ipynb | AlexisEidelman/ensae_teaching_cs | 26 |
<jupyter_start><jupyter_text># Lab: Titanic Survival Exploration with Decision Trees## Getting Started
In this lab, you will see how decision trees work by implementing a decision tree in sklearn.
We'll start by loading the dataset and displaying some of its rows.<jupyter_code># Import libraries necessary for this project
import numpy as np
import pandas as pd
from IPython.display import display # Allows the use of display() for DataFrames
# Pretty display for notebooks
%matplotlib inline
# Set a random seed
import random
random.seed(42)
# Load the dataset
in_file = '../datasets/titanic.csv'
full_data = pd.read_csv(in_file)
# Print the first few entries of the RMS Titanic data
display(full_data.head())<jupyter_output><empty_output><jupyter_text>Recall that these are the various features present for each passenger on the ship:
- **Survived**: Outcome of survival (0 = No; 1 = Yes)
- **Pclass**: Socio-economic class (1 = Upper class; 2 = Middle class; 3 = Lower class)
- **Name**: Name of passenger
- **Sex**: Sex of the passenger
- **Age**: Age of the passenger (Some entries contain `NaN`)
- **SibSp**: Number of siblings and spouses of the passenger aboard
- **Parch**: Number of parents and children of the passenger aboard
- **Ticket**: Ticket number of the passenger
- **Fare**: Fare paid by the passenger
- **Cabin** Cabin number of the passenger (Some entries contain `NaN`)
- **Embarked**: Port of embarkation of the passenger (C = Cherbourg; Q = Queenstown; S = Southampton)
Since we're interested in the outcome of survival for each passenger or crew member, we can remove the **Survived** feature from this dataset and store it as its own separate variable `outcomes`. We will use these outcomes as our prediction targets.
Run the code cell below to remove **Survived** as a feature of the dataset and store it in `outcomes`.<jupyter_code># Store the 'Survived' feature in a new variable and remove it from the dataset
outcomes = full_data['Survived']
features_raw = full_data.drop('Survived', axis = 1)
# Show the new dataset with 'Survived' removed
display(features_raw.head())<jupyter_output><empty_output><jupyter_text>The very same sample of the RMS Titanic data now shows the **Survived** feature removed from the DataFrame. Note that `data` (the passenger data) and `outcomes` (the outcomes of survival) are now *paired*. That means for any passenger `data.loc[i]`, they have the survival outcome `outcomes[i]`.
## Preprocessing the data
Now, let's do some data preprocessing. First, we'll remove the names of the passengers, and then one-hot encode the features.
One-hot encoding turns categorical data into numerical data: each option within a category becomes its own *new* binary column, set to 1 when the row has that option and 0 otherwise (e.g. Queenstown port or not Queenstown port). Check out [this article](https://hackernoon.com/what-is-one-hot-encoding-why-and-when-do-you-have-to-use-it-e3c6186d008f) before continuing.
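As a tiny illustration (not part of the original lab), here is what one-hot encoding does to a single made-up categorical column:<jupyter_code># toy example: the 'Embarked' column becomes one binary column per port
example = pd.DataFrame({'Embarked': ['S', 'C', 'Q', 'S']})
pd.get_dummies(example)<jupyter_output><empty_output><jupyter_text>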
**Question:** Why would it be a terrible idea to one-hot encode the data without removing the names?<jupyter_code># Removing the names
features_no_names = features_raw.drop(['Name'], axis=1)
# One-hot encoding
features = pd.get_dummies(features_no_names)<jupyter_output><empty_output><jupyter_text>And now we'll fill in any blanks with zeroes.<jupyter_code>features = features.fillna(0.0)
display(features.head())<jupyter_output><empty_output><jupyter_text>## (TODO) Training the model
Now we're ready to train a model in sklearn. First, let's split the data into training and testing sets. Then we'll train the model on the training set.<jupyter_code>from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(features, outcomes, test_size=0.2, random_state=42)
# Import the classifier from sklearn
from sklearn.tree import DecisionTreeClassifier
# TODO: Define the classifier, and fit it to the data
model = None<jupyter_output><empty_output><jupyter_text>## Testing the model
Now let's see how our model does by calculating the accuracy over both the training and the testing sets.<jupyter_code># Making predictions
y_train_pred = model.predict(X_train)
y_test_pred = model.predict(X_test)
# Calculate the accuracy
from sklearn.metrics import accuracy_score
train_accuracy = accuracy_score(y_train, y_train_pred)
test_accuracy = accuracy_score(y_test, y_test_pred)
print('The training accuracy is', train_accuracy)
print('The test accuracy is', test_accuracy)<jupyter_output><empty_output><jupyter_text># Exercise: Improving the model
Ok, high training accuracy and a lower testing accuracy. We may be overfitting a bit.
So now it's your turn to shine! Train a new model, and try to specify some parameters in order to improve the testing accuracy, such as:
- `max_depth`
- `min_samples_leaf`
- `min_samples_split`
You can use your intuition, trial and error, or even better, feel free to use Grid Search!
**Challenge:** Try to get to 85% accuracy on the testing set. If you'd like a hint, take a look at the solutions notebook next.<jupyter_code># TODO: Train the model
# TODO: Make predictions
# TODO: Calculate the accuracy<jupyter_output><empty_output>
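<jupyter_text>One possible sketch for the TODOs and the challenge above (not the official solution; exact scores will vary): a small grid search over the suggested parameters, reusing the `X_train`/`X_test` split from earlier.<jupyter_code># sketch: tune max_depth / min_samples_leaf / min_samples_split with a grid search
from sklearn.model_selection import GridSearchCV
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

param_grid = {'max_depth': [4, 6, 8, 10],
              'min_samples_leaf': [1, 2, 5, 10],
              'min_samples_split': [2, 5, 10]}
search = GridSearchCV(DecisionTreeClassifier(random_state=42), param_grid, cv=5)
search.fit(X_train, y_train)

best_model = search.best_estimator_
print('Best parameters:', search.best_params_)
print('Training accuracy:', accuracy_score(y_train, best_model.predict(X_train)))
print('Testing accuracy:', accuracy_score(y_test, best_model.predict(X_test)))<jupyter_output><empty_output>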
| no_license | /supervised-learning/titanic_survival_exploration.ipynb | AlphaSpectrum/ai | 7 |
<jupyter_start><jupyter_text># Simple multilabel classifier
We want to predict the movie genre from the title<jupyter_code>import numpy as np
import pandas as pd
from tqdm import tqdm_notebook
import string
import nltk
import matplotlib.pyplot as plt<jupyter_output><empty_output><jupyter_text>## Case study<jupyter_code>import pymongo
db_name = 'movie-dialogs'
collection = 'movies'
M = pymongo.MongoClient()[db_name][collection]
training_set, V, K = [], set(), set()
for movie in M.find({}):
words, classes = movie['title'].split(), movie['genres']
for w in words:
V.add(w)
for k in classes:
K.add(k)
    # note: this pairs each title only with the last genre listed for the movie
    training_set.append((words, k))
V = list(V) + ['UNKNOWN']
K = list(K)
training_set[:4]
word2idx = dict([(w, i) for i, w in enumerate(V)])
class2idx = dict([(k, i) for i, k in enumerate(K)])
def encode_title(tokens, word2idx, pad=5):
encoded = np.zeros(pad, dtype=int)
enc1 = np.array([word2idx.get(word, word2idx["UNKNOWN"]) for word in tokens])
length = min(pad, len(enc1))
encoded[:length] = enc1[:length]
return encoded, length
encode_title('this is a title'.split(), word2idx, pad=5)<jupyter_output><empty_output><jupyter_text>## Classifier<jupyter_code>import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
torch.manual_seed(42)
class LSTM(nn.Module) :
def __init__(self, vocab_size, embedding_dim, hidden_dim, target_dim) :
super().__init__()
self.embeddings = nn.Embedding(vocab_size, embedding_dim, padding_idx=0)
self.lstm = nn.LSTM(embedding_dim, hidden_dim, batch_first=True)
self.linear = nn.Linear(hidden_dim, target_dim)
self.dropout = nn.Dropout(0.2)
self.softmax = nn.LogSoftmax(dim=1)
def forward(self, x, l):
x = self.embeddings(x)
x = self.dropout(x)
lstm_out, (ht, ct) = self.lstm(x)
return self.softmax(self.linear(ht[-1]))
losses = []
loss_function = nn.NLLLoss()
model = LSTM(len(V), 100, 32, len(K))
optimizer = optim.SGD(model.parameters(), lr=0.001)
epochs = tqdm_notebook(list(range(10)))
for epoch in epochs:
total_loss = 0
for title, target in training_set:
title_idx, l = encode_title(title, word2idx)
t = torch.tensor(title_idx, dtype=torch.long)
model.zero_grad()
log_probs = model(t.view(1, -1), l)
loss = loss_function(log_probs, torch.tensor([class2idx[target]]))
loss.backward()
optimizer.step()
total_loss += loss.item()
losses.append(total_loss)
import matplotlib.pyplot as plt
fig, ax = plt.subplots(figsize=(12,3))
ax.plot(losses, marker='o')
plt.show()
e, l = encode_title('eternal sunshine of the spotless mind'.split(), word2idx)
prediction = model(torch.tensor(e).view(1, -1), l)
e
probs = np.exp(prediction.detach().numpy())[0]
for i, p in sorted(enumerate(probs), key=lambda x: -x[1]):
print(K[i], round(p, 2))<jupyter_output>drama 0.12
thriller 0.11
action 0.07
romance 0.06
crime 0.06
comedy 0.06
sci-fi 0.05
adventure 0.05
horror 0.04
mystery 0.04
fantasy 0.04
family 0.03
biography 0.03
history 0.02
film-noir 0.02
music 0.02
animation 0.02
sport 0.02
0.02
musical 0.02
documentary 0.02
short 0.02
war 0.02
western 0.02
adult 0.01
| no_license | /thematic-studies/language-models/L11-multilabel-classification.ipynb | antonelladamico17/inforet | 3 |
<jupyter_start><jupyter_text>## Imports<jupyter_code># you need requests and beautifulSoup
# import requests
# import urllib.request
from bs4 import BeautifulSoup
from selenium import webdriver
import os
import time
basedir = '/Users/danielben-zion/Dropbox/insight/teefies/scraping'
def ensure_dir(file_path):
directory = os.path.dirname(file_path)
if not os.path.exists(directory):
os.makedirs(directory)
def setOptions():
options = webdriver.FirefoxOptions()
options.add_argument('--disable-infobars')
options.add_argument('--disable-dev-shm-usage')
options.add_argument('--disable-extensions')
# options.add_argument('headless')
options.add_argument('--disable-gpu')
options.add_argument('--no-sandbox')
options.add_argument('--no-proxy-server')
return options
<jupyter_output><empty_output><jupyter_text># Some initial exploratory stuff. Feel free to skip all this<jupyter_code># start with the base page, from there get the url to "see all reviews"
url = 'https://www.chewy.com/royal-canin-veterinary-diet-urinary/dp/35160'
with webdriver.Firefox() as driver:
driver.get(url)
page = driver.page_source
file_ = open('sample-base-page-2.html', 'w')
file_.write(page)
file_.close()
file = open('sample-base-page-2.html','r')
soup = BeautifulSoup(file,'html')
more_revs = soup.find_all(class_='cw-btn cw-btn--default')
suffix = more_revs[0].get('href')
baseurl = f'chewy.com{suffix[0:-1]}'
pagenum = 1
print(f'{baseurl}{pagenum}')
pages = range(1,10)
namelinks = {'friskies-tasty-treasures-cheese': 104225,
'fancy-feast-poultry-beef-classic-pate': 103926,
'royal-canin-veterinary-diet-urinary': 35160,
'hills-prescription-diet-cd-multicare': 54783,
'blue-buffalo-healthy-gourmet-variety': 141486,
'fancy-feast-delights-cheddar-grilled': 103869,
'nutro-perfect-portions-grain-free': 155651,
}
with webdriver.Firefox() as driver:
for name in namelinks.keys():
for pagenum in pages:
filepath = os.path.join(basedir,f'html_pages/{name}/')
filepathname = f'{filepath}/page{pagenum}.html'
ensure_dir(filepath)
url = f'https://www.chewy.com/{name}/product-reviews/{namelinks[name]}?reviewSort=MOST_RELEVANT&reviewFilter=ALL_STARS&pageNumber={pagenum}'
# print(url)
# print(filepathname)
driver.get(url)
page = driver.page_source
file_ = open(filepathname, 'w')
file_.write(page)
file_.close()
time.sleep(20)
file = open('html_pages/friskies-tasty-treasures-cheese/page1.html','r')
soup = BeautifulSoup(file,'html')
for (i,review) in enumerate(soup.find_all("section", {'class': 'ugc-list__review'})):
print(f'{i}: {review.text.strip()}')
# soup.find_all("section", class_ = 'ugc-list__review')<jupyter_output>0: I make a fancy bed for my cats with a $40 basket and pillow from a fancy human furniture store. They go for the free cardboard box with an old towel thrown in it. I buy them cute toys. They go for a crumpled up piece of paper. I buy expensive canned food and Friskies. They go for Friskies. Sure, expensive food usually has better ingredients and makes ME feel better about feeding it to my cats. But will they eat it? That's a gamble. But Friskies, yeah, they'll gobble it all up.
1: I ordered this by mistake. I STRONGLY prefer pate because it plops out of the can with a few shakes. This isn't pate. The cats like it well enough, so in fairness I have to say that, for those who do not hate scraping stinky ooey-gooey cat food out of the can with a spatula or spoon, it's a nice product. For that reason, I gave it three stars instead of one. As far as I'm concerned, cat food in gravy is a pain and I can hardly wait to use it up. In the future, I'll read the text more carefull[...]<jupyter_text> Experimenting with navigating through the main page
This goes through the first 24 pages of cat food on Chewy sorted by popularity and downloads the page source for each. <jupyter_code>basedir = '/Users/danielben-zion/Dropbox/insight/teefies/data'
counter = 0
pagenums = range(1,25)
options = setOptions()
with webdriver.Firefox(options=options) as driver:
for pagenum in pagenums:
driver.get(f'https://www.chewy.com/s?rh=c%3A325%2Cc%3A387%2Cc%3A389&page={pagenum}&sort=popularity')
filepathname = f'{basedir}/html_pages/main-page/page{pagenum}.html'
with open(filepathname,'w') as file:
file.write(driver.page_source)
time.sleep(3)
<jupyter_output><empty_output><jupyter_text>### We now go through all of the page source files that we acquired in the previous cell and extract the links to all of the individual product pages. All in all there are several hundred of them (864 when the pickle is reloaded below). This cell collects the links and pickles them to links.pkl so we can iterate over them in a future cell.<jupyter_code>%cd '/Users/danielben-zion/Dropbox/insight/teefies/scraping/'
import glob
import pickle
links = {}
for f in glob.glob(f'{basedir}/html_pages/main-page/*.html'):
with open(f,'r') as file:
soup = BeautifulSoup(file,'html')
all_product_boxes = soup.find_all('article', {"class" : 'product-holder js-tracked-product cw-card cw-card-hover'.split()})
for li in all_product_boxes:
links[li['data-name']] = 'https://www.chewy.com'+li.a['href']
with open('links.pkl','wb') as file:
pickle.dump(links,file)<jupyter_output>/Users/danielben-zion/Dropbox/insight/teefies/scraping
<jupyter_text> This is just some random shit
various product information is available in the product box <jupyter_code>all_product_boxes = soup.find_all('article', {"class" : 'product-holder js-tracked-product cw-card cw-card-hover'.split()})
# for li in all_product_boxes:
# print(li['data-name'],",",li['data-price'])
# print('chewy.com'+li.a['href'])
driver = webdriver.Firefox()
driver.get('https://www.chewy.com/friskies-shreds-variety-pack-canned/dp/104243')
soup = BeautifulSoup(driver.page_source,'html')
suffixpath = soup.find('a',{'class' : 'cw-btn cw-btn--default',
'href': True})['href']
url = f'www.chewy.com{suffixpath}'
print(url)
print(basedir)
all_links = pickle.load( open('links.pkl','rb'))
print(len(all_links))
# for (key,val) in all_links.items():
# print(key.split(',')[0].replace(' ','-').replace('/','').replace('\'',''))
# print(val)
<jupyter_output>864
<jupyter_text>### This is where the heavy lifting happens. We go through each link that was collected previously, save the main product page source file, and the first 9 pages of reviews for that product. There is some string manipulation to put all those files into their appropriate folders, and some hand-engineered crawling through the review pages: first I build a review_url_stem, which is the review-page URL minus the page number, and then I just set the page number to 1 through 9.<jupyter_code># Now is a good time to activate some kind of IP switching software
pages = range(1,10)
counter = 0
options = setOptions()
with webdriver.Firefox(options=options) as driver:
for (product_id, product_main_link) in all_links.items():
# try:
name = product_id.split(',')[0].replace(' ','-').lower()
filepath = os.path.join(basedir,f'html_pages/individual-items/{name}/')
# print(filepath)
ensure_dir(filepath)
filepathname = f'{filepath}/main_page.html'
# if BeautifulSoup(open(filepathname,'r'),'html').find('html'):
# counter += 1
# continue
driver.get(str(product_main_link))
time.sleep(2)
with open(filepathname,'w') as file:
file.write(driver.page_source)
main_page_soup = BeautifulSoup(driver.page_source,'html')
review_url_init = main_page_soup.find('a',{'class' : 'cw-btn cw-btn--default',
'href': True})['href']
review_url_stem = review_url_init.replace('NEWEST','MOST_RELEVANT')[0:-1]
# print(review_url_stem)
for pagenum in pages:
filepathname = f'{filepath}/page{pagenum}.html'
# if BeautifulSoup(open(filepathname,'r'),'html').find('html'):
# continue
review_url = 'https://www.chewy.com'+review_url_stem+ str(pagenum)
# print(f'getting {review_url}')
driver.get(review_url)
time.sleep(2)
with open(filepathname,'w') as file:
file.write(driver.page_source)
# except:
# continue
print(counter)
# # print(url)
# # print(filepathname)
# driver.get(url)
# page = driver.page_source
# with open(filepathname, 'w') as file:
# file.write(page)
<jupyter_output>WARNING:urllib3.connectionpool:Retrying (Retry(total=2, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NewConnectionError('<urllib3.connection.HTTPConnection object at 0x1289900f0>: Failed to establish a new connection: [Errno 61] Connection refused',)': /session/ff801570-d817-074b-9a33-86c83f0a4deb
WARNING:urllib3.connectionpool:Retrying (Retry(total=1, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NewConnectionError('<urllib3.connection.HTTPConnection object at 0x128990278>: Failed to establish a new connection: [Errno 61] Connection refused',)': /session/ff801570-d817-074b-9a33-86c83f0a4deb
WARNING:urllib3.connectionpool:Retrying (Retry(total=0, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NewConnectionError('<urllib3.connection.HTTPConnection object at 0x128990518>: Failed to establish a new connection: [Errno 61] Connection refused',)': /session/ff801570-d817-074b-9a33-86c83[...]<jupyter_text>### How many products seem to be a variety pack?<jupyter_code>import pickle
exclude_words = ['variety','medley','multi']
all_links = pickle.load( open('links.pkl','rb'))
variety_counter = 0
for name in all_links.keys():
# print(name)
if any([name.lower().find(word) != -1 for word in exclude_words]):
variety_counter +=1
print(variety_counter)<jupyter_output>164
<jupyter_text>### Let's organize what we have so far into a dataframe.
What information do we want for each product? Most basic info would be name of product, oz per can, number of cans, price, review rating (useful?), and the review text. Make one table that contains information about cat foods. Make another one just for reviews. Then do an inner join on the product name at the end. Forget about price and stuff for now.#### REVIEWS CSV<jupyter_code>import glob
import pickle
import re
import csv
basedir = '/Users/danielben-zion/Dropbox/insight/teefies/scraping'
columns = ['product', 'review_author','rating','review_text','helpful_rank']
with open('/Users/danielben-zion/Dropbox/insight/teefies/scraping/CatfoodReviewsInfo.csv','w') as catfoodcsv:
writer = csv.writer(catfoodcsv)
writer.writerow(columns)
counter = 0
for (product_name, product_link) in all_links.items():
product_dir = os.path.join('.',product_name.split(',')[0].replace(' ','-').lower())
# main_page_soup = BeautifulSoup(open(f'./{product_dir}/main_page.html','r'),'html')
# try: # just make sure we actually have html data for this product
# price = main_page_soup.find('span',{'class' : 'ga-eec__price'}).text.strip()
# except:
# continue
for reviewpage in glob.glob(f'{basedir}/html_pages/individual-items/{product_dir}/page*.html'):
review_soup = BeautifulSoup(open(reviewpage,'r'),'html')
for review in review_soup.find_all("li", {'itemprop': 'review'}):
rating = review.find('meta',{'itemprop':'ratingValue'})['content']
review_text = review.find('span',{'class' : 'ugc-list__review__display'}).text
review_author = review.find('span',{'itemprop' : 'author'}).text
helpful = review.find('button',{'class' : 'cw-btn cw-btn--white cw-btn--sm ugc__like js-like'}).span.text
writer.writerow([product_name,review_author,rating,review_text,helpful])
# print(product_name)
# print(re.search(' (.*)-oz', product_name))
splitted = product_name.split(',')
# print(splitted)
nn = splitted[0].strip()
# opc = splitted[1].split('-')[0]
# ncans = splitted[2].split(' ')[-1]
# print(f'Full product name: {product_name}')
# print(f"Name: {nn}" )
# print(f'oz_per_can: {opc}')
# print(f'num_cans: {ncans}')
# print(f'price: {price}')
# row = [nn,opc,ncans,price]
# print(row)
# writer.writerow(row)
# counter += 1
# if counter>1:
# break
# for f in glob.glob('*.html'):
# with open(f,'r') as file:
# soup = BeautifulSoup(file,'html')
# all_product_boxes = soup.find_all('article', {"class" : 'product-holder js-tracked-product cw-card cw-card-hover'.split()})
# for li in all_product_boxes:
# links[li['data-name']] = 'https://www.chewy.com'+li.a['href']
# with open('links.pkl','wb') as file:
# pickle.dump(links,file)<jupyter_output>/Users/danielben-zion/Dropbox/insight/teefies/scraping/html_pages/individual-items
<jupyter_text>#### PRODUCT INFO CSV<jupyter_code>import glob
import pickle
import re
import csv
%cd '{basedir}/html_pages/individual-items/'
columns = ['product', 'price','oz_per_can','num_cans','price_per_oz']
with open('/Users/danielben-zion/Dropbox/insight/teefies/scraping/CatfoodProductInfo.csv','w') as catfoodcsv:
writer = csv.writer(catfoodcsv)
writer.writerow(columns)
counter = 0
for (product_name, product_link) in all_links.items():
product_dir = os.path.join('.',product_name.split(',')[0].replace(' ','-').lower())
main_page_soup = BeautifulSoup(open(f'./{product_dir}/main_page.html','r'),'html')
try: # just make sure we actually have html data for this product
price = main_page_soup.find('span',{'class' : 'ga-eec__price'}).text.strip()
except:
continue
splitted = product_name.split(',')
nn = splitted[0].strip()
oz_per_can = float(re.search('(\d.?\d?)-oz',product_name).group(1))
num_cans = float(re.search('case of (\d+)',product_name).group(1))
ppo = float(price[1:])/(oz_per_can*num_cans)
# print(f'Full product name: {product_name}')
# print(f"Name: {nn}" )
# print(f'oz_per_can: {opc}')
# print(f'num_cans: {ncans}')
# print(f'price: {price}')
row = [nn,oz_per_can,num_cans,price,ppo]
writer.writerow(row)
# counter += 1
# if counter>10:
# break
# for f in glob.glob('*.html'):
# with open(f,'r') as file:
# soup = BeautifulSoup(file,'html')
# all_product_boxes = soup.find_all('article', {"class" : 'product-holder js-tracked-product cw-card cw-card-hover'.split()})
# for li in all_product_boxes:
# links[li['data-name']] = 'https://www.chewy.com'+li.a['href']
# with open('links.pkl','wb') as file:
# pickle.dump(links,file)
test_page = BeautifulSoup(open('./9-lives-poultry-&-beef-favorites-variety-pack-canned-cat-food/page4.html','r'),'html')
review = test_page.find('li',{'itemprop' : 'review'})
review.find('meta',{'itemprop':'ratingValue'})['content']<jupyter_output><empty_output><jupyter_text>#### Check reviews csv<jupyter_code>import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
import matplotlib
font = {'family' : 'Helvetica Neue',
'weight' : 'normal',
'size' : 16}
matplotlib.rc('font', **font)
matplotlib.rc('text',usetex=True)
check_reviews = pd.read_csv('../data/CatfoodReviewsInfo.csv')
num_reviews = len(check_reviews)
unique_authors = len(check_reviews['review_author'].unique())
unique_products = len(check_reviews['product'].unique())
print(f'We have {num_reviews} reviews written by {unique_authors} unique authors across {unique_products} different products')
# fig = plt.figure()
print('Mean reviews written per author:',check_reviews.groupby('review_author')['rating'].agg('count').mean())
print('Max reviews written by one author:',check_reviews.groupby('review_author')['rating'].agg('count').max())
print("Inspecting top reviewers to determine how likely these numbers are due to username collision")
print( check_reviews.groupby('review_author')['rating'].agg('count').sort_values(ascending=False).head())
print("COLLABORATIVE FILTERING IS (probably) OFF!")
print('If I did the groupby correctly, there is at least one product with %i reviews written by the same review author name.'
% check_reviews.groupby(['product','review_author'])['rating'].agg('count').max())
print('Number of users with more than 5 reviews:',sum(check_reviews.groupby('review_author')['rating'].count() > 5))
# check_reviews.groupby(['product']).mean()
# len(check_reviews['product'].unique())
fig = plt.figure()
sns.distplot(check_reviews['rating'],kde=True,hist=False)
plt.title('User Ratings Tendencies')
plt.xlabel('Rating')
# check_reviews.sample(5)
plt.savefig('user-ratings-histogram.png',bbox_inches='tight')
pd.set_option('display.max_colwidth', -1) # more options can be specified also
check_reviews[check_reviews['review_author']=='zoey']<jupyter_output><empty_output><jupyter_text>### Check product info <jupyter_code>check_product = pd.read_csv('CatfoodProductInfo.csv')
check_product.sample(5)
# len(check_product['product'].unique())
# check.groupby(['product']).mean()
num_products = len(check_product['product'].unique())
num_nan_sizes = sum(check_product['oz_per_can'].isna())
print(f'For product info, we have information on {num_products} products. We are missing pricing info for {num_nan_sizes} of them.')
all_info_dataframe = check_reviews.join(check_product.set_index('product'), on='product',how='left')
pd.set_option('display.max_colwidth', 80) # more options can be specified also
all_info_dataframe.head(5)
fig = plt.figure()
sns.scatterplot(x='price_per_oz',y='rating',
data=all_info_dataframe[['product','rating','price_per_oz']].groupby('product').mean())
soup = BeautifulSoup(open('sample-base-page-1.html','r'),'html')
'Variety' in soup.find('div', {'id' : 'attributes'}).text<jupyter_output><empty_output>
| permissive | /teefies/scraping/chewy-scrape.ipynb | dbz10/teefies | 11 |
<jupyter_start><jupyter_text># Initial Exploration part 2<jupyter_code>import requests
%matplotlib inline
import json
import pandas as pd
import matplotlib.pyplot as plt
from pandas.io.json import json_normalize
import numpy as np
import matplotlib.pyplot as plt
# Penn Course Review API overview : https://github.com/pennlabs/pcr/blob/master/docs/api.md
#let's get the course history for Marketing 212
mktg212 = requests.get('http://api.penncoursereview.com/v1/coursehistories/MKTG-212/?token=public').json()
mktg212
departments = requests.get('http://api.penncoursereview.com/v1/depts/?token=public').json()
departments
json_normalize(departments['result']['values'])
econ = requests.get('http://api.penncoursereview.com/v1/depts/econ/reviews/?token=rPY7nzxkE0Dqjzlu9LUDX2mV30W0qo').json()
econ
json_normalize(econ['result']['values'])
## REVIEWS
# one way to call within the system
# comm_revs = requests.get('http://api.penncoursereview.com/v1/depts/comm/reviews/?token=rPY7nzxkE0Dqjzlu9LUDX2mV30W0qo').json()
#another way for our complex data is as follows:
#download the JSON to Mac and upload to data sub-folder in final project folder
with open("../data/comm_revs.json") as f:
df = json.load(f)
comm_revs_df = json_normalize(df['values'])
comm_revs_df
#now for political science
with open("../data/psci_revs.json") as f:
df = json.load(f)
psci_revs_df = json_normalize(df['values'])
psci_revs_df
#now for departments
with open("../data/departments.json") as f:
df = json.load(f)
departments_df = json_normalize(df['values'])
departments_df
#let's clean that up a bit by making a new version
departments_df2 = departments_df[['id','name']]
departments_df2
#rename the columns
departments_df2.columns = ['dptcode','dptname']
departments_df2.head()
#what are the columns we've extracted from comm_revs_df
comm_revs_df.columns
#we can get rid of the unnecessary columns for our analysis
comm_revs_df.head()
#use the drop function to get rid of unnecessary columns for our analysis:
comm_revs2 = comm_revs_df.drop(columns=['comments','id', 'instructor.id', 'instructor.name','instructor.path','path','section.aliases',
'section.id','section.path',])
comm_revs2.columns
comm_revs2
#rename columns to be more useful
comm_revs2.columns = ['instructor_first', 'instructor_last', 'num_reviewers','num_students',
'AmountLearned',
'CommAbility', 'CourseQuality', 'Difficulty',
'InstructorAccess', 'InstructorQuality',
'ReadingsValue', 'RecommendMajor',
'RecommendNonMajor', 'StimulateInterest',
'TAQuality', 'WorkRequired', 'CourseTitle',
'CourseCode', 'Section', 'Semester']
comm_revs2.head()
comm_revs2.shape #we have 760 courses and 20 category columns
#rearrange the columns' order to be more convenient to work with.
#first get a string list of the column names to then rearrange
cols = list(comm_revs2.columns.values)
cols
#establish new column ordering
comm_revs2 = comm_revs2[['CourseCode','Section','CourseTitle','Semester','instructor_first',
'instructor_last','CourseQuality','InstructorQuality','Difficulty','AmountLearned','WorkRequired',
'StimulateInterest','InstructorAccess','CommAbility','ReadingsValue','TAQuality','RecommendMajor',
'RecommendNonMajor','num_reviewers','num_students']]
comm_revs2.head()
#what are all the comm courses covered in this df?
comm_revs2['CourseTitle'].unique()
#course codes might be easier
comm_class_list = sorted(comm_revs2['CourseCode'].unique())
comm_class_list
#note: we do have some cross-listed courses here
#two ways to get professors list
#method 1
sorted(comm_revs_df['instructor.id'].unique())
#method 2 will just give last names:
sorted(comm_revs2['instructor_last'].unique())
#let's say we want to see all reviews for Kathleen Jamieson
#but sorted in chronological order from most recent to oldest
comm_revs2[comm_revs2['instructor_last']=='JAMIESON'].sort_values('Semester', ascending=False)
#let's say we are only interested in Intro to Political Comm
comm_revs2[comm_revs2['CourseCode'].str.contains("COMM-226")].sort_values('Semester',
ascending=False)
#Show only the summer courses
comm_summer = comm_revs2[comm_revs2['Semester'].str.contains("B")]
comm_summer
#How big are summer classes?
comm_summer['num_students'].describe()
#mean class size is 13, min was 3 and max was 26
#What was the best and worst comm class by course quality?
#Worst comm class:
comm_revs2[comm_revs2['CourseQuality']==comm_revs2['CourseQuality'].min()]
#Best comm class:
comm_revs2[comm_revs2['CourseQuality']==comm_revs2['CourseQuality'].max()]
#we have a 19-way tie for best comm class b/c they maxed out at 4.00 quality
# we need to convert most of the columns to integers instead of objects so we can do stats
comm_revs2[['CourseQuality', 'InstructorQuality', 'Difficulty',
'AmountLearned', 'WorkRequired', 'StimulateInterest',
'InstructorAccess', 'CommAbility', 'ReadingsValue', 'TAQuality',
'RecommendMajor', 'RecommendNonMajor', 'num_reviewers',
'num_students']] = comm_revs2[['CourseQuality', 'InstructorQuality', 'Difficulty',
'AmountLearned', 'WorkRequired', 'StimulateInterest',
'InstructorAccess', 'CommAbility', 'ReadingsValue', 'TAQuality',
'RecommendMajor', 'RecommendNonMajor', 'num_reviewers', 'num_students']].apply(pd.to_numeric)
comm_revs2.describe()
#which COMM professor has highest and lowest average instructor quality rating?
comm_instrQual = comm_revs2.groupby('instructor_last')['InstructorQuality'].mean()
comm_instrQual.sort_values(ascending=False)
#highest is Tsapatsaris, Johnston, Erdener, Henrichsen (yay Jenn!)
#lowest is Simons
#note: smaller sample size means easier to get a very high or very low mean
#let's look at the mean everything for comm classes
comm_means = comm_revs2.mean()
comm_means
#how did these things change over time?
comm_means_years = comm_revs2.groupby('Semester').mean()
comm_means_years
comm_revs2.columns
#Compare Course Quality and Instructor Quality
comm_means_years[['CourseQuality','InstructorQuality']].plot(kind='bar', figsize=(16,6))
#another way to compare CourseQuality and InstructorQuality
comm_means_years[['CourseQuality','InstructorQuality','AmountLearned']].plot()
#clear that instructor quality is correlated with course quality but is usually higher
#why the giant spikes? summer classes with smaller sample sizes is a likely cause
#can we do that without the summer classes
comm_nosummer = comm_revs2[comm_revs2['Semester'].str.contains("A|C")]
comm_nosummer_means_years = comm_nosummer.groupby('Semester').mean()
comm_nosummer_means_years
ax = comm_nosummer_means_years[['CourseQuality']].plot(figsize=(20,10))
ax.set_ylim(2,3.6)
plt.xticks(np.arange(len(comm_nosummer_means_years.index)), comm_nosummer_means_years.index)
plt.xticks(rotation=90)
#add plotly express
ax2 = comm_nosummer_means_years[['CourseQuality','InstructorQuality','AmountLearned']].plot()
ax2.set_ylim(2,3.6)
#interesting - course quality was higher than amount learned until 2012 or so then it changed
#now people rate amount learned higher than course quality
#in any case we see that these three are highly correlated
#the correlations between categories for non-summer classes:
comm_nosummer.corr()
#how many comm classes have been offered each semester over the years?
#have to change to make it sort by index so we don't just get increasing number
comm_courses_years = comm_nosummer.Semester.value_counts().sort_index()
comm_courses_years
comm_courses_years.columns=['Semester','num_courses']
#plot number of courses per semester
comm_courses_years.plot(kind='bar')
comm_nosummer.tail(25)
#how many students are enrolled in comm classes over the years
comm_numstudents = comm_nosummer.groupby('Semester')[['num_students']].sum()
comm_numstudents.reset_index()
comm_numstudents.plot(kind='bar')
#it looks like typically more students take COMM classes in the fall
#until recently when that switched to more common in the spring
#maybe more courses offered in spring?
comm_summer = comm_revs2[comm_revs2['Semester'].str.contains("B")]
#summer class analysis
comm_summer_mean = comm_summer.mean()
comm_summer_mean
#compare summer to non summer
comm_nosummer_mean = comm_nosummer.mean()
comm_nosummer_mean
#merge the two but drop number of reviewers and students because we already know that's smaller in summer
comm_concat_mean = pd.concat([comm_nosummer_mean, comm_summer_mean], axis=1)
comm_concat_mean.columns = ['no_summer', 'summer']
comm_concat_mean = comm_concat_mean.drop(['num_reviewers','num_students'])
comm_concat_mean.plot(kind='bar', figsize=(20,10))
#summer courses are on average "better," "easier," stimulate more interest
#potentially because class sections are smaller
comm_revs2.head()
#how many classes have the comm professors taught
comm_revs2['instructor_last'].value_counts()
#course quality each semester
comm_revs2.groupby('instructor_last')['InstructorQuality'].agg(['mean','min','max'])
#association plot
corrplot = comm_revs2.plot.scatter(x='AmountLearned', y='InstructorAccess')
comm_revs2.corr('pearson')
#to split a course code to get level
comm_revs2['level']= comm_revs2['CourseCode'].apply(lambda v: v.split("-")[1])<jupyter_output><empty_output>
| no_license | /example_projects/coreymberman/data_analysis/initial_exploration_2.ipynb | lshalit/comm318_fall2020 | 1 |
<jupyter_start><jupyter_text># Classifying the topics of Vedomosti articles
In the previous notebook we downloaded an archive of Vedomosti news across several topics. Each article's topic is embedded in its URL. Let's try to determine it from the article text.<jupyter_code>import numpy as np
import pandas as pd
from matplotlib import pyplot as plt
%matplotlib inline
import seaborn as sns<jupyter_output><empty_output><jupyter_text>### The data<jupyter_code>df = pd.read_pickle('data/vedomosti_archive.pkl')
df.head()
df.topic.value_counts()<jupyter_output><empty_output><jupyter_text>### Let's keep a binary classification task: politics or finance?<jupyter_code>df = df[(df.topic=='politics')|(df.topic=='finance')]
df.index = range(len(df))
df["topic"] = df.topic.apply(lambda x: 1 if x=='politics' else 0)
df.topic.value_counts()<jupyter_output><empty_output><jupyter_text>Let's drop every token that is not a word, replacing all numbers with "num".<jupyter_code>df["normalized_words"] = df.normalized.apply(lambda x: ' '.join([w if w.isalpha() else 'num' for w in x.split() if w.isalpha() or w.isdigit()]))<jupyter_output><empty_output><jupyter_text>### Vectorize the texts
Here we jump ahead a little and vectorize the texts with a [bag of words](https://en.wikipedia.org/wiki/Bag-of-words_model) representation.<jupyter_code>from sklearn.feature_extraction.text import CountVectorizer
vectorizer = CountVectorizer(max_features=5000)
%%time
X = vectorizer.fit_transform(df.normalized_words).toarray()
y = df.topic
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42, test_size=0.2)
y_train.value_counts()<jupyter_output><empty_output><jupyter_text>### Train a plain decision tree
**Task:** using 10-fold cross-validation with the F1 score as the quality metric, find the optimal tree depth in the range 2-15
*you can reuse the code from notebook 2, but don't forget to tune the parameters*
### Make a prediction
Cross-validation helped us choose the tree depth, so let's train the tree with that knowledge.
**Task:** train the tree and predict both the classes and the class-membership probabilities
### Evaluate the quality
**Task:** evaluate the quality of the resulting solution with at least 3 metrics
### Look at the decision tree
http://www.webgraphviz.com<jupyter_code>from sklearn.tree import export_graphviz
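# Added sketch for the tasks above (not the original author's solution): pick max_depth by
# 10-fold cross-validation on F1, train the tree as `clf`, then predict classes and
# probabilities and report a few metrics. Parameter choices are illustrative.
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_val_score
from sklearn.metrics import accuracy_score, f1_score, roc_auc_score

depth_scores = {d: cross_val_score(DecisionTreeClassifier(max_depth=d, random_state=42),
                                   X_train, y_train, cv=10, scoring='f1').mean()
                for d in range(2, 16)}
best_depth = max(depth_scores, key=depth_scores.get)
clf = DecisionTreeClassifier(max_depth=best_depth, random_state=42).fit(X_train, y_train)
y_pred = clf.predict(X_test)
y_proba = clf.predict_proba(X_test)[:, 1]
print(best_depth, accuracy_score(y_test, y_pred), f1_score(y_test, y_pred), roc_auc_score(y_test, y_proba))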
def get_tree_dot_view(clf, feature_names=None, class_names=None):
print(export_graphviz(clf, out_file=None, filled=True, feature_names=feature_names, class_names=class_names))
get_tree_dot_view(clf, vectorizer.get_feature_names(), ['finance','politics'])<jupyter_output><empty_output><jupyter_text>### Which words are the most important?<jupyter_code>def get_top_indexes(s):
return sorted(range(len(s)), key=lambda k: s[k], reverse=True)
feature_names = vectorizer.get_feature_names()
top_indexes = get_top_indexes(clf.feature_importances_)
top_indexes = top_indexes[:10]
top_indexes.reverse()
top_importances = [clf.feature_importances_[i] for i in top_indexes]
top_words = [feature_names[i] for i in top_indexes]
plt.barh(np.arange(len(top_importances)), top_importances)
plt.yticks(np.arange(len(top_words)), top_words)
''<jupyter_output><empty_output>
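<jupyter_text>A hedged usage example (my addition): how the fitted vectorizer and the tree trained for the task above could score a new text. The sample string is purely illustrative.<jupyter_code>sample = 'банк кредит ставка рубль доллар'
clf.predict(vectorizer.transform([sample]).toarray()), clf.predict_proba(vectorizer.transform([sample]).toarray())<jupyter_output><empty_output>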
|
no_license
|
/ml_block/2_trees/.ipynb_checkpoints/5 vedomosti classifying-checkpoint.ipynb
|
viktoriasin/n_homework
| 7 |
<jupyter_start><jupyter_text># Air Quality Dataset (Time-Series) Prediction
Multivariate time series forecasting – Vector Auto Regression (VAR)
https://archive.ics.uci.edu/ml/datasets/Air+Quality#
The dataset contains 9358 instances of hourly averaged responses from an array of 5 metal oxide chemical sensors embedded in an Air Quality Chemical Multisensor Device. The device was located on the field in a significantly polluted area, at road level,within an Italian city. Data were recorded from March 2004 to February 2005 (one year)representing the longest freely available recordings of on field deployed air quality chemical sensor devices responses. Ground Truth hourly averaged concentrations for CO, Non Metanic Hydrocarbons, Benzene, Total Nitrogen Oxides (NOx) and Nitrogen Dioxide (NO2) and were provided by a co-located reference certified analyzer. Evidences of cross-sensitivities as well as both concept and sensor drifts are present as described in De Vito et al., Sens. And Act. B, Vol. 129,2,2008 (citation required) eventually affecting sensors concentration estimation capabilities. Missing values are tagged with -200 value.
This dataset can be used exclusively for research purposes. Commercial purposes are fully excluded.
Attribute Information:
0 Date (DD/MM/YYYY)
1 Time (HH.MM.SS)
2 True hourly averaged concentration CO in mg/m^3 (reference analyzer)
3 PT08.S1 (tin oxide) hourly averaged sensor response (nominally CO targeted)
4 True hourly averaged overall Non Metanic HydroCarbons concentration in microg/m^3 (reference analyzer)
5 True hourly averaged Benzene concentration in microg/m^3 (reference analyzer)
6 PT08.S2 (titania) hourly averaged sensor response (nominally NMHC targeted)
7 True hourly averaged NOx concentration in ppb (reference analyzer)
8 PT08.S3 (tungsten oxide) hourly averaged sensor response (nominally NOx targeted)
9 True hourly averaged NO2 concentration in microg/m^3 (reference analyzer)
10 PT08.S4 (tungsten oxide) hourly averaged sensor response (nominally NO2 targeted)
11 PT08.S5 (indium oxide) hourly averaged sensor response (nominally O3 targeted)
12 Temperature in °C
13 Relative Humidity (%)
14 AH Absolute Humidity <jupyter_code>#import libraries
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.pylab import rcParams
import seaborn as sns
rcParams['figure.figsize']=14,12
%matplotlib inline
df = pd.read_csv("/home/preeti/Desktop/AirQualityUCI.csv", sep =';',decimal=',')
sns.heatmap(df.isnull(),yticklabels=False,cbar=False,cmap='viridis')<jupyter_output><empty_output><jupyter_text>We see that the last two columns are blank, so we will eliminate them.<jupyter_code>df.info()
df.describe()
df.columns
df.Date.isnull().values.any()
df.dropna(axis=0, how ='all',inplace=True)
df.dropna(axis=1,inplace=True)
df.plot.box()
df.apply(lambda x : x == -200).sum()<jupyter_output><empty_output><jupyter_text>The dataset description states that missing values have been filled with -200, so we need to replace them with some relevant number. But since 8443 of the 9357 values in NMHC(GT) are -200, let's drop that column.<jupyter_code>df.drop(['NMHC(GT)'],axis=1, inplace= True)
df.replace(to_replace= -200, value= np.NaN, inplace= True)
df['Date']=pd.to_datetime(df.Date, format='%d/%m/%Y') #format date column
#df.set_index('Date', inplace=True)
df
df['month'] = df['Date'].dt.month
#df.set_index('Date', inplace=True)#Create month column (Run once)
df.head()
df['Time'] = pd.to_datetime(df['Time'],format= '%H.%M.%S').dt.hour
type(df['Time'][0])
df['CO(GT)']=df['CO(GT)'].fillna(df.groupby(['month','Time'])['CO(GT)'].transform('mean'))
df['NOx(GT)']=df['NOx(GT)'].fillna(df.groupby(['month','Time'])['NOx(GT)'].transform('mean'))
df['NO2(GT)']=df['NO2(GT)'].fillna(df.groupby(['month','Time'])['NO2(GT)'].transform('mean'))
df['PT08.S1(CO)']=df['PT08.S1(CO)'].fillna(df.groupby(['month','Time'])['PT08.S1(CO)'].transform('mean'))
df['C6H6(GT)']=df['C6H6(GT)'].fillna(df.groupby(['month','Time'])['C6H6(GT)'].transform('mean'))
df['PT08.S2(NMHC)']=df['PT08.S2(NMHC)'].fillna(df.groupby(['month','Time'])['PT08.S2(NMHC)'].transform('mean'))
df['PT08.S3(NOx)']=df['PT08.S3(NOx)'].fillna(df.groupby(['month','Time'])['PT08.S3(NOx)'].transform('mean'))
df['PT08.S4(NO2)']=df['PT08.S4(NO2)'].fillna(df.groupby(['month','Time'])['PT08.S4(NO2)'].transform('mean'))
df['PT08.S5(O3)']=df['PT08.S5(O3)'].fillna(df.groupby(['month','Time'])['PT08.S5(O3)'].transform('mean'))
df['T']=df['T'].fillna(df.groupby(['month','Time'])['T'].transform('mean'))
df['RH']=df['RH'].fillna(df.groupby(['month','Time'])['RH'].transform('mean'))
df['AH']=df['AH'].fillna(df.groupby(['month','Time'])['AH'].transform('mean'))
df.isnull().sum()
df['CO(GT)']=df['CO(GT)'].fillna(df.groupby(['Time'])['CO(GT)'].transform('mean'))
df['NOx(GT)']=df['NOx(GT)'].fillna(df.groupby(['Time'])['NOx(GT)'].transform('mean'))
df['NO2(GT)']=df['NO2(GT)'].fillna(df.groupby(['Time'])['NO2(GT)'].transform('mean'))
df.isnull().sum()
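#added alternative (equivalent, just more compact): the same month/hour group-mean imputation as a loop
#the excluded columns below are an assumption -- adjust the list if the frame differs
for c in [col for col in df.columns if col not in ['Date', 'Time', 'month']]:
    df[c] = df[c].fillna(df.groupby(['month', 'Time'])[c].transform('mean'))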
#Use heatmap to see corelation between variables
sns.heatmap(df.corr(),annot=True,cmap='viridis')
plt.title('Heatmap of co-relation between variables',fontsize=16)
plt.show()<jupyter_output><empty_output><jupyter_text># Exploratory Data Analysis<jupyter_code>sns.set_style('whitegrid')
eda_data = df.drop(['Time','RH','AH','T'], axis=1)
sns.pairplot(eda_data)
df.set_index('Date', inplace=True)
df.drop(['Time','RH','AH','T'], axis=1).resample('M').mean().plot(figsize = (20,8))
plt.legend(loc=1)
plt.xlabel('Month')
plt.ylabel('All Toxic Gases in the Air')
plt.title("All Toxic Gases' Frequency by Month");
<jupyter_output><empty_output><jupyter_text>The graph above shows all the toxic gases present in the air. The brown line shows nitrogen oxides (NOx) and the yellow line shows NO2, which is part of NOx. Two of the most toxicologically significant compounds are nitric oxide (NO) and nitrogen dioxide (NO2). Nitrogen oxides (NOx) are one of the most dangerous forms of air pollution.<jupyter_code>df['NOx(GT)'].resample('M').mean().plot(kind='bar', figsize=(18,6))
plt.xlabel('Month')
plt.ylabel('Total Nitrogen Oxides (NOx) in ppb') # Parts per billion (ppb)
plt.title("Mean Total Nitrogen Oxides (NOx) Level by Month")
plt.figure(figsize=(20,6))
sns.barplot(x='Time',y='NOx(GT)',data=df, ci=False)
plt.xlabel('Hours')
plt.ylabel('Total Nitrogen Oxides in ppb') # Parts per billion (ppb)
plt.title("Mean Total Nitrogen Oxides (NOx) Frequency During Days")
df.plot(x='NO2(GT)',y='NOx(GT)', kind='scatter', figsize = (10,6), alpha=0.3)
plt.xlabel('Level of NO2')
plt.ylabel('Level of NOx in ppb') # Parts per billion (ppb)
plt.title("Mean Total of NOx Frequency During Days")
plt.tight_layout();
plt.figure(figsize=(10,8))
sns.heatmap(df.corr(), annot=True, linewidths=.20)<jupyter_output><empty_output><jupyter_text># Train, Test ,Model using Linear Regression<jupyter_code>X = df.drop(['NOx(GT)','T','Time'], axis=1)
y= df['NOx(GT)']
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=101)
from sklearn.linear_model import LinearRegression
lm = LinearRegression()
lm.fit(X_train, y_train)
print(lm.intercept_)
coeff_data = pd.DataFrame(lm.coef_, index=X.columns, columns=['Coefficient'])
coeff_data<jupyter_output><empty_output><jupyter_text># Prediction<jupyter_code>prediction = lm.predict(X_test)
linear_regression_score = lm.score(X_test, y_test)
linear_regression_score<jupyter_output><empty_output><jupyter_text># Residual Histogram<jupyter_code>sns.distplot((y_test-prediction), bins=70, color="purple")
from sklearn import metrics
print('MAE:',metrics.mean_absolute_error(y_test, prediction))
print('MSE:',metrics.mean_squared_error(y_test, prediction))
print('RMSE:',np.sqrt(metrics.mean_squared_error(y_test, prediction)))
coeff_data<jupyter_output><empty_output>
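<jupyter_text>The title mentions Vector Auto Regression (VAR), while the model above is a plain linear regression. Below is only a minimal hedged sketch of what a VAR forecast on a few of these series could look like (my addition; the column choice, daily resampling and lag order are illustrative, not tuned).<jupyter_code>#hedged sketch: small VAR on daily means of a few pollutant series (df is indexed by Date above)
from statsmodels.tsa.api import VAR

var_data = df[['CO(GT)', 'NOx(GT)', 'NO2(GT)']].resample('D').mean().dropna()
var_fit = VAR(var_data).fit(maxlags=7)
forecast = var_fit.forecast(var_data.values[-var_fit.k_ar:], steps=5)  #5 days ahead
print(forecast)<jupyter_output><empty_output>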
|
no_license
|
/Air Quality Dataset( Time-series) Prediction.(1).ipynb
|
kumarpreetilata/Air-Quality-Prediction-Time-Series-
| 8 |
<jupyter_start><jupyter_text>## Name: Kartika Haritami
## NIM: 09011281621053
## Class: SK 7 B
Reference journal: http://seminar.ilkom.unsri.ac.id/index.php/ars/article/view/80
## IMPLEMENTATION OF THE SUGENO FUZZY METHOD FOR DETERMINING THE DIRECTION OF MOTION OF AN OBSTACLE-AVOIDING ROBOT
In this assignment there are 3 distance sensors used as input variables:
1. the left distance sensor as the variable Kiri
2. the middle distance sensor as the variable Tengah
3. the right distance sensor as the variable Kanan
The three sensors are mounted at 60° for the right sensor, 90° for the middle sensor and 120° for the left sensor, and are used to detect objects/obstacles at distances from 0 to 300 centimeters.
Each input range is divided into 3 fuzzy sets: Near (Poor), Medium (Average) and Far (Good). The output is an angle that determines the robot's direction of motion:
1. a 30° angle to turn right
2. a 60° angle to turn slightly right
3. a 90° angle to go straight
4. a 120° angle to turn slightly left
5. a 150° angle to turn left
The membership functions for the input and output can be represented as follows: rising curve<jupyter_code>from IPython.display import Image
Image(filename='up.jpg') <jupyter_output><empty_output><jupyter_text>Falling curve<jupyter_code>from IPython.display import Image
Image(filename='Down.jpg') <jupyter_output><empty_output><jupyter_text>Triangular curve<jupyter_code>from IPython.display import Image
Image(filename='Tri.jpg') <jupyter_output><empty_output><jupyter_text>The membership functions for the three input variables are represented by the membership sets below:<jupyter_code>from IPython.display import Image
Image(filename='Himpunan.jpg') <jupyter_output><empty_output><jupyter_text>Membership functions for the output:<jupyter_code>from IPython.display import Image
Image(filename='Output.jpg') <jupyter_output><empty_output><jupyter_text>**Below is the fuzzy program for the obstacle-avoiding robot system**<jupyter_code>import numpy as np
import skfuzzy as fuzz
import skfuzzy.control as ctrl
import matplotlib.pyplot as plt
k = np.arange(100, 300, 1)
t = np.arange(100, 300, 1)
n = np.arange(100, 300, 1)
o = np.arange(30, 150, 1)
Kiri= ctrl.Antecedent(k,'VKiri')
Tengah= ctrl.Antecedent(t,'VTengah')
Kanan= ctrl.Antecedent(n,'VKanan')
Out= ctrl.Consequent(o,'Output')<jupyter_output><empty_output><jupyter_text>> Here Kiri, Tengah and Kanan are the input variables and Out is the output variable<jupyter_code>Kiri ['poor'] = fuzz.trimf(Kiri.universe, [0, 100, 200])
Kiri ['average'] = fuzz.trimf(Kiri.universe, [100, 200, 300])
Kiri ['good'] = fuzz.trimf(Kiri.universe, [200, 300, 300])
Tengah ['poor'] = fuzz.trimf(Tengah.universe, [0, 100, 200])
Tengah ['average'] = fuzz.trimf(Tengah.universe, [100, 200, 300])
Tengah ['good'] = fuzz.trimf(Tengah.universe, [200, 300, 300])
Kanan ['poor'] = fuzz.trimf(Kanan.universe, [0, 100, 200])
Kanan ['average'] = fuzz.trimf(Kanan.universe, [100, 200, 300])
Kanan ['good'] = fuzz.trimf(Kanan.universe, [200, 300, 300])
Out ['Kanan'] = fuzz.trimf(Out.universe, [0, 30, 60])
Out ['KananSedang'] = fuzz.trimf(Out.universe, [30, 60, 90])
Out ['Lurus'] = fuzz.trimf(Out.universe, [60, 90, 120])
Out ['KiriSedang'] = fuzz.trimf(Out.universe, [90, 120, 150])
Out ['Kiri'] = fuzz.trimf(Out.universe, [120, 150, 150])
k_lo = fuzz.trimf(k, [0, 100, 200])
k_md = fuzz.trimf(k, [100, 200, 300])
k_hi = fuzz.trimf(k, [200, 300, 300])
t_lo = fuzz.trimf(t, [0, 100, 200])
t_md = fuzz.trimf(t, [100, 200, 300])
t_hi = fuzz.trimf(t, [200, 300, 300])
n_lo = fuzz.trimf(n, [0, 100, 200])
n_md = fuzz.trimf(n, [100, 200, 300])
n_hi = fuzz.trimf(n, [200, 300, 300])
o_lo = fuzz.trimf(o, [0, 30, 60])
o_mlo = fuzz.trimf(o,[30, 60, 90])
o_md = fuzz.trimf(o, [60, 90, 120])
o_mhi = fuzz.trimf(o, [90, 120, 150])
o_hi = fuzz.trimf(o, [120, 150, 150])<jupyter_output><empty_output><jupyter_text>> The membership function step: each input is divided into 3 fuzzy sets, and so is the output
> And below are the plots of the membership functions defined above <jupyter_code>fig, (ax0, ax1,ax2) = plt.subplots(nrows=3, figsize=(9, 5))
ax0.plot(k, k_lo, 'b', linewidth=1.5, label='Poor')
ax1.plot(k, k_md, 'y', linewidth=1.5, label='Average')
ax2.plot(k, k_hi, 'g', linewidth=1.5, label='Good')
ax0.set_title('VKiri Poor')
ax1.set_title('VKiri Average')
ax2.set_title('VKiri Good')
Kiri.view()
fig, (ax3, ax4,ax5) = plt.subplots(nrows=3, figsize=(9, 5))
ax3.plot(t, t_lo, 'b', linewidth=1.5, label='Poor')
ax4.plot(t, t_md, 'y', linewidth=1.5, label='Average')
ax5.plot(t, t_hi, 'g', linewidth=1.5, label='Good')
ax3.set_title('VTengah Poor')
ax4.set_title('VTengah Average')
ax5.set_title('VTengah Good')
Tengah.view()
fig, (ax6, ax7,ax8) = plt.subplots(nrows=3, figsize=(9, 5))
ax6.plot(n, n_lo, 'b', linewidth=1.5, label='Poor')
ax7.plot(n, n_md, 'y', linewidth=1.5, label='Average')
ax8.plot(n, n_hi, 'g', linewidth=1.5, label='Good')
ax6.set_title('VKanan Poor')  #fixed: the VKanan titles belong on ax6-ax8 (the axes plotted above), not ax3-ax5
ax7.set_title('VKanan Average')
ax8.set_title('VKanan Good')
Kanan.view()
fig, (ax9,ax10,ax11) = plt.subplots(nrows=3, figsize=(9, 5))
ax9.plot(o, o_lo, 'b', linewidth=1.5, label='Kanan')
ax10.plot(o, o_mlo, 'y', linewidth=1.5, label='Kanan Sedang')
ax11.plot(o, o_md, 'g', linewidth=1.5, label='Lurus')
ax9.set_title('VOut Kanan')
ax10.set_title('VOut Kanan Sedang')
ax11.set_title('VOut Lurus')
fig, (ax12, ax13) = plt.subplots(nrows=2, figsize=(9, 5))
ax12.plot(o, o_mhi, 'r', linewidth=1.5, label='Kiri Sedang')
ax13.plot(o, o_hi, 'm', linewidth=1.5, label='Kiri')
ax12.set_title('VOut Kiri Sedang')
ax13.set_title('VOut Kiri')
Out.view()
<jupyter_output>C:\Anaconda3\lib\site-packages\matplotlib\figure.py:448: UserWarning: Matplotlib is currently using module://ipykernel.pylab.backend_inline, which is a non-GUI backend, so cannot show the figure.
% get_backend())
<jupyter_text>Below is the table for the rule base
| No | Kiri | Tengah | Kanan | Output |
|----|--------|--------|--------|--------------|
| 1 | Near | Near | Near | Right |
| 2 | Near | Near | Medium | Right |
| 3 | Near | Near | Far | Right |
| 4 | Near | Medium | Near | Right |
| 5 | Near | Medium | Medium | Right |
| 6 | Near | Medium | Far | Slight right |
| 7 | Near | Far | Near | Right |
| 8 | Near | Far | Medium | Right |
| 9 | Near | Far | Far | Slight right |
| 10 | Medium | Near | Near | Left |
| 11 | Medium | Near | Medium | Right |
| 12 | Medium | Near | Far | Slight right |
| 13 | Medium | Medium | Near | Left |
| 14 | Medium | Medium | Medium | Right |
| 15 | Medium | Medium | Far | Right |
| 16 | Medium | Far | Near | Left |
| 17 | Medium | Far | Medium | Straight |
| 18 | Medium | Far | Far | Straight |
| 19 | Far | Near | Near | Left |
| 20 | Far | Near | Medium | Left |
| 21 | Far | Near | Far | Left |
| 22 | Far | Medium | Near | Left |
| 23 | Far | Medium | Medium | Slight left |
| 24 | Far | Medium | Far | Slight left |
| 25 | Far | Far | Near | Slight left |
| 26 | Far | Far | Medium | Straight |
| 27 | Far | Far | Far | Straight |<jupyter_code>rule1 = ctrl.Rule(Kiri['poor'] & Tengah['poor'] & Kanan['poor'],Out['Kanan'])
rule2 = ctrl.Rule(Kiri['poor'] & Tengah['poor'] & Kanan['average'],Out['Kanan'])
rule3 = ctrl.Rule(Kiri['poor'] & Tengah['poor'] & Kanan['good'],Out['Kanan'])
rule4 = ctrl.Rule(Kiri['poor'] & Tengah['average'] & Kanan['poor'],Out['Kanan'])
rule5 = ctrl.Rule(Kiri['poor'] & Tengah['average'] & Kanan['average'],Out['Kanan'])
rule6 = ctrl.Rule(Kiri['poor'] & Tengah['average'] & Kanan['good'],Out['KananSedang'])
rule7 = ctrl.Rule(Kiri['poor'] & Tengah['good'] & Kanan['poor'],Out['Kanan'])
rule8 = ctrl.Rule(Kiri['poor'] & Tengah['good'] & Kanan['average'],Out['Kanan'])
rule9 = ctrl.Rule(Kiri['poor'] & Tengah['good'] & Kanan['good'],Out['KananSedang'])
rule10 = ctrl.Rule(Kiri['average'] & Tengah['poor'] & Kanan['poor'],Out['Kiri'])
rule11 = ctrl.Rule(Kiri['average'] & Tengah['poor'] & Kanan['average'],Out['Kanan'])
rule12 = ctrl.Rule(Kiri['average'] & Tengah['poor'] & Kanan['good'],Out['KananSedang'])
rule13 = ctrl.Rule(Kiri['average'] & Tengah['average'] & Kanan['poor'],Out['Kiri'])
rule14 = ctrl.Rule(Kiri['average'] & Tengah['average'] & Kanan['average'],Out['Kanan'])
rule15 = ctrl.Rule(Kiri['average'] & Tengah['average'] & Kanan['good'],Out['Kanan'])
rule16 = ctrl.Rule(Kiri['average'] & Tengah['good'] & Kanan['poor'],Out['Kiri'])
rule17 = ctrl.Rule(Kiri['average'] & Tengah['good'] & Kanan['average'],Out['Lurus'])
rule18 = ctrl.Rule(Kiri['average'] & Tengah['good'] & Kanan['good'],Out['Lurus'])
rule19 = ctrl.Rule(Kiri['good'] & Tengah['poor'] & Kanan['poor'],Out['Kiri'])
rule20 = ctrl.Rule(Kiri['good'] & Tengah['poor'] & Kanan['average'],Out['Kiri'])
rule21 = ctrl.Rule(Kiri['good'] & Tengah['poor'] & Kanan['good'],Out['Kiri'])
rule22 = ctrl.Rule(Kiri['good'] & Tengah['average'] & Kanan['poor'],Out['Kiri'])
rule23 = ctrl.Rule(Kiri['good'] & Tengah['average'] & Kanan['average'],Out['KiriSedang'])
rule24 = ctrl.Rule(Kiri['good'] & Tengah['average'] & Kanan['good'],Out['KiriSedang'])
rule25 = ctrl.Rule(Kiri['good'] & Tengah['good'] & Kanan['poor'],Out['KiriSedang'])
rule26 = ctrl.Rule(Kiri['good'] & Tengah['good'] & Kanan['average'],Out['Lurus'])
rule27 = ctrl.Rule(Kiri['good'] & Tengah['good'] & Kanan['good'],Out['Lurus'])
rule1.view()
Kontrol = ctrl.ControlSystem(rules=[rule1, rule2, rule3, rule4,rule5,rule6,rule7,rule8,rule9,rule10,rule11,rule12,rule13,rule14,rule15,rule16,rule17,rule18,rule19,rule20,rule21,rule22,rule23,rule24,rule25,rule26,rule27])
Hasil = ctrl.ControlSystemSimulation(Kontrol)
<jupyter_output><empty_output><jupyter_text>> Below is the fuzzy result based on the rules above <jupyter_code>Hasil.input['VKiri'] = 200
Hasil.input['VTengah'] = 250
Hasil.input['VKanan'] = 100
Hasil.compute()
print(Hasil.output['Output'])
Out.view(sim=Hasil)<jupyter_output>137.8139534883721
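<jupyter_text>> A small extra check (my addition, input values are illustrative): run the same controller over a few more sensor combinations to see how the steering angle changes.<jupyter_code>#hedged sketch: sweep a few example distance readings through the control system built above
for kiri, tengah, kanan in [(100, 100, 100), (290, 290, 290), (100, 290, 290), (290, 290, 100)]:
    sim = ctrl.ControlSystemSimulation(Kontrol)
    sim.input['VKiri'] = kiri
    sim.input['VTengah'] = tengah
    sim.input['VKanan'] = kanan
    sim.compute()
    print(kiri, tengah, kanan, '->', round(sim.output['Output'], 2))<jupyter_output><empty_output>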
|
no_license
|
/09011281621053_KartikaHaritami_SK7B.ipynb
|
isysrg/sharing_tugas_fuzzy2019
| 10 |
<jupyter_start><jupyter_text># IEEE-CIS Fraud Detection
#### Can you detect fraud from customer transactions?
## Fraud Detection
In this notebook the goal is to reach the prediction with boosting algorithms.
First we will try to implement the AdaBoost algorithm; after that we compare that result with the performance of XGBoost.
On top of that we compare 8 different classification algorithms on a subsample to get a general idea of what kind of classifiers are useful for this problem.
Let's see what place we can reach in this competition....# 1 Imports <jupyter_code># standard imports
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import json
from pandas import ExcelWriter
from pandas import ExcelFile
from sklearn.svm import SVR
from sklearn.linear_model import LinearRegression
import matplotlib.pyplot as plt
from matplotlib.ticker import NullFormatter
import datetime as dt
from datetime import date
from scipy import stats
#from pandas.core import datetools
from plotly import tools
import chart_studio.plotly as py
import plotly.figure_factory as ff
import plotly.tools as tls
import plotly.graph_objs as go
import warnings
import seaborn as sns
from sklearn.metrics import roc_curve
from sklearn.metrics import roc_auc_score
from sklearn import metrics
import json
from time import sleep
from datetime import datetime
from plotly.offline import download_plotlyjs, init_notebook_mode, plot, iplot
init_notebook_mode(connected=True)
import matplotlib
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.colors import ListedColormap
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.datasets import make_moons, make_circles, make_classification
from sklearn.neural_network import MLPClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.gaussian_process import GaussianProcessClassifier
from sklearn.gaussian_process.kernels import RBF
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier, AdaBoostClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis
from sklearn.model_selection import learning_curve, GridSearchCV
from sklearn.model_selection import cross_val_score
from sklearn.metrics import classification_report, confusion_matrix # confusion matrix
from sklearn import metrics
import pandas as pd
import numpy as np
import xgboost as xgb
from xgboost.sklearn import XGBClassifier
from sklearn import metrics #Additional scklearn functions
from sklearn.model_selection import GridSearchCV
import matplotlib.pylab as plt
%matplotlib inline
from matplotlib.pylab import rcParams
import gc
from pylab import rcParams
rcParams['figure.figsize'] = 30, 30
rcParams['font.size'] = 20
rcParams['axes.facecolor'] = 'white'
%matplotlib inline
warnings.filterwarnings("ignore")
# plt.style.available
plt.style.use("seaborn-whitegrid")<jupyter_output><empty_output><jupyter_text>### Read in the Datasets<jupyter_code>test_identity = pd.read_csv('test_identity.csv') # Source: Kaggle
test_transaction = pd.read_csv('test_transaction.csv') # Source: Kaggle
train_identity = pd.read_csv('train_identity.csv') # Source: Kaggle
train_transaction = pd.read_csv('train_transaction.csv') # Source: Kaggle
sample_submission = pd.read_csv('sample_submission.csv') # Source: Kaggle
<jupyter_output><empty_output><jupyter_text># 2.Reduce memory usage of data
#### There are 3 ways to handle big data
- Chunking the data with pd.read_csv(chunksize=...)
- Filtering out unimportant columns
- Changing the dtypes of the columns
We try to apply the last two approaches in the next steps.
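A minimal hedged sketch of the chunking option, which we do not use below (my addition; the file name is the one read above and the chunk size is illustrative):<jupyter_code>#hedged sketch: process the large transaction file piece by piece instead of loading it all at once
fraud_parts = []
for chunk in pd.read_csv('train_transaction.csv', chunksize=100000):
    fraud_parts.append(chunk[chunk['isFraud'] == 1])
fraud_only = pd.concat(fraud_parts)<jupyter_output><empty_output><jupyter_text>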
### 1. Filter Out Unimportant Columns
Reducing Memory due to Datacleaning, unimportant columns like:
- Features with only 1 unique value
- Features with more than 10% missing Values
- Features with the top value appears more than 90% of the time<jupyter_code>
# https://www.kaggle.com/nroman/recursive-feature-elimination
train = pd.merge(train_transaction, train_identity, on='TransactionID', how='left')
test = pd.merge(test_transaction, test_identity, on='TransactionID', how='left')
#del test_identity, test_transaction, train_identity, train_transaction
gc.collect()
one_value_cols = [col for col in train.columns if train[col].nunique() <= 1]
one_value_cols_test = [col for col in test.columns if test[col].nunique() <= 1]
many_null_cols = [col for col in train.columns if train[col].isnull().sum() / train.shape[0] > 0.1]
many_null_cols_test = [col for col in test.columns if test[col].isnull().sum() / test.shape[0] > 0.1]
big_top_value_cols = [col for col in train.columns if train[col].value_counts(dropna=False, normalize=True).values[0] > 0.9]
big_top_value_cols_test = [col for col in test.columns if test[col].value_counts(dropna=False, normalize=True).values[0] > 0.9]
cols_to_drop = list(set(many_null_cols + many_null_cols_test + big_top_value_cols + big_top_value_cols_test + one_value_cols + one_value_cols_test))
cols_to_drop.remove('isFraud')
print('{} features are going to be dropped for being useless'.format(len(cols_to_drop)))
# adding part of whole dataset
part = (len(cols_to_drop)/(len(train.columns)+len(test.columns))*100)
print('Thats {} % of the whole dataset '.format(round(part, 2)))
train = train.drop(cols_to_drop, axis=1)
test = test.drop(cols_to_drop, axis=1)<jupyter_output>376 features are going to be dropped for being useless
Thats 43.37 % of the whole dataset
<jupyter_text>### 2. Safe Memory Size by Changing Datatypes<jupyter_code>#source= https://wkirgsn.github.io/2018/02/10/auto-downsizing-dtypes/
from joblib import Parallel, delayed
import gc
#__AUTHOR__ = 'Kirgsn'
class Reducer:
"""
Class that takes a dict of increasingly big numpy datatypes to transform
the data of a pandas dataframe to in order to save memory usage.
"""
memory_scale_factor = 1024**2 # memory in MB
def __init__(self, conv_table=None):
"""
:param conv_table: dict with np.dtypes-strings as keys
"""
if conv_table is None:
self.conversion_table = \
{'int': [np.int8, np.int16, np.int32, np.int64],
'uint': [np.uint8, np.uint16, np.uint32, np.uint64],
'float': [np.float16, np.float32, ]}
else:
self.conversion_table = conv_table
#gc.collect()
def _type_candidates(self, k):
for c in self.conversion_table[k]:
i = np.iinfo(c) if 'int' in k else np.finfo(c)
yield c, i
def reduce(self, df, verbose=False):
"""Takes a dataframe and returns it with all data transformed to the
smallest necessary types.
:param df: pandas dataframe
:param verbose: If True, outputs more information
:return: pandas dataframe with reduced data types
"""
ret_list = Parallel(n_jobs=-1, max_nbytes=None)(delayed(self._reduce)
(df[c], c, verbose) for c in
df.columns)
return pd.concat(ret_list, axis=1)
def _reduce(self, s, colname, verbose):
# skip NaNs
if s.isnull().any():
if verbose:
print(colname, 'has NaNs - Skip..')
return s
# detect kind of type
coltype = s.dtype
if np.issubdtype(coltype, np.integer):
conv_key = 'int' if s.min() < 0 else 'uint'
elif np.issubdtype(coltype, np.floating):
conv_key = 'float'
else:
if verbose:
print(colname, 'is', coltype, '- Skip..')
print(colname, 'is', coltype, '- Skip..')
return s
# find right candidate
for cand, cand_info in self._type_candidates(conv_key):
if s.max() <= cand_info.max and s.min() >= cand_info.min:
if verbose:
print('convert', colname, 'to', str(cand))
return s.astype(cand)
# reaching this code is bad. Probably there are inf, or other high numbs
print(("WARNING: {} "
"doesn't fit the grid with \nmax: {} "
"and \nmin: {}").format(colname, s.max(), s.min()))
print('Dropping it..')
reducer = Reducer()
data_test = reducer.reduce(test)
reducer = Reducer()
data_train = reducer.reduce(train)
test.info()
data_test.info()
def reduce_mem_usage(df):
start_mem_usg = df.memory_usage().sum() / 1024**2
print("Memory usage of properties dataframe is :",start_mem_usg," MB")
NAlist = [] # Keeps track of columns that have missing values filled in.
for col in df.columns:
if df[col].dtype != object: # Exclude strings
# Print current column type
print("******************************")
print("Column: ",col)
print("dtype before: ",df[col].dtype)
# make variables for Int, max and min
IsInt = False
mx = df[col].max()
mn = df[col].min()
print("min for this col: ",mn)
print("max for this col: ",mx)
# Integer does not support NA, therefore, NA needs to be filled
if not np.isfinite(df[col]).all():
NAlist.append(col)
df[col].fillna(mn-1,inplace=True)
# test if column can be converted to an integer
asint = df[col].fillna(0).astype(np.int64)
result = (df[col] - asint)
result = result.sum()
if result > -0.01 and result < 0.01:
IsInt = True
# Make Integer/unsigned Integer datatypes
if IsInt:
if mn >= 0:
if mx < 255:
df[col] = df[col].astype(np.uint8)
elif mx < 65535:
df[col] = df[col].astype(np.uint16)
elif mx < 4294967295:
df[col] = df[col].astype(np.uint32)
else:
df[col] = df[col].astype(np.uint64)
else:
if mn > np.iinfo(np.int8).min and mx < np.iinfo(np.int8).max:
df[col] = df[col].astype(np.int8)
elif mn > np.iinfo(np.int16).min and mx < np.iinfo(np.int16).max:
df[col] = df[col].astype(np.int16)
elif mn > np.iinfo(np.int32).min and mx < np.iinfo(np.int32).max:
df[col] = df[col].astype(np.int32)
elif mn > np.iinfo(np.int64).min and mx < np.iinfo(np.int64).max:
df[col] = df[col].astype(np.int64)
# Make float datatypes 32 bit
else:
df[col] = df[col].astype(np.float32)
# Print new column type
print("dtype after: ",df[col].dtype)
print("******************************")
# Print final result
print("___MEMORY USAGE AFTER COMPLETION:___")
mem_usg = df.memory_usage().sum() / 1024**2
print("Memory usage is: ",mem_usg," MB")
print("This is ",(100*mem_usg/start_mem_usg).round(2),"% of the initial size")
return df, NAlist
min_test, NAlist = reduce_mem_usage(data_test)
min_train, NAlist = reduce_mem_usage(data_train)<jupyter_output>Memory usage of properties dataframe is : 206.6880989074707 MB
******************************
Column: TransactionID
dtype before: uint32
min for this col: 2987000
max for this col: 3577539
dtype after: uint32
******************************
******************************
Column: isFraud
dtype before: uint8
min for this col: 0
max for this col: 1
dtype after: uint8
******************************
******************************
Column: TransactionDT
dtype before: uint32
min for this col: 86400
max for this col: 15811131
dtype after: uint32
******************************
******************************
Column: TransactionAmt
dtype before: float16
min for this col: 0.251
max for this col: 31940.0
dtype after: float32
******************************
******************************
Column: card1
dtype before: uint16
min for this col: 1000
max for this col: 18396
dtype after: uint16
******************************
******************************
Column: card2
dtype before:[...]<jupyter_text># 3.EDA <jupyter_code>ax = sns.countplot(x='isFraud', data=min_train, saturation = 2)
min_train[['TransactionDT']]
# idea: https://www.kaggle.com/robikscube/ieee-fraud-detection-first-look-and-eda
train_transaction_fr = min_train.loc[train_transaction['isFraud'] == 1]
train_transaction_nofr = min_train.loc[train_transaction['isFraud'] == 0]
fig, ((ax1, ax2), (ax3, ax4)) = plt.subplots(2, 2, figsize=(15, 8))
train_transaction_fr.groupby('card4')['card4'].count().plot(kind='barh', ax=ax1, title='Count of card4 fraud')
train_transaction_nofr.groupby('card4')['card4'].count().plot(kind='barh', ax=ax2, title='Count of card4 non-fraud')
train_transaction_fr.groupby('card6')['card6'].count().plot(kind='barh', ax=ax3, title='Count of card6 fraud')
train_transaction_nofr.groupby('card6')['card6'].count().plot(kind='barh', ax=ax4, title='Count of card6 non-fraud')
plt.show()<jupyter_output><empty_output><jupyter_text># 4. Adaboost preperation
<jupyter_code>import os
import random
import numpy as np # linear algebra
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)
from sklearn.ensemble import AdaBoostClassifier
from sklearn.pipeline import Pipeline
from sklearn.model_selection import GridSearchCV, PredefinedSplit
import os
import numpy as np
import pandas as pd
from sklearn import preprocessing
import xgboost as xgb
test = min_test
train = min_train
y_train = train['isFraud'].copy()
X_train = train.drop('isFraud', axis=1)
X_test = test.copy()
# https://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.LabelEncoder.html
# Label Encoding, otherwise we cannot work with the data
for f in X_train.columns:
if X_train[f].dtype=='object' or X_test[f].dtype=='object':
lbl = preprocessing.LabelEncoder()
lbl.fit(list(X_train[f].values) + list(X_test[f].values))
X_train[f] = lbl.transform(list(X_train[f].values))
X_test[f] = lbl.transform(list(X_test[f].values)) <jupyter_output><empty_output><jupyter_text>## 4.1 Cleaning NA ValuesThere are many oportunities to deal with NA Values.
We wanted to use an approach that deals with them intelligently.
Therefore we are using the scikit-learn library with the IterativeImputer.
We read a lot about the different estimator options such as Decision Tree or KNN, but for us Bayesian Ridge makes the most sense here.<jupyter_code># idea from this source: https://scikit-learn.org/stable/modules/generated/sklearn.impute.SimpleImputer.html
import numpy as np
from sklearn.experimental import enable_iterative_imputer
from sklearn.impute import IterativeImputer
from sklearn.impute import SimpleImputer
from sklearn.impute import IterativeImputer
from sklearn.linear_model import BayesianRidge
from sklearn.tree import DecisionTreeRegressor
from sklearn.ensemble import ExtraTreesRegressor
from sklearn.neighbors import KNeighborsRegressor
# estimators: BayesianRidge(), DecisionTreeRegressor(max_features='sqrt', random_state=0),
# ExtraTreesRegressor(n_estimators=10, random_state=0),
# KNeighborsRegressor(n_neighbors=15)
imp = IterativeImputer(estimator = BayesianRidge(),
missing_values=np.nan,
sample_posterior=False,
max_iter=10,
tol=0.001,
n_nearest_features=10, # amount of features,
imputation_order = 'ascending',
min_value = None,
max_value = None,
verbose = 2, # controls the debug messages
random_state = None,
add_indicator = True,
initial_strategy='mean')
imp.fit(X_train)
imputed_train = pd.DataFrame(data=imp.transform(X_train),
dtype='int')
imp.fit(X_test)
imputed_test = pd.DataFrame(data=imp.transform(X_test),
dtype='int')<jupyter_output>[IterativeImputer] Completing matrix with shape (506691, 57)
[IterativeImputer] Ending imputation round 1/10, elapsed time 0.69
[IterativeImputer] Early stopping criterion reached.
[IterativeImputer] Completing matrix with shape (506691, 57)
<jupyter_text>## 4.2 Saving the Datafiles to perform in google Colabs<jupyter_code>imputed_test.to_csv('X_test.csv')
imputed_train.to_csv('X_train.csv')
y_train.to_csv('y_train.csv')
<jupyter_output><empty_output><jupyter_text>#### 4.2 1 Result:
Google Colabs didn´t perform better than our maschines# 5. First implementation with Adaboost### Subsampling of the Dataset<jupyter_code>#Trying the Adaboost with a small dataset and no fitted parameters
X_traino = X_train
X_testo = X_test
y_traino = y_train
subo = sample_submission
X_train = X_train.iloc[:1000]
X_test = X_test.iloc[:1000]
y_train = y_train.iloc[:1000]
sub = sample_submission.iloc[:1000]
#implementation of Adaboost Anlgorithm
clf = AdaBoostClassifier(n_estimators=1000, random_state=0)
clf.fit(X_train, y_train)
#Predicting the Probability with Adaboost
test = pd.DataFrame(clf.predict_proba(X_test)[:,1])
test <jupyter_output><empty_output><jupyter_text># 6. First implementation of XG-Boost
using the subsample<jupyter_code>xgb1 = XGBClassifier(
learning_rate =0.1,
n_estimators=1000,
max_depth=3, # Typical values: 3-10 also for over and underfitting .... max of trees ... high deduced overfitting
min_child_weight=1, #over or underfitting / to low = overfitting, to high underfitting
gamma=0,
subsample=0.8, # Lower values make the algorithm more conservative and prevents overfitting but too small values might lead to under-fitting. Typical values: 0.5-1
colsample_bytree=0.8,
objective= 'binary:logistic',
nthread=4,
scale_pos_weight=1,
tree_method='exact') #seed= 27
xgb1.fit(X_train, y_train)
dtrain_predprob = pd.DataFrame(xgb1.predict_proba(X_test)[:,1])
dtrain_predprob<jupyter_output><empty_output><jupyter_text># 7. Comparison of 8 different Classification and pred. Algorithms
using the subsample<jupyter_code>names = ["Nearest Neighbors", "Linear SVM", "RBF SVM", "Gaussian Process",
"Decision Tree", "Random Forest", "Neural Net", "AdaBoost",
"Naive Bayes", "QDA"]
classifiers = [
KNeighborsClassifier(10),
SVC(kernel="linear", C=0.025),
SVC(gamma=2, C=1),
GaussianProcessClassifier(1.0 * RBF(1.0)),
DecisionTreeClassifier(max_depth=5),
RandomForestClassifier(max_depth=5, n_estimators=10),
MLPClassifier(alpha=1, max_iter=1000),
AdaBoostClassifier(),
GaussianNB(),
QuadraticDiscriminantAnalysis()]
results = {}
for n, cl in zip(names, classifiers):
scores = cross_val_score(cl, X_train, y_train, cv=3) # the cv are the parts in which the data is splitted
results[n] = scores
r = pd.DataFrame.from_dict(results).T
r['Accurancy'] = r.mean(axis=1)
Accurancy = r[['Accurancy']].sort_values(by = ['Accurancy'], ascending = False)
Accurancy<jupyter_output><empty_output><jupyter_text># 8. Feature Engineering<jupyter_code>target = 'isFraud'
IDcol = '0'
# Source: Combination out of:
# https://www.analyticsvidhya.com/blog/2016/03/complete-guide-parameter-tuning-xgboost-with-codes-python/
# & Graphic : https://machinelearningmastery.com/feature-importance-and-feature-selection-with-xgboost-in-python/
def modelfit(alg, dtrain, y_train, X_test, useTrainCV=True, cv_folds=5, early_stopping_rounds=50):
if useTrainCV:
xgb_param = alg.get_xgb_params()
xgtrain = xgb.DMatrix(dtrain.values, label=y_train.values)
cvresult = xgb.cv(xgb_param, xgtrain, num_boost_round=alg.get_params()['n_estimators'], nfold=cv_folds,
metrics='auc', early_stopping_rounds=early_stopping_rounds)
alg.set_params(n_estimators=cvresult.shape[0])
#Fit the algorithm on the data
alg.fit(dtrain, y_train,eval_metric='auc')
#Predict training set:
dtrain_predictions = alg.predict(dtrain)
dtrain_predprob = alg.predict_proba(dtrain)[:,1]
#Print model report:
print ("\nModel Report")
print ("Accuracy : %.4g" % metrics.accuracy_score(y_train.values, dtrain_predictions))
print ("ROC_AUC Score: %f" % metrics.roc_auc_score(y_train.values, dtrain_predprob))
xgb.plot_importance(alg, max_num_features=10) #nice function of XG Boost to plot the most important features
#Choose all predictors except target & IDcols
rcParams['figure.figsize'] = 17, 10
rcParams['font.size'] = 15
#predictors = [x for x in xg_df.columns if x not in [target, IDcol]]
xgb1 = XGBClassifier(
learning_rate =0.1,
n_estimators=1000,
max_depth=3, # Typical values: 3-10 also for over and underfitting .... max of trees ... high deduced overfitting
min_child_weight=1, #over or underfitting / to low = overfitting, to high underfitting
gamma=0,
subsample=0.8, # Lower values make the algorithm more conservative and prevents overfitting but too small values might lead to under-fitting. Typical values: 0.5-1
colsample_bytree=0.8,
objective= 'binary:logistic',
nthread=4,
scale_pos_weight=1)
modelfit(xgb1, X_train, y_train, X_test)<jupyter_output>
Model Report
Accuracy : 0.988
ROC_AUC Score: 0.966362
<jupyter_text># 9. Hypertuning of the Adaboost<jupyter_code>from sklearn.ensemble import AdaBoostClassifier
from sklearn import tree
from sklearn.model_selection import GridSearchCV
import numpy as np
#from pydataset import data
import pandas as pd
from sklearn.model_selection import cross_val_score
from sklearn.model_selection import KFold
# finding the right max depth
crossvalidation=KFold(n_splits=10,shuffle=True,random_state=1)
for depth in range (1,10):
tree_classifier=tree.DecisionTreeClassifier(max_depth=depth,random_state=1)
if tree_classifier.fit(X_train,y_train).tree_.max_depth<depth:
break
score=np.mean(cross_val_score(tree_classifier,X_train,y_train,scoring='accuracy', cv=crossvalidation,n_jobs=1))
print(depth, score)
ada=AdaBoostClassifier()
search_grid={'n_estimators':[500,1000,2000],'learning_rate':[0.01,.1,1]}
search=GridSearchCV(estimator=ada,param_grid=search_grid,scoring='accuracy',n_jobs=1,cv=crossvalidation)
search.fit(X_train,y_train)
search.best_params_
search.best_score_
score=np.mean(cross_val_score(ada,X_train,y_train,scoring='accuracy',cv=crossvalidation,n_jobs=1))
score<jupyter_output><empty_output><jupyter_text>## 9.1 Creating bigger subsample
<jupyter_code># Getting The Whole Dataset
X_test = X_testo.iloc[:100000]
X_train = X_traino.iloc[:100000]
y_train = y_traino.iloc[:100000]
sub = subo
# finding the right max depth
crossvalidation=KFold(n_splits=10,shuffle=True,random_state=1)
for depth in range (1,10):
tree_classifier=tree.DecisionTreeClassifier(max_depth=depth,random_state=1)
if tree_classifier.fit(X_train,y_train).tree_.max_depth<depth:
break
score=np.mean(cross_val_score(tree_classifier,X_train,y_train,scoring='accuracy', cv=crossvalidation,n_jobs=1))
print(depth, score)
ada=AdaBoostClassifier()
search_grid={'n_estimators':[500,1000],'learning_rate':[0.1,1]}
search=GridSearchCV(estimator=ada,param_grid=search_grid,scoring='accuracy',n_jobs=1,cv=crossvalidation)
search.fit(X_train,y_train)
search.best_params_
search.best_score_
score=np.mean(cross_val_score(ada,X_train,y_train,scoring='accuracy',cv=crossvalidation,n_jobs=1))
score
<jupyter_output><empty_output><jupyter_text>## 9.2 Result of the Hyperparameter Tuning
We werte trying to hypertune different hyperparameters. In order to do that we had to use multible subsamples, because the original data was just to big to perform on our maschines. The Result of our Hyperparameter Tuning is that we choose n_estimators=1000 for our final model, with a learning rate:1 .The max_depth showed better results as we increased the score, but we decided to use a max depth of 3 because the performance of the adaboost took so long and the result wasn´t better than using the XG-Boost. Furthermore the lower max depth also provides our model from overfitting and still give a good overall result.## 9.3 Preperation of the whole Dataset<jupyter_code># Getting The Whole Dataset
X_test = X_testo
X_train = X_traino
y_train = y_traino
sub = subo <jupyter_output><empty_output><jupyter_text># 10 Final implemenation of the Adaboost for prediction<jupyter_code>max_dep=3
n_est = 1000
target = 'isFraud'
score = 'roc_auc'
%%time
ada = AdaBoostClassifier( max_depth=max_dep ,n_estimators=n_est)
param_grid = {'model__learning_rate': [1]}
pipe = Pipeline([('model', model)])
cv = GridSearchCV(pipe, cv=5, param_grid=param_grid, scoring=score)
cv.fit(X_train, y_train)
print('best_params_={}\nbest_score_={}'.format(repr(cv.best_params_), repr(cv.best_score_)))<jupyter_output>best_params_={'model__learning_rate': 1}
best_score_=0.7590799282397711
CPU times: user 3h 59min 56s, sys: 6min 31s, total: 4h 6min 28s
Wall time: 6h 13min 23s
<jupyter_text>Result of the Code
best_params_={'model__learning_rate': 1}
best_score_=0.7590799282397711
CPU times: user 3h 59min 56s, sys: 6min 31s, total: 4h 6min 28s
Wall time: 6h 13min 23s<jupyter_code>lol = pd.DataFrame(cv.predict_proba(X_test)[:,1])
as1_first_result = lol
#as1_first_result.to_csv('as1_first_result.csv')
#ubmission['isFraud']
sample_submission['isFraud'] = cv.predict_proba(X_test)[:,1]
sample_submission = sample_submission.set_index('TransactionID')
sample_submission.to_csv('as1_first_result.csv')
sample_submission<jupyter_output><empty_output>
|
no_license
|
/Fraud Fetection/1. Fraud Detection_AdaBoost_final.ipynb
|
marcgehring1995/Machine-Learning
| 17 |
<jupyter_start><jupyter_text># Sparkify Project Workspace
This workspace contains a tiny subset (128MB) of the full dataset available (12GB). Feel free to use this workspace to build your project, or to explore a smaller subset with Spark before deploying your cluster on the cloud. Instructions for setting up your Spark cluster is included in the last lesson of the Extracurricular Spark Course content.
You can follow the steps below to guide your data analysis and model building portion of this project.<jupyter_code># import libraries
from pyspark.sql import SparkSession
from pyspark.sql.functions import isnan, count, when, col, desc, udf, col, sort_array, asc, avg, lit
from pyspark.sql.functions import sum as Fsum
from pyspark.sql.window import Window
from pyspark.sql.types import IntegerType
import datetime
import numpy as np
import pandas as pd
%matplotlib inline
import matplotlib.pyplot as plt
from pyspark.ml.feature import StringIndexer
from pyspark.ml.feature import VectorAssembler
from pyspark.ml.feature import MinMaxScaler
from pyspark.ml.feature import PCA
from pyspark.ml.classification import LogisticRegression
from pyspark.ml import Pipeline
from pyspark.ml.evaluation import MulticlassClassificationEvaluator
from pyspark.ml.tuning import CrossValidator, ParamGridBuilder
from pyspark.ml.classification import GBTClassifier
from pyspark.ml.classification import NaiveBayes
# create a Spark session
spark = SparkSession \
.builder \
.appName("DataFrames Practice") \
.getOrCreate()
df = spark.read.json('mini_sparkify_event_data.json')<jupyter_output><empty_output><jupyter_text># Load and Clean Dataset
In this workspace, the mini-dataset file is `mini_sparkify_event_data.json`. Load and clean the dataset, checking for invalid or missing data - for example, records without userids or sessionids. <jupyter_code>df.take(5)
df.printSchema()
df.filter(df['userId'] == '').count()
df = df.filter(df['userId']!='')<jupyter_output><empty_output><jupyter_text># Exploratory Data Analysis
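<jupyter_text>The instructions above also mention records without sessionIds; a quick check along the same lines (my addition):<jupyter_code># hedged check: count rows with a missing sessionId as well
df.filter(df['sessionId'].isNull()).count()<jupyter_output><empty_output>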
When you're working with the full dataset, perform EDA by loading a small subset of the data and doing basic manipulations within Spark. In this workspace, you are already provided a small subset of data you can explore.
### Define Churn
Once you've done some preliminary analysis, create a column `Churn` to use as the label for your model. I suggest using the `Cancellation Confirmation` events to define your churn, which happen for both paid and free users. As a bonus task, you can also look into the `Downgrade` events.
### Explore Data
Once you've defined churn, perform some exploratory data analysis to observe the behavior for users who stayed vs users who churned. You can start by exploring aggregates on these two groups of users, observing how much of a specific action they experienced per a certain time unit or number of songs played.<jupyter_code># Gender distribution
df.select('gender') \
.groupBy('gender') \
.agg({'gender':'count'}) \
.withColumnRenamed('count(gender)', 'gender_count') \
.sort(desc('gender_count')) \
.show()
# Location distribution
df.select('location') \
.groupBy('location') \
.agg({'location':'count'}) \
.withColumnRenamed('count(location)', 'loc_count') \
.sort(desc('loc_count')) \
.show()
# Artist Popularity
df.filter(df.page == 'NextSong') \
.select('Artist') \
.groupBy('Artist') \
.agg({'Artist':'count'}) \
.withColumnRenamed('count(Artist)', 'Artistcount') \
.sort(desc('Artistcount')) \
.show()
# Level distribution
df.select('level') \
.groupBy('level') \
.agg({'level':'count'}) \
.withColumnRenamed('count(level)', 'lev_count') \
.sort(desc('lev_count')) \
.show()
# Registration distribution
get_month = udf(lambda x: datetime.datetime.fromtimestamp(x / 1000.0).month)
df = df.withColumn("month", get_month(df.ts))
df.select('month') \
.groupBy('month') \
.agg({'month': 'count'}) \
.withColumnRenamed('count(month)', 'reg_month_count') \
.sort(desc('reg_month_count')) \
.show()<jupyter_output>+-----+---------------+
|month|reg_month_count|
+-----+---------------+
| 10| 144916|
| 11| 133234|
| 12| 4|
+-----+---------------+
<jupyter_text>**Define & Separate Churn from the Dataframe**<jupyter_code>churn_record = df.filter((df['page']=='Cancellation Confirmation') | (df['page']=='Submit Downgrade')) \
.dropDuplicates(['userId'])
churn_users = churn_record.select('userId') \
.rdd.flatMap(lambda x: x) \
.collect()
df_churn = df.filter(df['userId'].isin(churn_users))
df_not_churn = df.filter(~df['userId'].isin(churn_users))
cancel_record = df.filter(df['page']=='Cancellation Confirmation')
canceled_users = cancel_record.select('userId') \
.rdd.flatMap(lambda x: x) \
.collect()
df_cancel = df.filter(df['userId'].isin(canceled_users))
paid_users = df.filter(df['level']=='paid') \
.select('userId') \
.rdd.flatMap(lambda x: x) \
.collect()
df_paid = df.filter(df['userId'].isin(paid_users))<jupyter_output><empty_output><jupyter_text>**Visualization**<jupyter_code># Gender Ratio
churn_gender_pd = df_churn.groupBy('gender').count().toPandas()
not_churn_gender_pd = df_not_churn.groupBy('gender').count().toPandas().drop(1, axis=0)
not_churn_gender_pd['ratio'] = not_churn_gender_pd['count'] / not_churn_gender_pd['count'].sum()
churn_gender_pd['ratio'] = churn_gender_pd['count'] / churn_gender_pd['count'].sum()
label = churn_gender_pd['gender'].tolist()
index = np.arange(len(label))
bar_width = 0.4
plt.figure(figsize=(9, 4))
p1 = plt.bar(index, churn_gender_pd['ratio'],
width = bar_width,
color = 'blue',
alpha = 0.5)
p2 = plt.bar(index + bar_width, not_churn_gender_pd['ratio'],
width = bar_width,
color = 'red',
alpha = 0.5)
plt.title('Dodged Bar Chart of Ratio of Gender by Churn', fontsize=20)
plt.ylabel('Ratio', fontsize=15)
plt.xlabel('Gender', fontsize=15)
plt.xticks(index + bar_width/2, label, fontsize=15)
plt.legend((p1[0], p2[0]), ('Churn', 'Current'), loc='upper right', fontsize=10)
plt.show()
# Location Ratio
churn_loc_pd = df_churn.groupBy('location').count().toPandas()
not_churn_loc_pd = df_not_churn.groupBy('location').count().toPandas()
top3_loc_churn = churn_loc_pd.sort_values('count', ascending=False).head(3)
top3_loc_not_churn = not_churn_loc_pd.sort_values('count', ascending=False).head(3)
top3_loc_churn['ratio'] = top3_loc_churn['count'] / top3_loc_churn['count'].sum()
top3_loc_not_churn['ratio'] = top3_loc_not_churn['count'] / top3_loc_not_churn['count'].sum()
label_churn = top3_loc_churn['location'].tolist()
label_not_churn = top3_loc_not_churn['location'].tolist()
index_churn = np.arange(len(label_churn))
index_not_churn = np.arange(len(label_not_churn))
fig = plt.figure(figsize=(20, 10))
fig.subplots_adjust(wspace=4)
p1 = fig.add_subplot(1, 2, 1)
p2 = fig.add_subplot(1, 2, 2)
p1.bar(index_churn, top3_loc_churn['ratio'],
width = 0.35,
color = 'blue',
alpha = 0.5)
p1.set_title("Top 3 Locations of Churned Customers", fontsize=20)
plt.setp(p1, xticks=index_churn, xticklabels=label_churn, yticks=[0, 0.3, max(top3_loc_churn['ratio'])])
p1.set_xticklabels(label_churn, rotation=30)
p2.bar(index_not_churn, top3_loc_not_churn['ratio'],
width = 0.35,
color = 'red',
alpha = 0.5)
p2.set_title("Top 3 Locations of Current Custommers", fontsize=20)
plt.setp(p2, xticks=index_not_churn, xticklabels=label_not_churn, yticks=[0, 0.3, max(top3_loc_not_churn['ratio'])])
p2.set_xticklabels(label_not_churn, rotation=30)
plt.tight_layout()
plt.show()
# User Behavior - Usage of Sparkify
churn_usage = df_churn.filter(df_churn.page == 'NextSong') \
.select('userId') \
.groupBy('userId') \
.agg({'userId':'count'}) \
.withColumnRenamed('count(userId)', 'usage_count') \
.sort(desc('usage_count')) \
.toPandas()
not_churn_usage = df_not_churn.filter(df_not_churn.page == 'NextSong') \
.select('userId') \
.groupBy('userId') \
.agg({'userId':'count'}) \
.withColumnRenamed('count(userId)', 'usage_count') \
.sort(desc('usage_count')) \
.toPandas()
churn_TAU = churn_usage.usage_count.sum() / churn_usage.userId.count()
current_TAU = not_churn_usage.usage_count.sum() / not_churn_usage.userId.count()
label = ['Churn', 'Current']
bar_width =0.4
plt.figure(figsize=(6, 4))
plot = plt.bar(label, [churn_TAU, current_TAU],
color=['blue', 'red'],
alpha=0.5,
width=bar_width)
plt.title('Total Average Usages of Type of Users', fontsize=20)
plt.ylabel('TAU', fontsize=15)
plt.xlabel('Type of Users', fontsize = 15)
plt.legend((plot[0], plot[1]), ('Churn', 'Current'), loc='upper right', fontsize=10)
plt.show()
# Status of usage when users cancel sparkify service
fr_cancel_num = df_cancel.filter((df_cancel['page']=="Cancellation Confirmation") & (df_cancel['level']=="free")) \
.select('userId') \
.count()
pr_cancel_num = df_cancel.filter((df_cancel['page']=="Cancellation Confirmation") & (df_cancel['level']=="paid")) \
.select('userId') \
.count()
label = ['Free', 'Premium']
bar_width = 0.4
plt.figure(figsize=(6, 4))
plot = plt.bar(label, [fr_cancel_num, pr_cancel_num],
color=['blue', 'red'],
alpha=0.5,
width=bar_width)
plt.title('Status of Usage When Users Cancel', fontsize=20)
plt.ylabel('Count of Users', fontsize=15)
plt.xlabel('Type of Usage', fontsize = 15)
plt.legend((plot[0], plot[1]), ('Free', 'Premium'), loc='upper left', fontsize=10)
plt.show()
# Status of usage when paid users cancel sparkify service
fr_cancel_num = df_paid.filter((df_paid['page']=="Cancellation Confirmation") & (df_paid['level']=="free")) \
.select('userId') \
.count()
pr_cancel_num = df_paid.filter((df_paid['page']=="Cancellation Confirmation") & (df_paid['level']=="paid")) \
.select('userId') \
.count()
label = ['Free', 'Premium']
bar_width = 0.4
plt.figure(figsize=(6, 4))
plot = plt.bar(label, [fr_cancel_num, pr_cancel_num],
color=['blue', 'red'],
alpha=0.5,
width=bar_width)
plt.title('Status of Usage When Paid Users Cancel', fontsize=20)
plt.ylabel('Count of Users', fontsize=15)
plt.xlabel('Type of Usage', fontsize = 15)
plt.legend((plot[0], plot[1]), ('Free', 'Premium'), loc='upper left', fontsize=10)
plt.show()
# The frequency of accessing Downgrade page
d_page_access_churn = df_churn.filter(df_churn['page']=='Downgrade') \
.select('userId') \
.groupBy('userId') \
.agg({'userId':'count'}) \
.withColumnRenamed('count(userId)', 'downgrade page access') \
.toPandas()
d_page_access_current = df_not_churn.filter(df_churn['page']=='Downgrade') \
.select('userId') \
.groupBy('userId') \
.agg({'userId':'count'}) \
.withColumnRenamed('count(userId)', 'downgrade page access') \
.toPandas()
d_frequency_churn = d_page_access_churn['downgrade page access'].sum() / d_page_access_churn.shape[0]
d_frequency_current = d_page_access_current['downgrade page access'].sum() / d_page_access_current.shape[0]
label = ['Churn', 'Current']
bar_width = 0.4
plt.figure(figsize=(8, 6))
plot = plt.bar(label, [d_frequency_churn, d_frequency_current],
color=['blue', 'red'],
alpha=0.5,
width=bar_width)
plt.title('Frequency of Accessing Downgrade Page', fontsize=20)
plt.ylabel('Average Access', fontsize=15)
plt.xlabel('Type of Users', fontsize = 15)
plt.legend((plot[0], plot[1]), ('Churn', 'Current'), loc='upper right', fontsize=10)
plt.show()<jupyter_output><empty_output><jupyter_text># Feature Engineering
Once you've familiarized yourself with the data, build out the features you find promising to train your model on. To work with the full dataset, you can follow the following steps.
- Write a script to extract the necessary features from the smaller subset of data
- Ensure that your script is scalable, using the best practices discussed in Lesson 3
- Try your script on the full data set, debugging your script if necessary
If you are working in the classroom workspace, you can just extract features based on the small subset of data contained here. Be sure to transfer over this work to the larger dataset when you work on your Spark cluster.<jupyter_code>df_churn = df_churn.withColumn("churn", lit(1))
df_not_churn = df_not_churn.withColumn("churn", lit(0))
df = df_churn.union(df_not_churn).select('userId', 'gender', 'level', 'page', 'churn')
df_level_cnt = df.groupby('userId', 'level') \
.pivot('level', ['free', 'paid']) \
.count() \
.drop('level') \
.orderBy('userId')
df_page_cnt = df.filter((df['page']=='NextSong') | (df['page']=='Submit Downgrade') | (df['page']=='Submit Upgrade') |
(df['page']=='Downgrade') | (df['page']=='Thumbs Up') | (df['page']=='Thumbs Down') |
(df['page']=='Add to Playlist') | (df['page']=='Add Friend')) \
.groupby('userId', 'page') \
.pivot('page') \
.count() \
.drop('page') \
.orderBy('userId')
df = df.drop('level', 'page') \
.join(df_level_cnt, 'userId', 'inner') \
.join(df_page_cnt, 'userId', 'inner') \
.orderBy('userId') \
.dropDuplicates()
df = df.groupby('userId', 'gender').max()
features = df.select(['gender', 'max(free)', 'max(paid)', 'max(Add Friend)',
'max(Add to Playlist)', 'max(Downgrade)', 'max(NextSong)',
'max(Submit Downgrade)', 'max(Submit Upgrade)', 'max(Thumbs Down)',
'max(Thumbs Up)', 'max(churn)']) \
.withColumnRenamed('max(free)', 'free') \
.withColumnRenamed('max(paid)', 'paid') \
.withColumnRenamed('max(Add Friend)', 'add_friend') \
.withColumnRenamed('max(Add to Playlist)', 'add_playlist') \
.withColumnRenamed('max(Downgrade)', 'd_pge_access') \
.withColumnRenamed('max(NextSong)', 'num_music') \
.withColumnRenamed('max(Submit Downgrade)', 'num_downgrade') \
.withColumnRenamed('max(Submit Upgrade)', 'num_upgrade') \
.withColumnRenamed('max(Thumbs Down)', 'dislike') \
.withColumnRenamed('max(Thumbs Up)', 'like') \
.withColumnRenamed('max(churn)', 'churn') \
.fillna(0)
features.persist()<jupyter_output><empty_output><jupyter_text># Modeling
Split the full dataset into train, test, and validation sets. Test out several of the machine learning methods you learned. Evaluate the accuracy of the various models, tuning parameters as necessary. Determine your winning model based on test accuracy and report results on the validation set. Since the churned users are a fairly small subset, I suggest using F1 score as the metric to optimize.<jupyter_code>rest, validation = features.randomSplit([0.75, 0.25], seed=42)
indexer = StringIndexer(inputCol="gender", outputCol="gender_index")
assembler = VectorAssembler(inputCols=['gender_index', 'free', 'paid', 'add_friend', 'add_playlist', 'd_pge_access',
'num_music', 'num_downgrade', 'num_upgrade', 'dislike', 'like'],
outputCol="features")
scaler = MinMaxScaler(inputCol="features", outputCol="scaledFeatures")
pca = PCA(k=2, inputCol="scaledFeatures", outputCol="pcaFeatures")
lr = LogisticRegression(featuresCol='pcaFeatures', labelCol='churn', maxIter=10, regParam=0.3, elasticNetParam=0.8)
base_pipeline = Pipeline(stages=[indexer, assembler, scaler, pca, lr])
paramGrid = ParamGridBuilder() \
.addGrid(lr.regParam, [0, 0.3, 0.5]) \
.addGrid(lr.elasticNetParam, [0.2, 0.8]) \
.build()
crossval = CrossValidator(estimator=base_pipeline,
estimatorParamMaps=paramGrid,
evaluator=MulticlassClassificationEvaluator(labelCol='churn'),
numFolds=10)
base_model = crossval.fit(rest)
results = base_model.transform(validation)
evaluator = MulticlassClassificationEvaluator(labelCol='churn')
evaluator.evaluate(results.select(['prediction', 'churn']))
base_model.bestModel.stages[-1]._java_obj.parent().getRegParam(), \
base_model.bestModel.stages[-1]._java_obj.parent().getElasticNetParam()
gbt = GBTClassifier(featuresCol="pcaFeatures", labelCol="churn", maxIter=10)
gbt_pipeline = Pipeline(stages=[indexer, assembler, scaler, pca, gbt])
gbt_paramGrid = ParamGridBuilder() \
.addGrid(gbt.maxDepth, [5, 10]) \
.build()
crossval_gbt = CrossValidator(estimator=gbt_pipeline,
estimatorParamMaps=gbt_paramGrid,
evaluator=MulticlassClassificationEvaluator(labelCol='churn'),
numFolds=10)
gbt_model = crossval_gbt.fit(rest)
results_gbt = gbt_model.transform(validation)
evaluator.evaluate(results_gbt.select(['prediction', 'churn']))
nb = NaiveBayes(featuresCol='scaledFeatures', labelCol='churn', smoothing=1.0, modelType="multinomial")
nb_pipeline = Pipeline(stages=[indexer, assembler, scaler, nb])
nb_paramGrid = ParamGridBuilder() \
.addGrid(nb.smoothing, [0.2, 0.4, 0.6, 0.8, 1]) \
.build()
crossval_nb = CrossValidator(estimator=nb_pipeline,
estimatorParamMaps=nb_paramGrid,
evaluator=MulticlassClassificationEvaluator(labelCol='churn'),
numFolds=10)
nb_model = crossval_nb.fit(rest)
results_nb = nb_model.transform(validation)
evaluator.evaluate(results_nb.select(['prediction', 'churn']))
nb_model.bestModel.stages[-1]._java_obj.parent().getSmoothing()<jupyter_output><empty_output>
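The cross-validators above rely on `MulticlassClassificationEvaluator`'s default metric. Since the write-up recommends optimizing F1 (churned users are a small subset), here is a minimal sketch of requesting the F1 score explicitly, reusing the `results`, `results_gbt` and `results_nb` DataFrames produced above; the explicit `metricName` argument is the only assumption added here.

```python
from pyspark.ml.evaluation import MulticlassClassificationEvaluator

# Explicitly request F1 instead of relying on the evaluator's default metric.
f1_evaluator = MulticlassClassificationEvaluator(labelCol='churn',
                                                 predictionCol='prediction',
                                                 metricName='f1')

print('Logistic regression F1:', f1_evaluator.evaluate(results))
print('Gradient-boosted trees F1:', f1_evaluator.evaluate(results_gbt))
print('Naive Bayes F1:', f1_evaluator.evaluate(results_nb))
```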
|
no_license
|
/code-preparation/Sparkify.ipynb
|
SeongbinLim94/sparkify-churn-prediction
| 7 |
<jupyter_start><jupyter_text># ML Pipeline using Sagemaker Clarify<jupyter_code>!./setup.sh
# Load libraries
import sagemaker
import boto3
from sagemaker import Session, get_execution_role, local, Model, utils, fw_utils, s3
import pandas as pd
import numpy as np
import urllib
import os
import time
from time import strftime
sess = boto3.Session(region_name='ap-southeast-1')
session = sagemaker.Session(boto_session=sess)
region = session.boto_region_name
role = get_execution_role()
region = session.boto_region_name
client = session.boto_session.client(
"sts", region_name=region, endpoint_url=utils.sts_regional_endpoint(region)
)
account = client.get_caller_identity()['Account']
bucket = 'ml-innovate-2021'
prefix = 'clarify-test'
create_date = strftime("%Y-%m-%d-%H-%M-%S")
print(create_date)
new_prefix = os.path.join(prefix, f'Exp-{create_date}')<jupyter_output>2021-01-24-03-30-16
<jupyter_text>### Download Data<jupyter_code>adult_columns = ["Age", "Workclass", "fnlwgt", "Education", "Education-Num", "Marital Status",
"Occupation", "Relationship", "Ethnic group", "Gender", "Capital Gain", "Capital Loss",
"Hours per week", "Country", "Target"]
urllib.request.urlretrieve('https://archive.ics.uci.edu/ml/machine-learning-databases/adult/adult.data','adult.data')
print('adult.data saved!')
urllib.request.urlretrieve('https://archive.ics.uci.edu/ml/machine-learning-databases/adult/adult.test','adult.test')
print('adult.test saved!')<jupyter_output>adult.data saved!
adult.test saved!
<jupyter_text>### Prepare Data<jupyter_code>training_data = pd.read_csv("adult.data",
names=adult_columns,
sep=r'\s*,\s*',
engine='python',
na_values="?").dropna()
testing_data = pd.read_csv("adult.test",
names=adult_columns,
sep=r'\s*,\s*',
engine='python',
na_values="?",
skiprows=1).dropna()
training_data.head()
# Visualize distribution of protected attribute
training_data['Gender'].value_counts().sort_values().plot(kind='bar', title='Counts of Gender', rot=0)
# Visualize distribution of outcome corresponding to protected attribute
training_data['Gender'].where(training_data['Target']=='>50K').value_counts().sort_values().plot(kind='bar', title='Counts of Gender earning >$50K', rot=0)
# Encode Female to '0' and Male to '1'
from sklearn import preprocessing
def number_encode_features(df):
result = df.copy()
encoders = {}
for column in result.columns:
if result.dtypes[column] == np.object:
encoders[column] = preprocessing.LabelEncoder()
# print('Column:', column, result[column])
result[column] = encoders[column].fit_transform(result[column].fillna('None'))
return result, encoders
training_data = pd.concat([training_data['Target'], training_data.drop(['Target'], axis=1)], axis=1)
training_data, _ = number_encode_features(training_data)
training_data.to_csv('train_data.csv', index=False)
testing_data, _ = number_encode_features(testing_data)
test_features = testing_data.drop(['Target'], axis = 1)
test_target = testing_data['Target']
test_features.to_csv('test_features.csv', index=False)
# Upload data to S3
from sagemaker.s3 import S3Uploader
from sagemaker.inputs import TrainingInput
train_uri = S3Uploader.upload('train_data.csv', 's3://{}/{}'.format(bucket, prefix))
train_input = TrainingInput(train_uri, content_type='text/csv')
test_uri = S3Uploader.upload('test_features.csv', 's3://{}/{}'.format(bucket, prefix))
print('train_uri:', train_uri)
print('train_input:', train_input)
output_path = f's3://{bucket}/{prefix}/{create_date}/output'<jupyter_output>train_uri: s3://ml-innovate-2021/clarify-test/train_data.csv
train_input: <sagemaker.inputs.TrainingInput object at 0x7fce590f1160>
<jupyter_text>### Pre-Training Fairness Assessment<jupyter_code>from sagemaker import clarify
clarify_processor = clarify.SageMakerClarifyProcessor(role=role,
instance_count=1,
instance_type='ml.c4.xlarge',
sagemaker_session=session)
bias_report_output_path = 's3://{}/{}/clarify-bias-{}'.format(bucket, prefix, create_date)
bias_data_config = clarify.DataConfig(s3_data_input_path=train_uri,
s3_output_path=bias_report_output_path,
label='Target',
headers=training_data.columns.to_list(),
dataset_type='text/csv')
bias_config = clarify.BiasConfig(label_values_or_threshold=[1],
facet_name='Gender',
facet_values_or_threshold=[0],
group_name='Education')
clarify_processor.run_pre_training_bias(data_config=bias_data_config,
data_bias_config=bias_config,
methods='all',
wait=True,
logs=True,
job_name=f'pretraining-bias-{create_date}')<jupyter_output>
Job Name: pretraining-bias-2021-01-24-03-30-16
Inputs: [{'InputName': 'dataset', 'AppManaged': False, 'S3Input': {'S3Uri': 's3://ml-innovate-2021/clarify-test/train_data.csv', 'LocalPath': '/opt/ml/processing/input/data', 'S3DataType': 'S3Prefix', 'S3InputMode': 'File', 'S3DataDistributionType': 'FullyReplicated', 'S3CompressionType': 'None'}}, {'InputName': 'analysis_config', 'AppManaged': False, 'S3Input': {'S3Uri': 's3://sagemaker-ap-southeast-1-963992372437/pretraining-bias-2021-01-24-03-30-16/input/analysis_config/analysis_config.json', 'LocalPath': '/opt/ml/processing/input/config', 'S3DataType': 'S3Prefix', 'S3InputMode': 'File', 'S3DataDistributionType': 'FullyReplicated', 'S3CompressionType': 'None'}}]
Outputs: [{'OutputName': 'analysis_result', 'AppManaged': False, 'S3Output': {'S3Uri': 's3://ml-innovate-2021/clarify-test/clarify-bias-2021-01-24-03-30-16', 'LocalPath': '/opt/ml/processing/output', 'S3UploadMode': 'EndOfJob'}}]
................................[34mINFO:sag[...]<jupyter_text>### Training: HPO<jupyter_code>from sagemaker.estimator import Estimator
new_prefix = os.path.join(prefix, f'Exp-{create_date}')
hyperparameters = {
"max_depth": "10",
"eta": "1",
"gamma": "1",
"min_child_weight": "6",
"objective": 'binary:logistic',
"num_class": "2",
"num_round": "40",
"s3_bucket": os.path.join(bucket, new_prefix, 'debug'),
"protected": "Gender",
"thresh": "0.8"
}
output_path = 's3://{0}/{1}/output/'.format(bucket, new_prefix)
estimator = Estimator(
image_uri='963992372437.dkr.ecr.ap-southeast-1.amazonaws.com/clarify-xgb:latest',
role=role,
sagemaker_session=session,
instance_count=1,
instance_type='ml.m5.4xlarge',
output_path=output_path,
hyperparameters=hyperparameters
)
from sagemaker.tuner import HyperparameterTuner, ContinuousParameter, IntegerParameter
from time import gmtime, strftime
hyperparameter_ranges = {
'eta': ContinuousParameter(1,2),
#'alpha': ContinuousParameter(0,1000),
'gamma': IntegerParameter(1,5),
'min_child_weight': IntegerParameter(1,10),
'max_depth': IntegerParameter(1,10),
"num_round": IntegerParameter(4,30)
}
metric_definitions = [{'Name': 'validation-f1',
'Regex': 'validation-f1: ([0-9\\\\.]+)'}]
# Configure HyperparameterTuner
my_tuner = HyperparameterTuner(estimator=estimator,
objective_metric_name='validation-f1',
objective_type='Maximize',
hyperparameter_ranges=hyperparameter_ranges,
metric_definitions=metric_definitions,
strategy='Random',
max_jobs=100,
max_parallel_jobs=10)
# Start hyperparameter tuning job
tuning_job_name = "tuning-{}".format(create_date)
my_tuner.fit({'train': train_input}, wait=False, include_cls_metadata=False, job_name=tuning_job_name)
my_tuner.describe()['HyperParameterTuningJobStatus']
tuning_job_name<jupyter_output><empty_output><jupyter_text>### Finding the Best Model: On Performance<jupyter_code>df=my_tuner.analytics().dataframe().sort_values(['FinalObjectiveValue'], ascending=False)
df
from sagemaker.s3 import S3Downloader
import tarfile
import pickle as pkl
import xgboost
smclient=boto3.client('sagemaker',region_name='ap-southeast-1')
trjobs = smclient.list_training_jobs_for_hyper_parameter_tuning_job(
HyperParameterTuningJobName=tuning_job_name,
MaxResults=100,
SortBy='FinalObjectiveMetricValue',
SortOrder='Descending')
trjob = trjobs['TrainingJobSummaries'][0]
exp_destination = os.path.join('s3://', bucket, new_prefix)
model_path = os.path.join(exp_destination,'output',trjob["TrainingJobName"],'output','model.tar.gz')
model_path<jupyter_output><empty_output><jupyter_text>### Deploy Model<jupyter_code>from sagemaker.xgboost.model import XGBoostModel
xgboost_model = XGBoostModel(
model_data=model_path,
role=role,
entry_point="container/inference.py",
framework_version="1.2-1"
)
predictor = xgboost_model.deploy(
endpoint_name="testing-clarify1-endpoint-{}".format(create_date),
instance_type='ml.m4.xlarge',
initial_instance_count=1
)
predictor.serializer = sagemaker.serializers.CSVSerializer(content_type='text/csv')
print(xgboost_model.name)
X=test_features.to_numpy()
print(predictor.predict(X[0]))
res = []
for i in range(0,len(X),10):
t=predictor.predict(X[i:i+10])
res.extend([x[0] for x in t])
pred = [float(r) for r in res]
tr = list(testing_data['Target'].to_numpy())
from sklearn.metrics import f1_score
from fairlearn.metrics import demographic_parity_difference, demographic_parity_ratio, equalized_odds_difference
from fairlearn.metrics import selection_rate
def fair_metrics(tr,pred,column,thresh):
pred = [1 if p > thresh else 0 for p in pred]
na0=0
na1=0
nd0=0
nd1=0
for p,c in zip(pred,column):
if (p==1 and c==0):
nd1 += 1
if (p==1 and c==1):
na1 += 1
if (p==0 and c==0):
nd0 += 1
if (p==0 and c==1):
na0 += 1
Pa1, Pd1, Pa0, Pd0 = na1/(na1+na0), nd1/(nd1+nd0), na0/(na1+na0), nd0/(nd1+nd0)
dsp_metric = np.abs(Pd1-Pa1)
sr_metric = selection_rate(tr, pred, pos_label=1)
dpd_metric = demographic_parity_difference(tr, pred, sensitive_features=column)
dpr_metric = demographic_parity_ratio(tr, pred, sensitive_features=column)
eod_metric = equalized_odds_difference(tr, pred, sensitive_features=column)
f1_metric = f1_score(tr, pred, average='macro')
return f1_metric, dsp_metric, sr_metric, dpd_metric, dpr_metric, eod_metric
protected_col = list(testing_data['Gender'].to_numpy())
thresh = 0.8
f1_metric, dsp_metric, sr_metric, dpd_metric, dpr_metric, eod_metric = fair_metrics(tr,pred,protected_col,thresh)
print('f1_metric: ',f1_metric)
print('dsp_metric: ',dsp_metric)
print('sr_metric: ',sr_metric)
print('dpd_metric: ',dpd_metric)
print('dpr_metric: ',dpr_metric)
print('eod_metric: ',eod_metric)<jupyter_output>f1_metric: 0.7132269603177037
dsp_metric: 0.08899609287138739
sr_metric: 0.09721115537848606
dpd_metric: 0.08899609287138739
dpr_metric: 0.2950481230554506
eod_metric: 0.07816235217641893
<jupyter_text>### Visual Assessment of Test Data<jupyter_code># Visualize distribution of protected attribute
testing_data['Gender'].value_counts().sort_values().plot(kind='bar', title='Counts of Gender', rot=0)
# Visualize distribution of outcome corresponding to protected attribute
testing_data['Gender'].where(testing_data['Target']==1).value_counts().sort_values().plot(kind='bar', title='Counts of Gender earning >$50K', rot=0)
res_val=[1 if float(r) >= 0.8 else 0 for r in res]
testing_result = testing_data.copy()
testing_result['Pred_Target'] = np.array(res_val)
testing_result['Gender'].where(testing_result['Pred_Target']==1).value_counts().sort_values().plot(kind='bar', title='Counts of Gender earning >$50K', rot=0)<jupyter_output><empty_output><jupyter_text>### Post-Training Fairness Assessment - Training Data<jupyter_code>model_name = xgboost_model.name
model_config = clarify.ModelConfig(model_name=model_name,
instance_type='ml.c5.xlarge',
instance_count=1,
accept_type='text/csv')
predictions_config = clarify.ModelPredictedLabelConfig(probability_threshold=0.8)
clarify_processor.run_post_training_bias(data_config=bias_data_config,
data_bias_config=bias_config,
model_config=model_config,
model_predicted_label_config=predictions_config,
methods='all',
wait=True,
logs=True,
job_name=f'posttraining-bias-{create_date}')<jupyter_output>
Job Name: posttraining-bias-2021-01-24-03-30-16
Inputs: [{'InputName': 'dataset', 'AppManaged': False, 'S3Input': {'S3Uri': 's3://ml-innovate-2021/clarify-test/train_data.csv', 'LocalPath': '/opt/ml/processing/input/data', 'S3DataType': 'S3Prefix', 'S3InputMode': 'File', 'S3DataDistributionType': 'FullyReplicated', 'S3CompressionType': 'None'}}, {'InputName': 'analysis_config', 'AppManaged': False, 'S3Input': {'S3Uri': 's3://sagemaker-ap-southeast-1-963992372437/posttraining-bias-2021-01-24-03-30-16/input/analysis_config/analysis_config.json', 'LocalPath': '/opt/ml/processing/input/config', 'S3DataType': 'S3Prefix', 'S3InputMode': 'File', 'S3DataDistributionType': 'FullyReplicated', 'S3CompressionType': 'None'}}]
Outputs: [{'OutputName': 'analysis_result', 'AppManaged': False, 'S3Output': {'S3Uri': 's3://ml-innovate-2021/clarify-test/clarify-bias-2021-01-24-03-30-16', 'LocalPath': '/opt/ml/processing/output', 'S3UploadMode': 'EndOfJob'}}]
.................................[34mINFO:[...]<jupyter_text>### Explaining Predictions - Training Data<jupyter_code>shap_config = clarify.SHAPConfig(baseline=[test_features.iloc[0].values.tolist()],
num_samples=15,
agg_method='mean_abs')
explainability_output_path = 's3://{}/{}/clarify-explainability-{}'.format(bucket, prefix, create_date)
explainability_data_config = clarify.DataConfig(s3_data_input_path=train_uri,
s3_output_path=explainability_output_path,
label='Target',
headers=training_data.columns.to_list(),
dataset_type='text/csv')
clarify_processor.run_explainability(data_config=explainability_data_config,
model_config=model_config,
explainability_config=shap_config)<jupyter_output>
Job Name: Clarify-Explainability-2021-01-24-05-24-04-977
Inputs: [{'InputName': 'dataset', 'AppManaged': False, 'S3Input': {'S3Uri': 's3://ml-innovate-2021/clarify-test/train_data.csv', 'LocalPath': '/opt/ml/processing/input/data', 'S3DataType': 'S3Prefix', 'S3InputMode': 'File', 'S3DataDistributionType': 'FullyReplicated', 'S3CompressionType': 'None'}}, {'InputName': 'analysis_config', 'AppManaged': False, 'S3Input': {'S3Uri': 's3://sagemaker-ap-southeast-1-963992372437/Clarify-Explainability-2021-01-24-05-24-04-977/input/analysis_config/analysis_config.json', 'LocalPath': '/opt/ml/processing/input/config', 'S3DataType': 'S3Prefix', 'S3InputMode': 'File', 'S3DataDistributionType': 'FullyReplicated', 'S3CompressionType': 'None'}}]
Outputs: [{'OutputName': 'analysis_result', 'AppManaged': False, 'S3Output': {'S3Uri': 's3://ml-innovate-2021/clarify-test/clarify-explainability-2021-01-24-03-30-16', 'LocalPath': '/opt/ml/processing/output', 'S3UploadMode': 'EndOfJob'}}]
...............[...]<jupyter_text>### Post-Training Fairness Assessment - Testing Data<jupyter_code>testing_data.to_csv('testing_data.csv', index=False)
testing_uri = S3Uploader.upload('testing_data.csv', 's3://{}/{}'.format(bucket, prefix))
test_bias_report_output_path = 's3://{}/{}/clarify-bias-test-{}'.format(bucket, prefix, create_date)
test_bias_data_config = clarify.DataConfig(s3_data_input_path=testing_uri,
s3_output_path=test_bias_report_output_path,
label='Target',
headers=testing_data.columns.to_list(),
dataset_type='text/csv')
test_bias_config = clarify.BiasConfig(label_values_or_threshold=[1],
facet_name='Gender',
facet_values_or_threshold=[0],
group_name='Education')
clarify_processor.run_post_training_bias(data_config=test_bias_data_config,
data_bias_config=test_bias_config,
model_config=model_config,
model_predicted_label_config=predictions_config,
methods='all',
wait=True,
logs=True,
job_name=f'posttesting--bias-{create_date}')<jupyter_output>
Job Name: posttesting--bias-2021-01-24-03-30-16
Inputs: [{'InputName': 'dataset', 'AppManaged': False, 'S3Input': {'S3Uri': 's3://ml-innovate-2021/clarify-test/testing_data.csv', 'LocalPath': '/opt/ml/processing/input/data', 'S3DataType': 'S3Prefix', 'S3InputMode': 'File', 'S3DataDistributionType': 'FullyReplicated', 'S3CompressionType': 'None'}}, {'InputName': 'analysis_config', 'AppManaged': False, 'S3Input': {'S3Uri': 's3://sagemaker-ap-southeast-1-963992372437/posttesting--bias-2021-01-24-03-30-16/input/analysis_config/analysis_config.json', 'LocalPath': '/opt/ml/processing/input/config', 'S3DataType': 'S3Prefix', 'S3InputMode': 'File', 'S3DataDistributionType': 'FullyReplicated', 'S3CompressionType': 'None'}}]
Outputs: [{'OutputName': 'analysis_result', 'AppManaged': False, 'S3Output': {'S3Uri': 's3://ml-innovate-2021/clarify-test/clarify-bias-test-2021-01-24-03-30-16', 'LocalPath': '/opt/ml/processing/output', 'S3UploadMode': 'EndOfJob'}}]
.................................[3[...]
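The bias jobs above only stream their logs here; the actual reports are written to the S3 output paths configured earlier. A minimal sketch of pulling the numbers back for inspection, assuming the default Clarify output layout in which the processor writes an `analysis.json` file to the configured `s3_output_path`:

```python
import json
from sagemaker.s3 import S3Downloader

def load_clarify_analysis(output_path):
    # Download and parse the analysis.json written by a Clarify processing job.
    raw = S3Downloader.read_file(f'{output_path}/analysis.json')
    return json.loads(raw)

# Compare the bias report computed on the training data with the one on the test data.
train_bias_report = load_clarify_analysis(bias_report_output_path)
test_bias_report = load_clarify_analysis(test_bias_report_output_path)
print(train_bias_report.keys())
print(test_bias_report.keys())
```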
|
permissive
|
/with_bias.ipynb
|
tvkpz/ml-innovate-2021
| 11 |
<jupyter_start><jupyter_text>## Assignment No. 1
# Khomutov Evgeniy, BPM-151Suppress Jupyter Notebook warnings<jupyter_code>import warnings
warnings.filterwarnings('ignore')<jupyter_output><empty_output><jupyter_text>http://snakeproject.ru/rubric/article.php?art=python_mysql
Import the library for working with a MySQL server<jupyter_code>import pymysql
pymysql.install_as_MySQLdb()
import MySQLdb<jupyter_output><empty_output><jupyter_text>Connect to the local MySQL server<jupyter_code>Connection = MySQLdb.connect(host='localhost',database='mysql',user='root',password='root', charset = "utf8", use_unicode = True
                             , init_command='SET NAMES UTF8')<jupyter_output><empty_output><jupyter_text>Create the database task1 if it does not already exist<jupyter_code>Connection.query('CREATE DATABASE IF NOT EXISTS task1')<jupyter_output><empty_output><jupyter_text>Create a cursor and set the encoding<jupyter_code>Cursor = Connection.cursor()
Cursor.execute('SET NAMES `utf8`')<jupyter_output><empty_output><jupyter_text>Create the tables<jupyter_code>Cursor.execute("CREATE TABLE IF NOT EXISTS supplier( idS VARCHAR(6) PRIMARY KEY NOT NULL, surname NCHAR(20), rating INT, city NCHAR(20) )")
Cursor.execute("CREATE TABLE IF NOT EXISTS products ( idP VARCHAR(6) PRIMARY KEY NOT NULL, name NCHAR(20),color NCHAR(20) ,weight INT, city NCHAR(20))")
Cursor.execute("CREATE TABLE IF NOT EXISTS item ( idJ VARCHAR(6) PRIMARY KEY NOT NULL , name NCHAR(20), city NCHAR(20))")
Cursor.execute("CREATE TABLE IF NOT EXISTS supply ( idS VARCHAR(6) NOT NULL, idP VARCHAR(6) NOT NULL, idJ VARCHAR(6) NOT NULL, number int)")<jupyter_output><empty_output><jupyter_text>Добавление столбца deliverydate в таблицу supply<jupyter_code>Cursor.execute("ALTER TABLE supply ADD deliverydate DATE")<jupyter_output><empty_output><jupyter_text>Заполнение таблиц<jupyter_code>Cursor.execute("INSERT INTO supplier VALUES('S1','Smith',20,'London')")
Cursor.execute("INSERT INTO supplier VALUES('S2','Jones',10,'Paris')")
Cursor.execute("INSERT INTO supplier VALUES('S3','Blake',30,'Paris')")
Cursor.execute("INSERT INTO supplier VALUES('S4','Clarke',20,'London')")
Cursor.execute("INSERT INTO supplier VALUES('S5','Adams',30,'Athens')")
Cursor.execute("SELECT * FROM supplier")
for i in Cursor:
print(i)
Cursor.execute("INSERT INTO products VALUES('P1','nut','red',12,'London')")
Cursor.execute("INSERT INTO products VALUES('P2','bolt','green',17,'Paris')")
Cursor.execute("INSERT INTO products VALUES('P3','screw','blue',17,'Rome')")
Cursor.execute("INSERT INTO products VALUES('P4','screw','red',14,'London')")
Cursor.execute("INSERT INTO products VALUES('P5','tappet','blue',12,'Paris')")
Cursor.execute("INSERT INTO products VALUES('P6','bloom','red',19,'London')")
Cursor.execute("SELECT * FROM products")
for i in Cursor:
print(i)
Cursor.execute("INSERT INTO item VALUES('J1','hard drive','Paris')")
Cursor.execute("INSERT INTO item VALUES('J2','perforator','Rome')")
Cursor.execute("INSERT INTO item VALUES('J3','redear','Athens')")
Cursor.execute("INSERT INTO item VALUES('J4','printer','Athens')")
Cursor.execute("INSERT INTO item VALUES('J5','floppy disk','London')")
Cursor.execute("INSERT INTO item VALUES('J6','terminal','Oslo')")
Cursor.execute("INSERT INTO item VALUES('J7','tape','London')")
Cursor.execute("SELECT * FROM item")
for i in Cursor:
print(i)
Cursor.execute("INSERT INTO supply VALUES('S1','P1','J1',200, Null)")
Cursor.execute("INSERT INTO supply VALUES('S1','P1','J4',700, Null)")
Cursor.execute("INSERT INTO supply VALUES('S2','P3','J1',400, Null)")
Cursor.execute("INSERT INTO supply VALUES('S2','P3','J2',200, Null)")
Cursor.execute("INSERT INTO supply VALUES('S2','P3','J3',200, Null)")
Cursor.execute("INSERT INTO supply VALUES('S2','P3','J4',500, Null)")
Cursor.execute("INSERT INTO supply VALUES('S2','P3','J5',600, Null)")
Cursor.execute("INSERT INTO supply VALUES('S2','P3','J6',400, Null)")
Cursor.execute("INSERT INTO supply VALUES('S2','P3','J7',800, Null)")
Cursor.execute("INSERT INTO supply VALUES('S2','P5','J2',100, Null)")
Cursor.execute("INSERT INTO supply VALUES('S3','P3','J1',200, Null)")
Cursor.execute("INSERT INTO supply VALUES('S3','P4','J2',500, Null)")
Cursor.execute("INSERT INTO supply VALUES('S4','P6','J3',300, Null)")
Cursor.execute("INSERT INTO supply VALUES('S4','P6','J7',300, Null)")
Cursor.execute("INSERT INTO supply VALUES('S5','P2','J2',200, Null)")
Cursor.execute("INSERT INTO supply VALUES('S5','P2','J4',100, Null)")
Cursor.execute("INSERT INTO supply VALUES('S5','P5','J5',500, Null)")
Cursor.execute("INSERT INTO supply VALUES('S5','P5','J7',100, Null)")
Cursor.execute("INSERT INTO supply VALUES('S5','P6','J2',200, Null)")
Cursor.execute("INSERT INTO supply VALUES('S5','P1','J4',100, Null)")
Cursor.execute("INSERT INTO supply VALUES('S5','P3','J4',200, Null)")
Cursor.execute("INSERT INTO supply VALUES('S5','P4','J4',800, Null)")
Cursor.execute("INSERT INTO supply VALUES('S5','P5','J4',400, Null)")
Cursor.execute("INSERT INTO supply VALUES('S5','P6','J4',500, Null)")
Cursor.execute("SELECT * FROM supply")
for i in Cursor:
print(i)<jupyter_output>('S1', 'P1', 'J1', 200, None)
('S1', 'P1', 'J4', 700, None)
('S2', 'P3', 'J1', 400, None)
('S2', 'P3', 'J2', 200, None)
('S2', 'P3', 'J3', 200, None)
('S2', 'P3', 'J4', 500, None)
('S2', 'P3', 'J5', 600, None)
('S2', 'P3', 'J6', 400, None)
('S2', 'P3', 'J7', 800, None)
('S2', 'P5', 'J2', 100, None)
('S3', 'P3', 'J1', 200, None)
('S3', 'P4', 'J2', 500, None)
('S4', 'P6', 'J3', 300, None)
('S4', 'P6', 'J7', 300, None)
('S5', 'P2', 'J2', 200, None)
('S5', 'P2', 'J4', 100, None)
('S5', 'P5', 'J5', 500, None)
('S5', 'P5', 'J7', 100, None)
('S5', 'P6', 'J2', 200, None)
('S5', 'P1', 'J4', 100, None)
('S5', 'P3', 'J4', 200, None)
('S5', 'P4', 'J4', 800, None)
('S5', 'P5', 'J4', 400, None)
('S5', 'P6', 'J4', 500, None)
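As a side note (not part of the original notebook): the rows above were inserted one statement at a time with the values embedded in the SQL text. Both pymysql and MySQLdb also accept `%s` placeholders with `executemany`, which lets the driver handle quoting; a minimal sketch of how the supplier rows could be loaded that way instead:

```python
# Alternative to the individual INSERT statements above: parameterized bulk insert.
suppliers = [
    ('S1', 'Smith', 20, 'London'),
    ('S2', 'Jones', 10, 'Paris'),
    ('S3', 'Blake', 30, 'Paris'),
    ('S4', 'Clarke', 20, 'London'),
    ('S5', 'Adams', 30, 'Athens'),
]
Cursor.executemany("INSERT INTO supplier(idS, surname, rating, city) VALUES (%s, %s, %s, %s)", suppliers)
Connection.commit()
```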
<jupyter_text># Task 1
Return the numbers and surnames of suppliers who supply at least one part that is supplied by at least one supplier who supplies at least one red part.<jupyter_code>Cursor.execute("SELECT idS, surname FROM supplier WHERE idS in "
+ "( SELECT idS FROM supply WHERE idP in "
+ "(SELECT idP FROM products WHERE ( color = 'red' OR name in (SELECT name FROM products WHERE color = 'red' ) ) )"
+ "GROUP BY idS ) "
)
for i in Cursor:
print(i)<jupyter_output>('S1', 'Smith')
('S2', 'Jones')
('S3', 'Blake')
('S4', 'Clarke')
('S5', 'Adams')
<jupyter_text># Task 2
Get the full list of parts for all items manufactured in London.<jupyter_code>Cursor.execute("SELECT idP FROM supply WHERE idJ in (SELECT idJ from item WHERE city = 'London' ) GROUP BY idP")
for i in Cursor:
print(i)<jupyter_output>('P3',)
('P6',)
('P5',)
<jupyter_text># Task 3
Build a table containing the list of part numbers that are supplied either by some supplier from London or for some item located in London.<jupyter_code>Cursor.execute(
" CREATE TEMPORARY TABLE IF NOT EXISTS t1 (SELECT idP FROM supply WHERE ( idJ in (SELECT idJ from item WHERE city = 'London' ) )"
+
" OR ( idS in (SELECT idS from supplier WHERE city = 'London' )) GROUP BY idP ) "
)
Cursor.execute(
"SELECT * FROM t1"
)
for i in Cursor:
print(i)<jupyter_output>('P1',)
('P3',)
('P6',)
('P5',)
<jupyter_text># Task 4
Insert into table S a new supplier with number S10, surname White, from the city of New York, with an unknown rating.<jupyter_code>Cursor.execute("INSERT INTO supplier VALUES('S10','Wight',Null,'New-York')")
Cursor.execute("SELECT * FROM supplier")
for i in Cursor:
print(i)<jupyter_output>('S1', 'Smith', 20, 'London')
('S10', 'Wight', None, 'New-York')
('S2', 'Jones', 10, 'Paris')
('S3', 'Blake', 30, 'Paris')
('S4', 'Clarke', 20, 'London')
('S5', 'Adams', 30, 'Athens')
<jupyter_text># Task 5
For each part supplied for some item, return its number, the item number, and the corresponding total quantity of parts.<jupyter_code>Cursor.execute("SELECT idP, idJ, number FROM supply")
for i in Cursor:
print(i)<jupyter_output>('P1', 'J1', 200)
('P1', 'J4', 700)
('P3', 'J1', 400)
('P3', 'J2', 200)
('P3', 'J3', 200)
('P3', 'J4', 500)
('P3', 'J5', 600)
('P3', 'J6', 400)
('P3', 'J7', 800)
('P5', 'J2', 100)
('P3', 'J1', 200)
('P4', 'J2', 500)
('P6', 'J3', 300)
('P6', 'J7', 300)
('P2', 'J2', 200)
('P2', 'J4', 100)
('P5', 'J5', 500)
('P5', 'J7', 100)
('P6', 'J2', 200)
('P1', 'J4', 100)
('P3', 'J4', 200)
('P4', 'J4', 800)
('P5', 'J4', 400)
('P6', 'J4', 500)
<jupyter_text># Task 6
Get the numbers of items whose parts are supplied only by supplier S2.<jupyter_code>Cursor.execute(
" SELECT idJ FROM supply WHERE idJ not in (SELECT idJ FROM supply WHERE idS in (SELECT idS FROM supply WHERE idS != 'S2') GROUP BY idJ) "
)
for i in Cursor:
print(i)<jupyter_output>('J6',)
<jupyter_text>Drop the tables<jupyter_code>Cursor.execute("DROP TABLE supplier")
Cursor.execute("DROP TABLE products")
Cursor.execute("DROP TABLE item")
Cursor.execute("DROP TABLE supply")<jupyter_output><empty_output><jupyter_text>Закрытие курсора<jupyter_code>Cursor.close()<jupyter_output><empty_output><jupyter_text>Удаление базы данных<jupyter_code>Connection.query('DROP DATABASE task1')<jupyter_output><empty_output><jupyter_text>Сохранение изменения на сервере<jupyter_code>Connection.commit()<jupyter_output><empty_output><jupyter_text>Отключение от сервера<jupyter_code>Connection.close()<jupyter_output><empty_output>
|
permissive
|
/DataBase/First_Task_Khomutov_variant_10_database.ipynb
|
evgeniy97/MyOldCode
| 19 |
<jupyter_start><jupyter_text>### Exploring the Data
* Import necessary libraries
* Find patterns/missing variables in the data<jupyter_code>import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline
pd.options.display.max_columns = None
#https://www.kaggle.com/bobbyscience/league-of-legends-diamond-ranked-games-10-min/download
df = pd.read_csv('high_diamond_ranked_10min.csv')
df.head()
df.shape
df.columns
#every feature is numerical, as expected
df.info();
#no missing values
df.isna().sum();<jupyter_output><empty_output><jupyter_text>### Visualizing some Features<jupyter_code>#get values where blue got first blood
plt.figure(figsize = (10,5))
ax = sns.countplot(df['blueFirstBlood'], hue = df['blueWins'], palette= ['#DE3F24','#165CAA'])
ax.legend(['red team win','blue team win']);
ax.set_xlabel(xlabel=None)
labels = [item.get_text() for item in ax.get_xticklabels()]
labels[0] = 'Red First Blood'
labels[1] = 'Blue First Blood'
ax.set_xticklabels(labels)
ax.set_title('First Blood Wins');<jupyter_output><empty_output><jupyter_text>There seems to be inaccurate data on the amount of wards placed
* It is practically impossible to place more than 200, or even 100, wards in a 10-minute game
* The wards-placed count also includes champion abilities such as Shaco's boxes and Nidalee's traps<jupyter_code>sns.boxplot(df['blueWardsPlaced'])
sns.kdeplot(df['blueWardsPlaced'])
sns.scatterplot(x= df['blueWardsPlaced'],y= df['blueTotalGold'],alpha =.1)
sns.boxplot(x= df['blueDeaths'],y= df['blueTotalGold'])
btotalMonsters_wins =df.groupby('blueEliteMonsters').mean()['blueWins']
ax = sns.barplot(x = btotalMonsters_wins.index ,y = btotalMonsters_wins.values);
ax.set_ylabel('Percent Wins')
ax.set_xlabel('Number of Rift Heralds & Dragons Killed')
ax.set_title('Percentage of blue wins by Monsters killed (10min)')
df.groupby('blueWins').mean()
sns.catplot(x= 'blueWins', y= 'blueWardsPlaced', data = df,kind = 'bar');
plt.title('Average Wards Placed vs blueWins');
df.columns<jupyter_output><empty_output><jupyter_text>## Feature Engineering
* Create KDA ratio columns
* Create comparison columns#### Create kda ratio column
* kills + assists(.5) / deaths
*Since deaths can be ZERO, divide by 1 if deaths == 0.*<jupyter_code>def KDR(kills,assists,deaths,new_col):
df[new_col] = np.where(df[deaths]== 0, (df[kills]+df[assists]*.5)/1,(df[kills]+df[assists]*.5)/df[deaths])
return df[new_col]
KDR('redKills','redAssists','redDeaths','redKDR');
KDR('blueKills','blueAssists','blueDeaths','blueKDR');
#check blueKDR
df['blueKDR'].head()<jupyter_output><empty_output><jupyter_text>#### Create diff. columns, subtract respective blue & red columns
* bluecols - redcols<jupyter_code>blue_cols = [col for col in df.columns if col.startswith('blue')]
red_cols = [col for col in df.columns if col.startswith('red')]
#there is no red wins so drop blue wins
blue_cols.remove('blueWins')
#make sure both column lists have the same length
print(len(blue_cols))
print(len(red_cols))
zipped = zip(blue_cols, red_cols)
def comparison_columns(zipped):
for blue,red in zipped:
new_col = 'diff' + blue
df[new_col] = df[blue] - df[red]
comparison_columns(zipped)<jupyter_output><empty_output><jupyter_text>#### Create a function to compare Red and Blue teams by their values
* If blue == red then equal
* If blue > red then greater
* if blue < red then less<jupyter_code>def comparing_teams(blue,red,colname):
conditions = [df[blue] == df[red], df[blue] >df[red]]
choices = ['equal','greater']
df[colname] = np.select(conditions,choices,default = 'less')<jupyter_output><empty_output><jupyter_text>Three columns created:
* bluevredEliteMonsters
* bluevredTowersDestroyed
* bluevredWardsPlaced
<jupyter_code>comparing_teams('blueEliteMonsters','redEliteMonsters','bluevredEliteMonsters')
comparing_teams('blueTowersDestroyed','redTowersDestroyed','bluevredTowersDestroyed')
comparing_teams('blueWardsPlaced','redWardsPlaced','bluevredWardsPlaced')
%cd
'''lol_cleaned_df = \
df.to_csv("C:/Users/jshyo/desktop/projects/league-of-legends-diamond-ranked-games-10-min/lol_cleaned_df.csv")'''<jupyter_output><empty_output><jupyter_text>### Correlation Matrix
* Remove highly correlated variables<jupyter_code>correlation = df.corr()
%%time
plt.figure(figsize = (15,10))
sns.heatmap(correlation)
diff_columns = [col for col in df if col.startswith('d')]
diff_corr = df[diff_columns].corr()
plt.figure(figsize = (15,10))
sns.heatmap(diff_corr.abs(),annot= True,fmt= '.1g')<jupyter_output><empty_output><jupyter_text>* KDR is highly correlated with kills, deaths, and assists, which makes sense because KDR is made up of these three variables.
* KDR is also highly correlated with gold, level, and experience
* Elite Monsters is correlated with dragon and heralds killed because Elite Monsters = dragons + heralds
* CS per minute is correlated with total minions killed, because total minions killed/ 10 is CS per min.
*We will keep diffblueKDR, diffblueEliteMonsters, diffblueTowersDestroyed, diffblueTotalJungleMinionsKilled, diffblueTotalMinionsKilled, blueFirstBlood, diffblueWardsDestroyed, diffblueWardsPlaced*### Building a Model
* Logistic Regression model
* Get the model parameters to see importance of variables in the features<jupyter_code>df.columns
features = ['diffblueKDR','diffblueEliteMonsters','diffblueTowersDestroyed','diffblueTotalJungleMinionsKilled',\
'diffblueTotalMinionsKilled','diffblueWardsPlaced','diffblueWardsDestroyed','blueFirstBlood']
response = 'blueWins'
X = df[features].copy()
y = df[response].copy()
X.info()
from sklearn.preprocessing import StandardScaler
X_fitted = StandardScaler().fit_transform(X)
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X_fitted,y, test_size = .2, random_state = 101)
from sklearn.linear_model import LogisticRegression
logmodel = LogisticRegression()
logmodel.fit(X_train,y_train)
predictions = logmodel.predict(X_test)
from sklearn.metrics import classification_report, confusion_matrix
tn, fp, fn, tp = confusion_matrix(y_test,predictions).ravel()
print(tn, fp, fn, tp)
print(classification_report(y_test,predictions))
logmodel.score(X_test,y_test)
logmodel.score(X_train,y_train)
model_parameters = logmodel.coef_
#features= features of model named above
#zip together and sort the parameters inorder of importance
sorted(zip(model_parameters[0],features),reverse = True)
odds_ratio = np.exp(model_parameters)
sorted(zip(odds_ratio[0],features),reverse= True)
np.exp(logmodel.intercept_)<jupyter_output><empty_output>
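Because the features were standardized with `StandardScaler` before fitting, each coefficient describes the effect of a one-standard-deviation change. A short interpretation sketch (not from the original notebook) of the odds ratios printed above:

```python
import numpy as np

# exp(coefficient) is the multiplicative change in the odds of a blue win
# for a one-standard-deviation increase in that feature, other features held fixed.
for coef, name in sorted(zip(model_parameters[0], features), reverse=True):
    print(f'{name}: +1 std dev multiplies the odds of a blue win by {np.exp(coef):.2f}')
```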
|
no_license
|
/League_win_model.ipynb
|
yoonsunghwan/League-of-Legends-Win-LGM
| 9 |
<jupyter_start><jupyter_text>Remember from the tutorial:
1. No for loops! Use matrix multiplication and broadcasting whenever possible.
2. Think about numerical stability<jupyter_code>import nn_utils # module containing helper functions for checking the correctness of your code<jupyter_output><empty_output><jupyter_text>## Task 1: Affine layer
Implement `forward` and `backward` functions for `Affine` layer<jupyter_code>class Affine:
def forward(self, inputs, weight, bias):
"""Forward pass of an affine (fully connected) layer.
Args:
inputs: input matrix, shape (N, D)
weight: weight matrix, shape (D, H)
bias: bias vector, shape (H)
Returns
out: output matrix, shape (N, H)
"""
self.cache = (inputs, weight, bias)
#############################################################
# TODO
out = inputs @ weight + bias
#############################################################
assert out.shape[0] == inputs.shape[0]
assert out.shape[1] == weight.shape[1] == bias.shape[0]
return out
def backward(self, d_out):
"""Backward pass of an affine (fully connected) layer.
Args:
d_out: incoming derivaties, shape (N, H)
Returns:
d_inputs: gradient w.r.t. the inputs, shape (N, D)
d_weight: gradient w.r.t. the weight, shape (D, H)
d_bias: gradient w.r.t. the bias, shape (H)
"""
inputs, weight, bias = self.cache
#############################################################
# TODO
d_inputs = d_out @ weight.T
d_weight = inputs.T @ d_out
#d_bias = d_out.T.sum(axis = 1)
N = d_out.shape[0]
d_bias = d_out.T @ np.ones(N)
#############################################################
assert np.all(d_inputs.shape == inputs.shape)
assert np.all(d_weight.shape == weight.shape)
assert np.all(d_bias.shape == bias.shape)
return d_inputs, d_weight, d_bias
affine = Affine()
nn_utils.check_affine(affine)<jupyter_output>All checks passed succesfully!
<jupyter_text>## Task 2: ReLU layer
Implement `forward` and `backward` functions for `ReLU` layer<jupyter_code>class ReLU:
def forward(self, inputs):
"""Forward pass of a ReLU layer.
Args:
inputs: input matrix, arbitrary shape
Returns:
out: output matrix, has same shape as inputs
"""
self.cache = inputs
#############################################################
# TODO
out = np.maximum(inputs, 0)
#############################################################
assert np.all(out.shape == inputs.shape)
return out
def backward(self, d_out):
"""Backward pass of an ReLU layer.
Args:
d_out: incoming derivatives, same shape as inputs in forward
Returns:
d_inputs: gradient w.r.t. the inputs, same shape as d_out
"""
inputs = self.cache
#############################################################
# TODO
der_relu = np.zeros(inputs.shape)
der_relu = 1 * (inputs > 0)
d_inputs = der_relu * d_out
#############################################################
assert np.all(d_inputs.shape == inputs.shape)
return d_inputs
relu = ReLU()
nn_utils.check_relu(relu)<jupyter_output>All checks passed succesfully!
<jupyter_text>## Task 3: CategoricalCrossEntropy layer
Implement `forward` and `backward` for `CategoricalCrossEntropy` layer<jupyter_code>a = np.array([[1,2,3],[11,10,6],[7,8,9]])
a
a.max(axis = 1)
class CategoricalCrossEntropy:
def forward(self, logits, labels):
"""Compute categorical cross-entropy loss.
Args:
logits: class logits, shape (N, K)
labels: target labels in one-hot format, shape (N, K)
Returns:
loss: loss value, float (a single number)
"""
#############################################################
# TODO
logits_shifted = logits - logits.max(axis=1, keepdims=True)
log_sum_exp = np.log(np.sum(np.exp(logits_shifted), axis=1, keepdims=True))
log_probs = logits_shifted - log_sum_exp
N = labels.shape[0]
loss = -np.sum(labels * log_probs) / N
probs = np.exp(log_probs)
#############################################################
# probs is the (N, K) matrix of class probabilities
self.cache = (probs, labels)
assert isinstance(loss, float)
return loss
def backward(self, d_out=1.0):
"""Backward pass of the Cross Entropy loss.
Args:
d_out: Incoming derivatives. We set this value to 1.0 by default,
since this is the terminal node of our computational graph
(i.e. we usually want to compute gradients of loss w.r.t.
other model parameters).
Returns:
d_logits: gradient w.r.t. the logits, shape (N, K)
d_labels: gradient w.r.t. the labels
we don't need d_labels for our models, so we don't
compute it and set it to None. It's only included in the
function definition for consistency with other layers.
"""
probs, labels = self.cache
#############################################################
# TODO
N = labels.shape[0]
d_logits = d_out * (probs - labels) / N
d_labels = ()
#############################################################
d_labels = None
assert np.all(d_logits.shape == probs.shape == labels.shape)
return d_logits, d_labels
cross_entropy = CategoricalCrossEntropy()
nn_utils.check_cross_entropy(cross_entropy)<jupyter_output>All checks passed succesfully!
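`nn_utils` itself is not shown in this notebook. A minimal sketch of the kind of finite-difference check it presumably performs, written against the layer interfaces defined above (this is an assumption about `nn_utils`, not its actual code):

```python
import numpy as np

def numerical_grad(f, x, eps=1e-6):
    # Central-difference estimate of the gradient of the scalar-valued callable f() w.r.t. array x.
    grad = np.zeros_like(x)
    it = np.nditer(x, flags=['multi_index'])
    while not it.finished:
        idx = it.multi_index
        orig = x[idx]
        x[idx] = orig + eps
        f_plus = f()
        x[idx] = orig - eps
        f_minus = f()
        x[idx] = orig
        grad[idx] = (f_plus - f_minus) / (2 * eps)
        it.iternext()
    return grad

# Compare the analytic gradient of the loss w.r.t. the logits with the numerical estimate.
rng = np.random.default_rng(0)
logits = rng.normal(size=(5, 3))
labels = np.eye(3)[rng.integers(0, 3, size=5)]

loss_layer = CategoricalCrossEntropy()
loss_layer.forward(logits, labels)
analytic, _ = loss_layer.backward()
numerical = numerical_grad(lambda: CategoricalCrossEntropy().forward(logits, labels), logits)
print('max abs difference:', np.abs(analytic - numerical).max())
```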
<jupyter_text># Logistic regression (with backpropagation) --- nothing to do in this section<jupyter_code>class LogisticRegression:
def __init__(self, num_features, num_classes, learning_rate=1e-2):
"""Logistic regression model.
Gradients are computed with backpropagation.
The model consists of the following sequence of opeartions:
input -> affine -> softmax
"""
self.learning_rate = learning_rate
# Initialize the model parameters
self.params = {
'W': np.zeros([num_features, num_classes]),
'b': np.zeros([num_classes])
}
# Define layers
self.affine = Affine()
self.cross_entropy = CategoricalCrossEntropy()
def predict(self, X):
"""Generate predictions for one minibatch.
Args:
X: data matrix, shape (N, D)
Returns:
Y_pred: predicted class probabilities, shape (N, D)
Y_pred[n, k] = probability that sample n belongs to class k
"""
logits = self.affine.forward(X,self.params['W'], self.params['b'])
Y_pred = softmax(logits, axis=1)
return Y_pred
def step(self, X, Y):
"""Perform one step of gradient descent on the minibatch of data.
1. Compute the cross-entropy loss for given (X, Y).
2. Compute the gradients of the loss w.r.t. model parameters.
3. Update the model parameters using the gradients.
Args:
X: data matrix, shape (N, D)
Y: target labels in one-hot format, shape (N, K)
Returns:
loss: loss for (X, Y), float, (a single number)
"""
# Forward pass - compute the loss on training data
logits = self.affine.forward(X, self.params['W'], self.params['b'])
loss = self.cross_entropy.forward(logits, Y)
# Backward pass - compute the gradients of loss w.r.t. all the model parameters
grads = {}
d_logits, _ = self.cross_entropy.backward()
_, grads['W'], grads['b'] = self.affine.backward(d_logits)
# Apply the gradients
for p in self.params:
self.params[p] = self.params[p] - self.learning_rate * grads[p]
return loss
# Specify optimization parameters
learning_rate = 1e-2
max_epochs = 501
report_frequency = 50
log_reg = LogisticRegression(num_features=D, num_classes=K)
for epoch in range(max_epochs):
loss = log_reg.step(X_train, Y_train)
if epoch % report_frequency == 0:
print(f'Epoch {epoch:4d}, loss = {loss:.4f}')
y_test_pred = log_reg.predict(X_test).argmax(1)
y_test_true = Y_test.argmax(1)
print(f'test set accuracy = {accuracy_score(y_test_true, y_test_pred):.3f}')<jupyter_output>test set accuracy = 0.953
<jupyter_text># Feed-forward neural network (with backpropagation)<jupyter_code>def xavier_init(shape):
"""Initialize a weight matrix according to Xavier initialization.
See pytorch.org/docs/stable/nn.init#torch.nn.init.xavier_uniform_ for details.
"""
a = np.sqrt(6.0 / float(np.sum(shape)))
return np.random.uniform(low=-a, high=a, size=shape)<jupyter_output><empty_output><jupyter_text>## Task 4: Implement a two-layer `FeedForwardNeuralNet` model
You can use the `LogisticRegression` class for reference<jupyter_code>class FeedforwardNeuralNet:
def __init__(self, input_size, hidden_size, output_size, learning_rate=1e-2):
"""A two-layer feedforward neural network with ReLU activations.
(input_layer -> hidden_layer -> output_layer)
The model consists of the following sequence of opeartions:
input -> affine -> relu -> affine -> softmax
"""
self.learning_rate = learning_rate
# Initialize the model parameters
self.params = {
'W1': xavier_init([input_size, hidden_size]),
'b1': np.zeros([hidden_size]),
'W2': xavier_init([hidden_size, output_size]),
'b2': np.zeros([output_size]),
}
# Define layers
############################################################
# TODO
self.affine1 = Affine()
self.relu = ReLU()
self.affine2 = Affine()
self.cross_entropy = CategoricalCrossEntropy()
############################################################
def predict(self, X):
"""Generate predictions for one minibatch.
Args:
X: data matrix, shape (N, D)
Returns:
Y_pred: predicted class probabilities, shape (N, D)
Y_pred[n, k] = probability that sample n belongs to class k
"""
############################################################
# TODO
X1 = X
logits1 = self.affine1.forward(X1,self.params['W1'], self.params['b1'])
X2 = self.relu.forward(logits1)
logits2 = self.affine2.forward(X2,self.params['W2'], self.params['b2'])
Y_pred = softmax(logits2, axis=1)
############################################################
return Y_pred
def step(self, X, Y):
"""Perform one step of gradient descent on the minibatch of data.
1. Compute the cross-entropy loss for given (X, Y).
2. Compute the gradients of the loss w.r.t. model parameters.
3. Update the model parameters using the gradients.
Args:
X: data matrix, shape (N, D)
Y: target labels in one-hot format, shape (N, K)
Returns:
loss: loss for (X, Y), float, (a single number)
"""
############################################################
# TODO
# Forward pass - compute the loss on training data
X1 = X
logits1 = self.affine1.forward(X1,self.params['W1'], self.params['b1'])
X2 = self.relu.forward(logits1)
logits2 = self.affine2.forward(X2,self.params['W2'], self.params['b2'])
loss = self.cross_entropy.forward(logits2, Y)
# Backward pass - compute the gradients of loss w.r.t. all the model parameters
grads = {}
d_logits2, _ = self.cross_entropy.backward(1.0)
d_hidden, grads['W2'], grads['b2'] = self.affine2.backward(d_logits2)
d_logits1 = self.relu.backward(d_hidden)
_, grads['W1'], grads['b1'] = self.affine1.backward(d_logits1)
# Apply the gradients
for p in self.params:
self.params[p] = self.params[p] - self.learning_rate * grads[p]
############################################################
return loss
H = 32 # size of the hidden layer
# Specify optimization parameters
learning_rate = 1e-2
max_epochs = 501
report_frequency = 50
model = FeedforwardNeuralNet(input_size=D, hidden_size=H, output_size=K, learning_rate=learning_rate)
for epoch in range(max_epochs):
loss = model.step(X_train, Y_train)
if epoch % report_frequency == 0:
print(f'Epoch {epoch:4d}, loss = {loss:.4f}')
y_test_pred = model.predict(X_test).argmax(1)
y_test_true = Y_test.argmax(1)
print(f'test set accuracy = {accuracy_score(y_test_true, y_test_pred):.3f}')<jupyter_output>test set accuracy = 0.938
|
no_license
|
/DL1_backprop/exercise_07_notebook.ipynb
|
Yi-Hsiangf/IN2064-ML
| 7 |
<jupyter_start><jupyter_text># Classifier NNs
## NNs summary
- **Input: an uncertainty realization vector.**
- **Output: a label vector with multiple choices.**
- NNs components
- Dense
- BatchNormalization
- DropOut
    - Sigmoid (multi-label output)
- Optimizer
- Adam
- Loss
- BinaryCrossEntropy
- Metric
- Accuracy [%]<jupyter_code>import pickle
import pprint
from sklearn.model_selection import train_test_split
from os import path
from datetime import datetime
from packaging import version
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers
from tensorflow.keras import metrics
import matplotlib.pyplot as plt
import numpy as np
import seaborn as sns
import math<jupyter_output><empty_output><jupyter_text>## prepare dataset<jupyter_code>test_cases = [
'pglib_opf_case24_ieee_rts.pickle',
'pglib_opf_case30_ieee.pickle',
'pglib_opf_case57_ieee.pickle',
'pglib_opf_case73_ieee_rts.pickle',
'pglib_opf_case89_pegase.pickle',
'pglib_opf_case118_ieee.pickle',
'pglib_opf_case162_ieee_dtc.pickle',
'pglib_opf_case179_goc.pickle',
'pglib_opf_case200_tamu.pickle',
'pglib_opf_case240_pserc.pickle',
'pglib_opf_case300_ieee.pickle',
'pglib_opf_case500_tamu.pickle',
'pglib_opf_case588_sdet.pickle',
# 'pglib_opf_case1354_pegase.pickle',
# 'pglib_opf_case1888_rte.pickle',
# 'pglib_opf_case1951_rte.pickle',
# 'pglib_opf_case2000_tamu.pickle'
]
# choose a dataset
case_idx = 12
file_dir = path.join('./datasets/', test_cases[case_idx])
infile = open(file_dir,'rb')
dataset = pickle.load(infile)
infile.close()
# train & test
x_train, x_test, y_train, y_test = train_test_split(dataset['x'],
dataset['y'],
test_size=0.2,
random_state=19)
# train & validation
x_train, x_val, y_train, y_val = train_test_split(x_train,
y_train,
test_size=0.2,
random_state=19)
x_train = x_train.astype('float32')
x_val = x_val.astype('float32')
x_test = x_test.astype('float32')
y_train = y_train.astype('float32')
y_val = y_val.astype('float32')
y_test = y_test.astype('float32')
print((x_train.shape, y_train.shape))
print((x_val.shape, y_val.shape))
print((x_test.shape, y_test.shape))<jupyter_output>((32000, 588), (32000, 977))
((8000, 588), (8000, 977))
((10000, 588), (10000, 977))
<jupyter_text>## Set a Classifier Model<jupyter_code>%load_ext tensorboard
# Clear any logs from previous runs
!rm -rf ./logs/
inputs = keras.Input(shape=(x_train.shape[1], ), name='uncertainty_realization')
x = layers.Dense(x_train.shape[1], activation='relu', name='dense_1')(inputs)
x = layers.BatchNormalization(name='bn_1')(x)
x = layers.Dropout(rate=0.2, name='dropout_1')(x)
x = layers.Dense(x_train.shape[1]*2, activation='relu', name='dense_2')(x)
x = layers.BatchNormalization(name='bn_2')(x)
x = layers.Dropout(rate=0.2, name='dropout_2')(x)
x = layers.Dense(y_train.shape[1], activation='relu', name='dense_3')(x)
x = layers.BatchNormalization(name='bn_3')(x)
x = layers.Dropout(rate=0.2, name='dropout_3')(x)
outputs = layers.Dense(y_train.shape[1], activation='sigmoid', name='sigmoid')(x)
model = keras.Model(inputs=inputs, outputs=outputs, name="multi-label-classification")
model.compile(
optimizer=keras.optimizers.Adam(), # Optimizer
# Loss function to minimize
loss=keras.losses.BinaryCrossentropy(),
# List of metrics to monitor
metrics=['binary_accuracy', 'binary_crossentropy'])
# Define the Keras TensorBoard callback.
logdir="logs/fit/" + datetime.now().strftime("%Y%m%d-%H%M%S")
tensorboard_callback = keras.callbacks.TensorBoard(log_dir=logdir)
model.summary()<jupyter_output>Model: "multi-label-classification"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
uncertainty_realization (Inp [(None, 588)] 0
_________________________________________________________________
dense_1 (Dense) (None, 588) 346332
_________________________________________________________________
bn_1 (BatchNormalization) (None, 588) 2352
_________________________________________________________________
dropout_1 (Dropout) (None, 588) 0
_________________________________________________________________
dense_2 (Dense) (None, 1176) 692664
_________________________________________________________________
bn_2 (BatchNormalization) (None, 1176) 4704
________________________________________[...]<jupyter_text>### Train the Model<jupyter_code>print('> Fit model on training data')
history = model.fit(
x_train,
y_train,
batch_size=128,
epochs=5,
validation_data=(x_val, y_val),
callbacks=[tensorboard_callback])
pp = pprint.PrettyPrinter(indent=4)
print('> History dict:')
pp.pprint(history.history)
%tensorboard --logdir logs<jupyter_output><empty_output><jupyter_text>### Evaluate the Model<jupyter_code>print('\n# Evaluate')
result = model.evaluate(x_test, y_test)
dict(zip(model.metrics_names, result))<jupyter_output>
# Evaluate
10000/10000 [==============================] - 2s 186us/sample - loss: 1.1690e-04 - binary_accuracy: 1.0000 - binary_crossentropy: 1.1690e-04
<jupyter_text>### Visualize the prediction<jupyter_code>y_pred = model.predict(x_test)
def result_reshape(data):
result_dim = math.ceil(math.sqrt(len(data)))
reshaped_data = np.zeros((result_dim, result_dim))
for i in range(result_dim):
for j in range(result_dim):
try:
reshaped_data[i][j] = data[result_dim * i + j]
except IndexError:
reshaped_data[i][j] = -1
return reshaped_data
def test_vs_pred(y_test, y_pred, data_idx, test_case):
y_test_reshaped = result_reshape(y_test[data_idx])
y_pred_reshaped = result_reshape(y_pred[data_idx])
fig, axes = plt.subplots(nrows=1, ncols=2, figsize=(16, 8))
fig.suptitle('Active Constraints Distribution: ' + test_case.split('.')[0], size=19, y=0.88)
axes[0].set_title('y_test', size=15, y=1.03)
axes[1].set_title('y_pred', size=15, y=1.03)
sns.heatmap(y_test_reshaped,
xticklabels=False,
yticklabels=False,
cbar_kws={'ticks': [-1, 0, 1], 'shrink': .75},
square=True,
ax=axes[0])
sns.heatmap(y_pred_reshaped,
xticklabels=False,
yticklabels=False,
cbar_kws={'ticks': [-1, 0, 1], 'shrink': .75},
square=True,
ax=axes[1])
fig.show()
file_dir = path.join('./figures/', test_case.split('.')[0] + '.png')
fig.savefig(file_dir, format='png')<jupyter_output><empty_output><jupyter_text>#### Heatmap label:
- **1: active constraint**
- **0: inactive constraint**
- -1: meaningless data (added cells to make the array as a square matrix)
<jupyter_code>data_idx = 55
test_vs_pred(y_test, y_pred, data_idx, test_cases[case_idx])
y_train<jupyter_output><empty_output>
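The heatmaps above plot the raw sigmoid outputs. A small sketch of turning those probabilities into hard active/inactive labels and summarizing how many constraints are predicted correctly per sample; the 0.5 cut-off is an assumption, not a value taken from the notebook:

```python
import numpy as np

# Threshold the sigmoid outputs to get hard 0/1 labels per constraint.
threshold = 0.5
y_pred_hard = (y_pred >= threshold).astype('float32')

# Average fraction of constraints labelled correctly per test sample.
per_sample_accuracy = (y_pred_hard == y_test).mean(axis=1)
print('mean per-sample constraint accuracy:', per_sample_accuracy.mean())

# Fraction of samples for which every constraint label is correct.
print('exact-match ratio:', (y_pred_hard == y_test).all(axis=1).mean())
```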
|
no_license
|
/codes/experiments/2. Classifier NNs.ipynb
|
jhyun0919/Project_EE394V_SPR2020
| 7 |
<jupyter_start><jupyter_text># Conditional statements## Basic usageAn `if` statement decides, based on a given condition, whether a particular block of code should be executed, for example checking whether a number is positive:<jupyter_code>x = 0.5
if x > 0:
print "Hey!"
print "x is positive"<jupyter_output>Hey!
x is positive
<jupyter_text>Here, if `x > 0` is `False`, the program will not execute the two `print` statements. Although the condition is still defined with the `if` keyword, unlike **C, Java** and other languages, **Python** does not use `{}` to enclose the region controlled by the `if` statement. **Python** relies on indentation instead. Likewise, the condition does not need to be wrapped in `()`. The two statements in the example above:
```python
print "Hey!"
print "x is positive"
```
form a code block: lines in the same code block use the same indentation, and together they make up the body of this `if` statement. Different indentation levels indicate different code blocks, for example when `x > 0`:<jupyter_code>x = 0.5
if x > 0:
print "Hey!"
print "x is positive"
print "This is still part of the block"
print "This isn't part of the block, and will always print."<jupyter_output>Hey!
x is positive
This is still part of the block
This isn't part of the block, and will always print.
<jupyter_text>When `x < 0`:<jupyter_code>x = -0.5
if x > 0:
print "Hey!"
print "x is positive"
print "This is still part of the block"
print "This isn't part of the block, and will always print."<jupyter_output>This isn't part of the block, and will always print.
<jupyter_text>In both examples, the last statement is not part of the `if` statement, so it is executed whether or not the condition holds. A complete `if` structure usually looks like this (note: the `:` after each condition is required, and the indentation must be consistent):
    if <condition 1>:
    elif <condition 2>:
    else:
When condition 1 is satisfied, the statements under `if` are executed; when it is not, control moves to `elif` and condition 2 is checked: if it holds, the statements under `elif` are executed, otherwise the statements under `else` are executed. Extending the example above:<jupyter_code>x = 0
if x > 0:
print "x is positive"
elif x == 0:
print "x is zero"
else:
print "x is negative"<jupyter_output>x is zero
<jupyter_text>There is no limit on the number of `elif` clauses: there can be one, several, or none at all. There can be at most one `else`, and it can also be omitted. The keywords `and`, `or`, `not` can be used to combine multiple conditions:<jupyter_code>x = 10
y = -5
x > 0 and y < 0
not x > 0
x < 0 or y < 0<jupyter_output><empty_output><jupyter_text>Here is a simple example: suppose we want to determine whether a year is a leap year. By the definition of a leap year, we only need to check whether the year is divisible by 4 but not by 100, or exactly divisible by 400:<jupyter_code>year = 1900
if year % 400 == 0:
print "This is a leap year!"
# executed only when both conditions are satisfied
elif year % 4 == 0 and year % 100 != 0:
print "This is a leap year!"
else:
print "This is not a leap year."<jupyter_output>This is not a leap year.
<jupyter_text>## Truth value testing**Python** is not limited to boolean variables as conditions; any expression can be used directly as the condition of an `if` statement: most expression values are treated as `True`, but the following values are treated as `False`:- False
- None
- 0
- empty string, empty list, empty dictionary, empty set<jupyter_code>mylist = [3, 1, 4, 1, 5, 9]
if mylist:
print "The first element is:", mylist[0]
else:
print "There is no first element."<jupyter_output>The first element is: 3
<jupyter_text>Change it to an empty list:<jupyter_code>mylist = []
if mylist:
print "The first element is:", mylist[0]
else:
print "There is no first element."<jupyter_output>There is no first element.
|
no_license
|
/02. python essentials/02.14 if statement.ipynb
|
xiangliTHU/python-tutorial
| 8 |
<jupyter_start><jupyter_text># Create the dataset from scratch
Crawl the GitHub repos to find the code files/snippets written in different languages. The easiest way I found was to clone the relevant GitHub repos, use glob.glob with the file extensions to get the path names of all the files, and save the files into different folders with the shutil copy command. <jupyter_code># Create separate folders for each file extension type to store the corresponding files
os.makedirs("D:\\dataset\\test\\java")
os.makedirs("D:\\dataset\\test\\python")
os.makedirs("D:\\dataset\\test\\javascript")
os.makedirs("D:\\dataset\\test\\c")
# Get list of all .java files from all the subdirectories of your cloned repo and save them
import os, glob
import shutil
destination = "D:\\dataset\\test\\java"
for filename in glob.glob('C:\\Users\\Yogita\\Downloads\\guava-master\\guava-master\\**\\*.java', recursive = True):
shutil.copy(filename, destination)
# Get list of all .python files from all the subdirectories of your cloned repo and save them
destination = "D:\\dataset\\test\\python"
for filename in glob.glob('C:\\Users\\Downloads\\repo_name\\**\\*.py', recursive = True):
shutil.copy(filename, destination)
# Get list of all .c files from all the subdirectories of your repo
destination = "D:\\dataset\\test\\c"
for filename in glob.glob('C:\\Users\\Downloads\\repo_name\\**\\*.c', recursive = True):
shutil.copy(filename, destination)
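The same glob-and-copy pattern repeats for each extension (including the .js block below). As a compact alternative, a sketch along these lines, with a hypothetical `ext_to_dest` mapping and placeholder repo paths, could drive all of the copies from a single loop:

```python
import glob
import os
import shutil

# hypothetical mapping from glob pattern to destination folder
ext_to_dest = {
    "*.java": "D:\\dataset\\test\\java",
    "*.py": "D:\\dataset\\test\\python",
    "*.c": "D:\\dataset\\test\\c",
    "*.js": "D:\\dataset\\test\\javascript",
}
repo_root = "C:\\Users\\Downloads\\repo_name"  # placeholder clone location

for pattern, destination in ext_to_dest.items():
    os.makedirs(destination, exist_ok=True)
    # recursively collect matching files and copy them into the language folder
    for filename in glob.glob(os.path.join(repo_root, "**", pattern), recursive=True):
        shutil.copy(filename, destination)
```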
# Get list of all .js files from all the subdirectories of your repo
destination = "D:\\dataset\\test\\c"
for filename in glob.glob('C:\\Users\\Downloads\\repo_name\\**\\*.js', recursive = True):
shutil.copy(filename, destination)<jupyter_output><empty_output><jupyter_text>## Loading the data and converting into dataframe for further processing
The 'load_files' function of the 'sklearn.datasets' package loads the files from the subfolders created above (one per file type); with load_content=False only the filenames are collected, the real data stays in the files, and the subfolder names are treated as the categories. The function returns a dictionary-like object with attributes such as the filenames of the files holding the data, the target (classification labels), the label names, and a description of the dataset.<jupyter_code># load the data files
import sklearn.datasets as skds
import pandas as pd
from pathlib import Path

rawdata = skds.load_files("D:\\dataset", load_content=False)
rawdata
#print(rawdata.target_names, rawdata.target)
# read data from the files and append to make a list
files = []
i=0
for file in rawdata.filenames:
files.append((file,rawdata.target_names[rawdata.target[i]],rawdata.target[i],Path(file).read_text(encoding="utf8")))
i += 1
# Convert the Filename, label and data available as list into a dataframe
data = pd.DataFrame.from_records(files, columns = ('filename', 'language', 'label','text'))
data.head()
<jupyter_output><empty_output><jupyter_text>## Create train and test datasets<jupyter_code>
import random
from sklearn.model_selection import train_test_split

random.seed(23)
filename_train, filename_test, y_train, y_test, label_train, label_test, data_train, data_test = train_test_split(
data.filename, data.language, data.label, data.text, test_size=0.2)
data_train.head()
print(len(data_train), len(data_test), len(y_train), len(y_test), len(filename_train), len(filename_test))
#print(y_train)
<jupyter_output>2055 514 2055 514 2055 514
<jupyter_text># Feature Extraction
Having created the train and test datasets, let us apply machine learning to identify the type of language a file is written in.
To do so, the text has to be converted to numeric tensors; this is called vectorizing the text. The process involves breaking the text down into characters, words, or n-grams (tokens), a step known as tokenisation, and then associating a numeric vector with each token.
I used two techniques to convert the text into vectors - One hot encoding and Word embedding using Keras. The output of One hot encoding is fed to the Naive Bayes classifier, and the output of word embedding layer is fed to the deep neural networks.
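As a quick illustration (not part of the original notebook) of what the tokenisation step produces, a toy corpus can be pushed through the Keras Tokenizer to inspect the word index and the resulting document-term matrix:

```python
from keras.preprocessing.text import Tokenizer

docs = ["def foo(): pass", "public static void main", "def bar(): return 1"]
tok = Tokenizer(num_words=20)
tok.fit_on_texts(docs)
print(tok.word_index)                                 # token -> integer index
print(tok.texts_to_matrix(docs, mode='tfidf').shape)  # (3, 20) document-term matrix
```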
## Word level One-hot encoding using Keras
This uses Keras utilities to convert the raw text into a sparse vector representation, which is then fed to a Naive Bayes classifier<jupyter_code># 1. Word level One-hot encoding using Keras -
from keras.preprocessing.text import Tokenizer

max_features = 10000
# Creates a tokenizer to take into account top 10,000 most common words
tokenizer = Tokenizer(num_words=max_features )
# fit on the tokens to build the word index
tokenizer.fit_on_texts(data_train)
# converts strings into word indices
sequences = tokenizer.texts_to_sequences(data_train)
# Convert the words into vectors
X_train = tokenizer.texts_to_matrix(data_train, mode='tfidf')
# Apply the same transformation on the test data
X_test = tokenizer.texts_to_matrix(data_test, mode='tfidf')
# word index
dictionary = tokenizer.word_index
print('Found %s unique tokens.' % len(dictionary) )
#print('Found %s unique tokens.' % len(dictionary),dictionary )
# train the Naive Bayes classifier model
from sklearn.naive_bayes import MultinomialNB
from sklearn import metrics
import numpy as np

nbmodel = MultinomialNB().fit(X_train, y_train)
# Evaluation of the performance on the test set
predicted = nbmodel.predict(X_test)
print("Accuracy:", np.mean(predicted == y_test))
metrics.confusion_matrix(y_test, predicted)
prob = nbmodel.predict_proba(X_test)
#print(metrics.classification_report(y_test, predicted, target_names=y_test))
# Output - Test data set with actual and predicted labels
for i in range(len(X_test)):
print(filename_test.iloc[i] + ' => ' + 'Actual label:' + y_test.iloc[i] +', ' + "Predicted label: " + predicted[i])
print(rawdata.target_names, prob[i].round())
<jupyter_output>D:\dataset\java\CollectionRetainAllTester.java => Actual label:java, Predicted label: java
['c', 'java', 'javascript', 'python'] [0. 1. 0. 0.]
D:\dataset\java\FloatsTest.java => Actual label:java, Predicted label: java
['c', 'java', 'javascript', 'python'] [0. 1. 0. 0.]
D:\dataset\java\UncaughtExceptionHandlers.java => Actual label:java, Predicted label: java
['c', 'java', 'javascript', 'python'] [0. 1. 0. 0.]
D:\dataset\java\TestSortedMapGenerator.java => Actual label:java, Predicted label: java
['c', 'java', 'javascript', 'python'] [0. 1. 0. 0.]
D:\dataset\java\ArrayBasedUnicodeEscaperTest_gwt.java => Actual label:java, Predicted label: java
['c', 'java', 'javascript', 'python'] [0. 1. 0. 0.]
D:\dataset\javascript\test-cli-eval-event.js => Actual label:javascript, Predicted label: javascript
['c', 'java', 'javascript', 'python'] [0. 0. 1. 0.]
D:\dataset\java\EventBus.java => Actual label:java, Predicted label: java
['c', 'java', 'javascript', 'python'] [0. 1. 0. 0.]
D:\dataset\java\p[...]<jupyter_text># Predict the language of the (unseen) test code file<jupyter_code># Predict on new data
test_files_path = "D:\\dataset - extra files\\test data"
test_data = []
for filename in os.listdir(test_files_path):
path = os.path.join(test_files_path, filename)
with open(path, encoding="utf8") as f:
read_data = f.read().split()
test_data.append(read_data)
test_data1 = pd.Series(test_data)
#print(test_data1.head())
t_data = tokenizer.texts_to_matrix(test_data1, mode='tfidf')
i=0
for t in t_data:
prec = nbmodel.predict(np.array([t]))
print("File:", os.listdir(test_files_path)[i] ,'-> Predicted label: ' + prec[0])
i+=1<jupyter_output>File: AbstractAbstractFutureTest.java -> Predicted label: java
File: AbstractBiMap.java -> Predicted label: java
File: fmc-private.h -> Predicted label: c
File: sock_example.h -> Predicted label: c
File: test-crypto-hash-stream-pipe.js -> Predicted label: javascript
File: test-crypto-hash.js -> Predicted label: javascript
File: test-crypto-hmac.js -> Predicted label: javascript
File: test-crypto-keygen.js -> Predicted label: javascript
File: test-fs-error-messages.js -> Predicted label: javascript
File: test-readline-interface.js -> Predicted label: python
File: test-util-inspect.js -> Predicted label: javascript
File: xdpsock.h -> Predicted label: c
File: xdp_tx_iptunnel_common.h -> Predicted label: c
File: __init__.py -> Predicted label: python
<jupyter_text>## using Keras word embedding<jupyter_code># Feature Extraction
max_features = 10000
# Creates a tokenizer to take into account top 10,000 most common words
tokenizer = Tokenizer(num_words=max_features )
# fit on the tokens to build the word index
tokenizer.fit_on_texts(data_train)
# converts strings into word indices
sequences = tokenizer.texts_to_sequences(data_train)
# Convert the words into vectors
x_train = tokenizer.texts_to_matrix(data_train, mode='tfidf')
x_test = tokenizer.texts_to_matrix(data_test, mode='tfidf')
from keras.preprocessing.sequence import pad_sequences

x_train = pad_sequences(x_train, 500)
x_test = pad_sequences(x_test, 500)
#dictionary = tokenizer.word_index
#print('Found %s unique tokens.' % len(dictionary) )
#print('Found %s unique tokens.' % len(dictionary),dictionary )
# categorise Target Classes
from keras.utils.np_utils import to_categorical
Y_train = to_categorical(label_train)
Y_test = to_categorical(label_test)
# Keras Model building and fitting
from keras.models import Sequential
from keras.layers import Embedding, Flatten, Dense

model = Sequential()
model.add(Embedding(max_features, 100, input_length=500))
model.add(Flatten())
model.add(Dense(512, activation='sigmoid', input_shape = (10000,)))
#model.add(Dropout(0.2))
#model.add(Dense(512, activation='sigmoid'))
#model.add(Dropout(0.3))
model.add(Dense(4, activation='softmax'))
model.summary()
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
history = model.fit(x_train, Y_train,
epochs=15,
batch_size=32, verbose =1, validation_split=0.1)
# Model evaluation
score = model.evaluate(x_test, Y_test, batch_size=32, verbose=0)
print('loss: ' , score[0], 'Test accuracy:', score[1])
# Prediction on the test dataset
prediction = model.predict(x_test)
for i in range(len(x_test)):
print(filename_test.iloc[i] + ' => ' , 'Actual label:' , Y_test[i], ', ' , "Predicted label: " , prediction[i].round(2))
<jupyter_output>D:\dataset\java\CollectionRetainAllTester.java => Actual label: [0. 1. 0. 0.] , Predicted label: [0. 1. 0. 0.]
D:\dataset\java\FloatsTest.java => Actual label: [0. 1. 0. 0.] , Predicted label: [0.01 0.98 0.01 0.01]
D:\dataset\java\UncaughtExceptionHandlers.java => Actual label: [0. 1. 0. 0.] , Predicted label: [0.02 0.88 0.03 0.08]
D:\dataset\java\TestSortedMapGenerator.java => Actual label: [0. 1. 0. 0.] , Predicted label: [0. 0.99 0. 0. ]
D:\dataset\java\ArrayBasedUnicodeEscaperTest_gwt.java => Actual label: [0. 1. 0. 0.] , Predicted label: [0.11 0.67 0.14 0.08]
D:\dataset\javascript\test-cli-eval-event.js => Actual label: [0. 0. 1. 0.] , Predicted label: [0.11 0.67 0.14 0.08]
D:\dataset\java\EventBus.java => Actual label: [0. 1. 0. 0.] , Predicted label: [0. 1. 0. 0.]
D:\dataset\java\package-info.java => Actual label: [0. 1. 0. 0.] , Predicted label: [0.11 0.67 0.14 0.08]
D:\dataset\java\AbstractPackageSanityTests.java => Actual label: [0. 1. 0. 0.] , [...]<jupyter_text># Prediction<jupyter_code>test_files_path = "D:\\dataset - extra files\\test data"
test_data = []
for filename in os.listdir(test_files_path):
path = os.path.join(test_files_path, filename)
with open(path, encoding="utf8") as f:
read_data = f.read().split()
test_data.append(read_data)
test_data1 = pd.Series(test_data)
#print(test_data1.head())
t_data = tokenizer.texts_to_matrix(test_data1, mode='tfidf')
test_dat = pad_sequences(t_data, 500)
i=0
for t in test_dat:
predict = model.predict(np.array([t]))
print("File:", os.listdir(test_files_path)[i] ,' -> Predicted label: ' , predict[0].round(2))
i+=1<jupyter_output>File: AbstractAbstractFutureTest.java -> Predicted label: [0.01 0.97 0.01 0.01]
File: AbstractBiMap.java -> Predicted label: [0.11 0.67 0.14 0.08]
File: fmc-private.h -> Predicted label: [0.11 0.67 0.14 0.08]
File: sock_example.h -> Predicted label: [0.11 0.67 0.14 0.08]
File: test-crypto-hash-stream-pipe.js -> Predicted label: [0.11 0.67 0.14 0.08]
File: test-crypto-hash.js -> Predicted label: [0.08 0.45 0.15 0.31]
File: test-crypto-hmac.js -> Predicted label: [0.11 0.67 0.14 0.08]
File: test-crypto-keygen.js -> Predicted label: [0.1 0.4 0.44 0.06]
File: test-fs-error-messages.js -> Predicted label: [0.01 0.02 0.01 0.96]
File: test-readline-interface.js -> Predicted label: [0.11 0.67 0.14 0.08]
File: test-util-inspect.js -> Predicted label: [0.02 0.04 0.93 0.01]
File: xdpsock.h [...]
|
no_license
|
/Identify language of the source code file.ipynb
|
GeetSing/Predict-the-language-of-source-code-using-Machine-Learning
| 7 |
<jupyter_start><jupyter_text># Finding the best parameters for XGBoost
- Main idea: `GridSearch`
- To try: `hyperopt + Bayesian optimization methods`
<jupyter_code>from sklearn.model_selection import cross_val_score
from sklearn.model_selection import RandomizedSearchCV,GridSearchCV
import xgboost as xgb
import pandas
import numpy as np
import signal
import os
import json
import sys
import traceback
SCRIPT_DIR = os.path.dirname(os.path.realpath('hw03'))
data_path=SCRIPT_DIR + '\\HR.csv'
params_path = SCRIPT_DIR + '\\xgboost_params_example.json'
params_path_best = SCRIPT_DIR + '\\xgboost_amamish.json'
df = pandas.read_csv(data_path)
target = 'left'
features = [c for c in df if c != target]
target = np.array(df[target])
data = np.array(df[features])
# example parameters
with open(params_path, 'r') as f:
params = json.load(f)
author_email = params.pop('author_email')
author_email,\
params
# another example of parameters
with open(params_path_best, 'r') as f:
params = json.load(f)
author_email = params.pop('author_email')
author_email,\
params
estimator = xgb.XGBClassifier(**params)
# score with the parameters of the second example
score = np.mean(cross_val_score(
estimator, data, target,
scoring='accuracy',
cv=3))
score #0.7659867306794692<jupyter_output><empty_output><jupyter_text># GridSearch<jupyter_code># final search grid
param_grid = {
'learning_rate' : [0.03,0.04,0.05,0.06,0.07],
'max_depth': [6,7,8,9],
'n_estimators': [98],
'min_child_weight': [2,3,4,5,6,7],
'seed': [42]
}
xgb_cl = xgb.XGBClassifier()
random_search = GridSearchCV(xgb_cl,param_grid,
# n_iter=param_comb,
scoring='accuracy',
n_jobs=-1,
verbose=1,
cv = 3,)
random_search.fit(data,target)<jupyter_output>Fitting 3 folds for each of 720 candidates, totalling 2160 fits
<jupyter_text># Get params<jupyter_code># helper that extracts the parameters to be submitted
def get_best_param(best_estimator):
    names = ['learning_rate', 'max_depth', 'n_estimators', 'min_child_weight', 'seed']
    dict_to_json = {}
    for name in names:
        dict_to_json[name] = best_estimator.get_xgb_params()[name]
    return dict_to_json
# score of the best estimator found by grid search
score = np.mean(cross_val_score(
random_search.best_estimator_, data, target,
scoring='accuracy',
cv=3))
score #0.7659867306794692 best = 0.7821216643328666 mybest(n est = 103) 0.7822549843301995
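As a side note, GridSearchCV already keeps track of the winning configuration and its cross-validated score, so instead of re-scoring manually one could also read them directly from the fitted `random_search` object (a small sketch):

```python
print(random_search.best_score_)   # best mean CV accuracy found on the grid
print(random_search.best_params_)  # the corresponding parameter combination
```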
# add the e-mail and check the result
param2json = get_best_param(random_search.best_estimator_)
param2json['author_email'] = str('[email protected]')
param2json<jupyter_output><empty_output><jupyter_text># Build the final json for submission<jupyter_code>import json
a = param2json
with open("xgboost_Deni130497.json", "w") as fp:
json.dump(a , fp)
# reload the saved parameters to check them
with open('xgboost_Deni130497.json', 'r') as f:
params = json.load(f)
author_email = params.pop('author_email')
author_email,\
params<jupyter_output><empty_output>
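The notes at the top of this notebook also list `hyperopt + Bayesian optimization methods` as something to try. A minimal sketch of that idea, assuming `hyperopt` is installed and reusing the same `data`/`target` arrays (the search bounds are illustrative, not tuned), might look like this:

```python
import numpy as np
import xgboost as xgb
from hyperopt import fmin, tpe, hp, Trials, STATUS_OK
from sklearn.model_selection import cross_val_score

space = {
    'learning_rate': hp.loguniform('learning_rate', np.log(0.01), np.log(0.2)),
    'max_depth': hp.quniform('max_depth', 3, 10, 1),
    'n_estimators': hp.quniform('n_estimators', 50, 200, 1),
    'min_child_weight': hp.quniform('min_child_weight', 1, 8, 1),
}

def objective(params):
    model = xgb.XGBClassifier(
        learning_rate=params['learning_rate'],
        max_depth=int(params['max_depth']),
        n_estimators=int(params['n_estimators']),
        min_child_weight=int(params['min_child_weight']),
        seed=42,
    )
    score = np.mean(cross_val_score(model, data, target, scoring='accuracy', cv=3))
    # hyperopt minimizes the objective, so return the negative accuracy
    return {'loss': -score, 'status': STATUS_OK}

trials = Trials()
best = fmin(fn=objective, space=space, algo=tpe.suggest, max_evals=50, trials=trials)
print(best)
```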
|
no_license
|
/hw03/hw03_1_opt_param_xgboost.ipynb
|
IgorDenisenko/DMIA_HW
| 4 |
<jupyter_start><jupyter_text>## Build MRCNN Model
Pass data through MRCNN and then FCN and investigate the output values from FCN
- First we generate the heatmaps and also visually check them.
- Then we pass the heatmaps to the routine that produces the scores <jupyter_code>from IPython.core.display import display, HTML
display(HTML("<style>.container { width:95% !important; }</style>"))
%matplotlib inline
%load_ext autoreload
%autoreload 2
import sys,os, pprint
pp = pprint.PrettyPrinter(indent=2, width=100)
print('Current working dir: ', os.getcwd())
pp.pprint(sys.path)
if '..' not in sys.path:
print("appending '..' to sys.path")
sys.path.append('..')
import numpy as np
import mrcnn.utils as utils
import mrcnn.visualize as visualize
from mrcnn.prep_notebook import build_mrcnn_inference_pipeline, get_inference_batch, get_image_batch, run_mrcnn_detection
from mrcnn.coco import prep_coco_dataset
input_parms =" --batch_size 1 "
input_parms +=" --mrcnn_logs_dir train_mrcnn_coco_subset "
# input_parms +=" --fcn_logs_dir train_fcn8_subset "
input_parms +=" --mrcnn_model last "
input_parms +=" --sysout screen "
# input_parms +=" --coco_classes 78 79 80 81 82 44 46 47 48 49 50 51 34 35 36 37 38 39 40 41 42 43 10 11 13 14 15 "
parser = utils.command_line_parser()
args = parser.parse_args(input_parms.split())<jupyter_output><empty_output><jupyter_text>## Defined training datasets<jupyter_code>from mrcnn.prep_notebook import build_coco_config
mrcnn_config = build_coco_config('mrcnn', mode='inference', args = args, verbose = 1)
##------------------------------------------------------------------------------------
## Build & Load Training and Validation datasets
##------------------------------------------------------------------------------------
# del dataset
# chair/cound/dining tbl/ electronics/ appliances -train : 34562 val: 1489
# furn_elect_appl = [62, 63, 67, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82]
## appliances / kitchen / sports -- train: 46891 val: 1954
# appl_ktch_sports = [78, 79, 80, 81, 82,44, 46, 47, 48, 49, 50, 51,34, 35, 36, 37, 38, 39, 40, 41, 42, 43]
print(args.coco_classes)
dataset , generator = prep_coco_dataset(["train", "val"], mrcnn_config, generator = True, shuffle = True, load_coco_classes=args.coco_classes, loadAnns = 'all_classes')
# dataset = prep_coco_dataset(["train", "val35k"], mrcnn_config, generator = False , return_coco = True)
# dataset_val, generator_val = prep_coco_dataset(["minival"] , mrcnn_config, generator = True , return_coco = True, load_coco_classes=load_class_ids)<jupyter_output>None
COCO loading annotations file F:\MLDatasets\coco2014\annotations/instances_train2014.json into memory...
Done (t=24.49s)
creating index...
index created!
loading all classes
image dir : F:\MLDatasets\coco2014\train2014
json_path_dir : F:\MLDatasets\coco2014\annotations/instances_train2014.json
number of images : 82783
image_ids[:10] : [262145, 131074, 131075, 393221, 393223, 393224, 524297, 393227, 393228, 262146]
image_ids[1000:1010] : [22098, 1518, 132591, 1522, 394739, 525813, 1526, 525815, 327935, 394748]
COCO loading annotations file F:\MLDatasets\coco2014\annotations/instances_val2014.json into memory...
Done (t=14.01s)
creating index...
index created!
loading all classes
image dir : F:\MLDatasets\coco2014\val2014
json_path_dir : F:\MLDatasets\coco2014\annotations/instances_val2014.json
number of images : 40504
image_ids[:10] : [294605, 458755, 262148, 185686, 535523, 360073, 192039, 393226, 4888[...]<jupyter_text>#### Display active classes of `dataset`<jupyter_code>print(len(dataset.image_info)) # , len(dataset_val.image_info))
# print(dataset.ext_to_int_id)
# pp.pprint(dataset.class_from_source_map)
print(dataset.active_class_ids)
print(dataset.active_ext_class_ids)
# for ext_cls in dataset.active_ext_class_ids:
# int_cls = dataset.map_source_class_id( "coco.{}".format(ext_cls))
# print('ext_cls:',ext_cls, 'int_cls: ', int_cls, 'name:', dataset.class_info[int_cls]['category'],'-',dataset.class_info[int_cls]['name'])
pp.pprint(dataset.active_class_info)
# from operator import itemgetter
# id = 1
# for
# # cls_id = pairwise_list[id][0]
# # cls_name = dataset.class_names[cls_id]
# print(dataset.active_class_info.get)
# sortedlist = sorted(dataset.active_class_info.keys(), key=itemgetter('name'), reverse=True)
# print(sortedlist)
# print(np.where(dataset.image_ids == 489914))
# for i in dataset.image_info[0]:
# print(i)
ext_img_ids = [image_inf['id'] for image_inf in dataset.image_info]
# training: 489914 , 475929, 228407, 535233 val: 334642, 124983
print(len(ext_img_ids))
print(ext_img_ids[:100])
# print(len(dataset.image_ids))
print(ext_img_ids.index(228407))<jupyter_output>123287
[262145, 131074, 131075, 393221, 393223, 393224, 524297, 393227, 393228, 262146, 393230, 262159, 524291, 322975, 131093, 524311, 25, 393242, 131099, 262172, 524317, 30, 524320, 371376, 34, 131107, 262180, 524325, 410266, 262184, 283996, 524331, 202617, 131118, 262191, 49, 524338, 524340, 131126, 9, 131128, 262201, 362257, 61, 262207, 524352, 458763, 262213, 393286, 71, 72, 131084, 393290, 393291, 393292, 262221, 393294, 393297, 288061, 86, 524375, 131160, 89, 393306, 131087, 92, 94, 262239, 524386, 131172, 524389, 131174, 532398, 21826, 109, 110, 567997, 423362, 113, 262260, 262261, 524406, 131197, 22696, 127, 262273, 524420, 160614, 568001, 131208, 284012, 138, 131211, 524428, 240322, 262286, 131215, 144, 393362, 152942]
39422
<jupyter_text>#### Test problem in Coco when image has no annotations (problem when calling super()<jupyter_code>ctr = 0
for image_inf in dataset.image_info:
if len(image_inf['annotations']) == 0:
print('external image id:', image_inf['id'], ' has no annotations ')
sup = dataset.load_mask(image_id)
pp.pprint(image_inf)
print(sup)
ctr += 1
print(sys.version)
test = []
if test:
print('1')
else:
print('2')
from mrcnn.coco import CocoDataset, CocoConfig
print(dataset.__class__)
print(dir(dataset))
print('ctr: ', ctr)
for ext_image_id in [489914 , 475929, 228407, 535233, 334642, 124983]:
image_id = ext_img_ids.index(ext_image_id)
print(ext_image_id, ' --->', image_id)
if image_id%100 == 0:
print(' image id at ', idx)
image_info = dataset.image_info[image_id]
# dataset.load_mask(image_id)
pp.pprint(image_info)
# if image_info["source"] != "coco":
# print(' source is not coco: id: ', image_id)
# sup = super(dataset.__class__, dataset).load_mask(image_id)
# print(sup)
# print(len(dataset.image_ids))
# print(len(ext_img_ids))
<jupyter_output>489914 ---> 62003
{ 'annotations': [],
'height': 512,
'id': 489914,
'path': 'F:\\MLDatasets\\coco2014\\train2014\\COCO_train2014_000000489914.jpg',
'source': 'coco',
'width': 640}
475929 ---> 53464
{ 'annotations': [],
'height': 640,
'id': 475929,
'path': 'F:\\MLDatasets\\coco2014\\train2014\\COCO_train2014_000000475929.jpg',
'source': 'coco',
'width': 427}
228407 ---> 39422
{ 'annotations': [],
'height': 500,
'id': 228407,
'path': 'F:\\MLDatasets\\coco2014\\train2014\\COCO_train2014_000000228407.jpg',
'source': 'coco',
'width': 375}
535233 ---> 7213
{ 'annotations': [],
'height': 450,
'id': 535233,
'path': 'F:\\MLDatasets\\coco2014\\train2014\\COCO_train2014_000000535233.jpg',
'source': 'coco',
'width': 340}
334642 ---> 87140
{ 'annotations': [],
'height': 427,
'id': 334642,
'path': 'F:\\MLDatasets\\coco2014\\val2014\\COCO_val2014_000000334642.jpg',
'source': 'coco',
'width': 640}
124983 ---> 97597
{ 'annotations': [],
'height'[...]<jupyter_text>#### display category to class info<jupyter_code>print()
print('dataset.category_to_class_map')
pp.pprint(dataset.category_to_class_map)
print()
print('dataset.category_to_external_class_map')
pp.pprint(dataset.category_to_external_class_map)<jupyter_output>
dataset.category_to_class_map
{ 'accessory': [25, 26, 27, 28, 29],
'animal': [15, 16, 17, 18, 19, 20, 21, 22, 23, 24],
'appliance': [69, 70, 71, 72, 73],
'background': [0],
'electronic': [63, 64, 65, 66, 67, 68],
'food': [47, 48, 49, 50, 51, 52, 53, 54, 55, 56],
'furniture': [57, 58, 59, 60, 61, 62],
'indoor': [74, 75, 76, 77, 78, 79, 80],
'kitchen': [40, 41, 42, 43, 44, 45, 46],
'outdoor': [10, 11, 12, 13, 14],
'person': [1],
'sports': [30, 31, 32, 33, 34, 35, 36, 37, 38, 39],
'vehicle': [2, 3, 4, 5, 6, 7, 8, 9]}
dataset.category_to_external_class_map
{ 'accessory': [27, 28, 31, 32, 33],
'animal': [16, 17, 18, 19, 20, 21, 22, 23, 24, 25],
'appliance': [78, 79, 80, 81, 82],
'background': [0],
'electronic': [72, 73, 74, 75, 76, 77],
'food': [52, 53, 54, 55, 56, 57, 58, 59, 60, 61],
'furniture': [62, 63, 64, 65, 67, 70],
'indoor': [84, 85, 86, 87, 88, 89, 90],
'kitchen': [44, 46, 47, 48, 49, 50, 51],
'outdoor': [10, 11, 13, 14, 15],
'person':[...]<jupyter_text>#### Display attributes of `dataset`<jupyter_code># for cls_info in dataset.class_info:
# print(cls_info)
# source_key = cls_info['source']+'.'+str(cls_info['id'])
# internal_id = dataset.class_from_source_map[source_key]
# print('{:25s} external class id: {:3d} internal id: {:3d}'.format(cls_info['name'], cls_info['id'], cls_info['internal_id']))
# cls_info['internal_id'] = internal_id
# dataset.class_from_source_map
# dataset.source_objs
# pp.pprint(dataset.class_info)
# pp.pprint(dataset.class_from_source_map)
# print(dataset.get_source_class_id(13,"coco"))
# dataset.sources
# print(dataset.source_class_ids)
# print(len(dataset._image_ids))
# print(dataset._image_ids[:10])
# print(dataset.image_ids[:10])
# pp.pprint(dataset.image_info[0].keys())
# pp.pprint(dataset.image_info[0]['annotations'])
# for i in range(1,81):
# print(' internal id : ', i, ' coco id:', dataset.get_source_class_id(i,'coco'))
# print(dataset.source_objs)
# pp.pprint(dataset.category_to_class_map)
# pp.pprint(dataset.category_to_external_class_map)
# pp.pprint(dataset.ext_to_int_id)
# pp.pprint(dataset.int_to_ext_id)
active_class_ids = sorted([i for i in dataset.active_class_info])
active_class_names = [dataset.active_class_info[i]['name'] for i in active_class_ids]
print(dataset.active_class_ids)
print(active_class_ids)
print(active_class_names)
# pp.pprint(dataset.class_info)<jupyter_output>[30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 69, 70, 71, 72, 73]
[30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 69, 70, 71, 72, 73]
['frisbee', 'skis', 'snowboard', 'sports ball', 'kite', 'baseball bat', 'baseball glove', 'skateboard', 'surfboard', 'tennis racket', 'bottle', 'wine glass', 'cup', 'fork', 'knife', 'spoon', 'bowl', 'microwave', 'oven', 'toaster', 'sink', 'refrigerator']
<jupyter_text>## Display Images### Get next image from generator and display <jupyter_code># batch_x, batch_y = next(generator)
IMAGE_LIST = batch_x[1][:,0]
print(IMAGE_LIST)
# IMAGE_LIST = [179, 180]
# batch_x, batch_y = data_gen_simulate(dataset, mrcnn_config, [179, 180, 181, 182])
display_training_batch(dataset, batch_x, masks = False)
print('IMAGE_LIST : ', IMAGE_LIST)
for image_id in IMAGE_LIST:
print()
print('IMAGE_ID : ', image_id)
annotations = dataset.image_info[image_id]["annotations"]
# print(annotations)
for annotation in annotations:
class_id = dataset.map_source_class_id( "coco.{}".format(annotation['category_id']))
print("coco.id: {} --> class_id : {} - {} ".format(annotation['category_id'],class_id, dataset.class_names[class_id]))<jupyter_output>IMAGE_LIST : [ 9146 25735 6467 13167]
IMAGE_ID : 9146
coco.id: 37 --> class_id : 33 - sports ball
coco.id: 77 --> class_id : 68 - cell phone
coco.id: 41 --> class_id : 37 - skateboard
coco.id: 27 --> class_id : 25 - backpack
coco.id: 31 --> class_id : 27 - handbag
coco.id: 31 --> class_id : 27 - handbag
coco.id: 1 --> class_id : 1 - person
coco.id: 1 --> class_id : 1 - person
coco.id: 1 --> class_id : 1 - person
coco.id: 1 --> class_id : 1 - person
coco.id: 1 --> class_id : 1 - person
coco.id: 1 --> class_id : 1 - person
coco.id: 1 --> class_id : 1 - person
coco.id: 1 --> class_id : 1 - person
coco.id: 1 --> class_id : 1 - person
coco.id: 1 --> class_id : 1 - person
coco.id: 27 --> class_id : 25 - backpack
coco.id: 1 --> class_id : 1 - person
coco.id: 1 --> class_id : 1 - person
coco.id: 1 --> class_id : 1 - person
coco.id: 77 --> class_id : 68 - cell phone
coco.id: 1 --> class_id : 1 - person
IMAGE_ID : 25735
coco.id: 59 --> class_id :[...]<jupyter_text>### manipulate COCO object<jupyter_code># coco = dataset.source_objs['train']
# loadCat = coco.loadCats()
# print('coco classes: ', type(loadCat), len(loadCat))
# pp.pprint(loadCat)
# coco_class_ids = []
# for source in dataset.source_objs:
# print(source, dataset.source_objs[source])
# src_coco = dataset.source_objs[source]
# src_class_ids = sorted(src_coco.getCatIds())
# coco_class_ids.extend(src_class_ids)
# print(type(coco_class_ids), len(coco_class_ids))
# print(coco_class_ids)
# coco_class_ids = sorted(list(set(coco_class_ids)))
# print(type(coco_class_ids), len(coco_class_ids))
# print(coco_class_ids)<jupyter_output><empty_output><jupyter_text>## Build pairwise relation matrix<jupyter_code>### Get ONLY COCO class ids
# matrix_coco_ids = [i['internal_id'] for i in dataset.class_info if i['source'] == 'coco']
for i in dataset.active_coco_class_ids:
print(i, ' --> int_id: ', dataset.ext_to_int_id[i])
print(dataset.active_class_ids)
print(dataset.class_ids)
### Create pairwise matrix
matrix_class_ids = dataset.class_ids
num_classes = len(matrix_class_ids)
# pairwise_matrix = np.zeros((num_classes+1,num_classes+1))
pairwise_matrix = np.zeros((num_classes,num_classes))
pairwise_list = []
class_pairwise_list.append({
"id":0,
"external_id": 0,
"name": 'BG',
"category": 'background',
"img_count": imgCount })
row = 1
for i in matrix_class_ids:
# print(' row: {:3d} '.format(i), end='\n')
current_row = [0]
class_pairwise_list = []
for j in matrix_class_ids:
i_coco_id = dataset.int_to_ext_id[i]
j_coco_id = dataset.int_to_ext_id[j]
imgCount = 0
for source in dataset.source_objs:
src_coco = dataset.source_objs[source]
# print( ' int classes ', i,j, 'coco classes: ', i_coco_id, j_coco_id, type(imgCount), len(imgCount))
imgCount += len(src_coco.getImgIds(catIds=[i_coco_id,j_coco_id]))
print( ' int classes ', i,j, 'coco classes: ', i_coco_id, j_coco_id, type(imgCount), imgCount)
# print( ' {:2d} [{:2d}] {:12s}-{:20s} {:2d} [{:2d}] {:12s}-{:20s} number of images: {:5d}'.format(
# i, i_coco_id, dataset.class_info[i]['category'], dataset.class_names[i], j, j_coco_id, dataset.class_info[j]['category'],dataset.class_names[j],imgCount))
class_pairwise_list.append({
"id":j,
"external_id": j_coco_id,
"name":dataset.class_names[j],
"category": dataset.class_info[j]['category'],
"img_count": imgCount })
pairwise_matrix[i,j] = imgCount
# current_row.append(imgCount)
pairwise_list.append((i,class_pairwise_list))
# pairwise_matrix[row] = current_row
row += 1
print(len(pairwise_list), end = '')<jupyter_output> int classes 0 0 coco classes: 0 0 <class 'int'> 0
int classes 0 1 coco classes: 0 1 <class 'int'> 0
int classes 0 2 coco classes: 0 2 <class 'int'> 0
int classes 0 3 coco classes: 0 3 <class 'int'> 0
int classes 0 4 coco classes: 0 4 <class 'int'> 0
int classes 0 5 coco classes: 0 5 <class 'int'> 0
int classes 0 6 coco classes: 0 6 <class 'int'> 0
int classes 0 7 coco classes: 0 7 <class 'int'> 0
int classes 0 8 coco classes: 0 8 <class 'int'> 0
int classes 0 9 coco classes: 0 9 <class 'int'> 0
int classes 0 10 coco classes: 0 10 <class 'int'> 0
int classes 0 11 coco classes: 0 11 <class 'int'> 0
int classes 0 12 coco classes: 0 13 <class 'int'> 0
int classes 0 13 coco classes: 0 14 <class 'int'> 0
int classes 0 14 coco classes: 0 15 <class 'int'> 0
int classes 0 15 coco classes: 0 16 <class 'int'> 0
int classes 0 16 coco classes: 0 17 <class 'int'> 0
int classes 0 17 coco classes: 0 18 <class 'int'> 0
int classes 0 18 coco classe[...]<jupyter_text>#### used if we create separate pairwise matrices for eeach coco source<jupyter_code>np.set_printoptions(linewidth=200,precision=4,threshold=10000, suppress = True)
# print(pairwise_matrices["train"][69:75,69:75])
# print(pairwise_matrices["val35k"][69:75,69:75])
# pairwise_matrix_all = np.zeros((num_classes+1,num_classes+1))
# for source in dataset.source_objs:
# pairwise_matrix_all += pairwise_matrices[source]
print(type(pairwise_list), len(pairwise_list), len(pairwise_list[0]), len(pairwise_list[0][1]))
pp.pprint(pairwise_list[0])
print(pairwise_matrix.shape)
print(pairwise_matrix.shape)<jupyter_output>(82, 82)
<jupyter_text>#### Display pairwise counts for a given class ( id is index into `pairwise_list`)<jupyter_code># float_formatter = lambda x: "%10.4f" % x
# int_formatter = lambda x: "%10d" % x
# np_format = {}
# np_format['float']=float_formatter
# np_format['int']=int_formatter
# np.set_printoptions(linewidth=195, precision=3, floatmode='fixed', threshold =10000, formatter = np_format)
np.set_printoptions(linewidth=195, precision=3, floatmode='fixed', threshold =10000)
print(pairwise_matrix.shape)
print(len(pairwise_list))
print(pairwise_matrix[0:15,0:15])
pp.pprint(pairwise_list[1])<jupyter_output>(81, 81)
81
[[ 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000]
[ 0.000 66808.000 2765.000 8878.000 2914.000 1058.000 3154.000 1704.000 4151.000 2063.000 2563.000 731.000 666.000 340.000 4273.000]
[ 0.000 2765.000 3401.000 1287.000 478.000 19.000 451.000 111.000 469.000 129.000 483.000 110.000 129.000 81.000 350.000]
[ 0.000 8878.000 1287.000 12786.000 1436.000 366.000 2227.000 446.000 3816.000 277.000 2651.000 804.000 806.000 494.000 858.000]
[ 0.000 2914.000 478.000 1436.000 3661.000 17.000 313.000 27.000 662.000 24.000 337.000 44.000 83.000 34.000 124.000]
[ 0.000 1058.000 19.000 366.000 17.000 3083.000 74.000 8.000 661.000 80.000 23.000 5.000 5.000 3.000 26.000]
[ 0.000 3154.000 451.000 2227.000 313.000 74.000 4141.00[...]<jupyter_text>#### display pairwise relations for a given coco class<jupyter_code>from operator import itemgetter
id = 1
cls_id = pairwise_list[id][0]
cls_name = dataset.class_names[cls_id]
sortedlist = sorted(pairwise_list[id][1], key=itemgetter('category','img_count'), reverse=True)
print(type(sortedlist))
print(cls_id, '-' , cls_name)
for item in sortedlist:
print( ' {:2d} ({:2d}), {:12s},{:20s} count: {:5d}'.format(
item["id"],item["external_id"] ,item["category"], item["name"] , item["img_count"]))
<jupyter_output><class 'list'>
1 - person
3 ( 3), vehicle ,car count: 8878
8 ( 8), vehicle ,truck count: 4151
6 ( 6), vehicle ,bus count: 3154
4 ( 4), vehicle ,motorcycle count: 2914
2 ( 2), vehicle ,bicycle count: 2765
9 ( 9), vehicle ,boat count: 2063
7 ( 7), vehicle ,train count: 1704
5 ( 5), vehicle ,airplane count: 1058
33 (37), sports ,sports ball count: 4256
37 (41), sports ,skateboard count: 3541
39 (43), sports ,tennis racket count: 3527
38 (42), sports ,surfboard count: 3507
31 (35), sports ,skis count: 3164
36 (40), sports ,baseball glove count: 2701
35 (39), sports ,baseball bat count: 2571
34 (38), sports ,kite count: 2181
30 (34), sports ,frisbee count: 1894
32 ([...]<jupyter_text>## Display pairwise counts### `display_pairwise_heatmap` routine<jupyter_code>import matplotlib.pyplot as plt
def display_pairwise_heatmap(show_matrix, show_labels, category, colormap = cm.coolwarm):
# show_labels = labels[1:10]
num_classes = len(show_labels)
# print(' num classes: ', num_classes, 'matrix shape: ',show_matrix.shape)
fig, ax = plt.subplots(1,1,figsize=(num_classes,num_classes))
im = ax.imshow(show_matrix, cmap=colormap)
# cmap=cm.bone, # cmap=cm.Dark2 # cmap = cm.coolwarm # cmap=cm.YlOrRd
# We want to show all ticks...
ax.set_xticks(np.arange(num_classes))
ax.set_yticks(np.arange(num_classes))
# ... and label them with the respective list entries
ax.set_xticklabels(show_labels, size = 9)
ax.set_yticklabels(show_labels, size = 9)
# Rotate the tick labels and set their alignment.
plt.setp(ax.get_xticklabels(), rotation=45, ha="right",rotation_mode="anchor")
plt.setp(ax.get_yticklabels(), rotation=45, ha="right",rotation_mode="anchor")
# Loop over data dimensions and create text annotations.
for i in range(num_classes):
for j in range(num_classes):
text = ax.text(j, i, show_matrix[i, j],
ha="center", va="center", color="w")
# if i != j:
# text = ax.text(j, i, show_matrix[i, j],
# ha="center", va="center", color="w")
# else:
# text = ax.text(j, i, ' --',
# ha="center", va="center", color="w")
ax.set_title("pairwise relations"+category.upper())
# fig.tight_layout()
plt.show()
return fig
<jupyter_output><empty_output><jupyter_text>### For a given category, get class_ids from `category_to_class_map`, and display heatmap <jupyter_code>
for category in sorted(dataset.category_to_class_map): #category_to_class_map:
if category == 'background':
continue
hmfig = display_pairwise_heatmap_2(dataset, category, pairwise_matrix)
# indices = dataset.category_to_class_map[category]
# ext_indices = dataset.category_to_external_class_map[category]
# show_labels = [str(idx)+'('+str(ext_idx)+')-'+dataset.class_names[idx] for idx, ext_idx in zip(indices, ext_indices)]
# show_matrix = (pairwise_matrix[indices])[:,indices]
# print(' ',category, len(indices), 'matrix shape',show_matrix.shape)
# print(' indices : ', indices)
# print(' ext_indices: ', ext_indices)
# print(' labels : ', show_labels)
# category_ttl = ' - '+category
# hmfig = display_pairwise_heatmap(show_matrix, show_labels, category_ttl, cm.coolwarm)
# hmfig.savefig("E:\\Users\\Kevin.Bardool\\Desktop\\NN Related files\\"+category_ttl)
def display_pairwise_heatmap_2(dataset, categories, pairwise_matrix, colormap = cm.coolwarm):
if not isinstance(categories, list):
categories = [categories]
indices = []
external_indices = []
category_ttl = ''
for category in categories: #category_to_class_map:
print(category, ' ', dataset.category_to_class_map[category])
indices.extend(dataset.category_to_class_map[category])
external_indices.extend(dataset.category_to_external_class_map[category])
category_ttl += ' - '+category
print(' indices : ', len(indices), indices)
print(' ext_indices: ', len(external_indices), external_indices)
show_labels = [str(idx)+'('+str(ext_idx)+')-'+dataset.class_names[idx] for idx, ext_idx in zip(indices, external_indices)]
show_matrix = (pairwise_matrix[indices])[:,indices]
num_classes = len(indices)
# print(' number of categories:', len(categories))
# print(' matrix shp : ', show_matrix.shape)
# print(' labels : ', len(show_labels), show_labels)
# print(' num classes: ', num_classes, 'matrix shape: ',show_matrix.shape)
fig, ax = plt.subplots(1,1,figsize=(num_classes,num_classes))
im = ax.imshow(show_matrix, cmap=colormap)
# cmap=cm.bone, # cmap=cm.Dark2 # cmap = cm.coolwarm # cmap=cm.YlOrRd
# We want to show all ticks...
ax.set_xticks(np.arange(num_classes))
ax.set_yticks(np.arange(num_classes))
# ... and label them with the respective list entries
ax.set_xticklabels(show_labels, size = 9)
ax.set_yticklabels(show_labels, size = 9)
# Rotate the tick labels and set their alignment.
plt.setp(ax.get_xticklabels(), rotation=45, ha="right",rotation_mode="anchor")
plt.setp(ax.get_yticklabels(), rotation=45, ha="right",rotation_mode="anchor")
# Loop over data dimensions and create text annotations.
for i in range(num_classes):
for j in range(num_classes):
text = ax.text(j, i, show_matrix[i, j],
ha="center", va="center", color="w")
# if i != j:
# text = ax.text(j, i, show_matrix[i, j],
# ha="center", va="center", color="w")
# else:
# text = ax.text(j, i, ' --',
# ha="center", va="center", color="w")
ax.set_title("pairwise relations "+category_ttl.upper())
# fig.tight_layout()
plt.show()
return fig
<jupyter_output><empty_output><jupyter_text>#### Display pairwise relations for all classes<jupyter_code># indices = []
# external_indices = []
# category_ttl = ''
# show_class_ids = dataset.class_ids ## [57, 58, 61, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73]
# print(show_class_ids)
# show_labels = [str(index)+'-'+dataset.class_names[index] for index in show_class_ids]
# show_matrix = (pairwise_matrix[show_class_ids])[:,show_class_ids]
# hmfig = display_pairwise_heatmap(show_matrix, show_labels, ' ACTIVE CLASSES')
# hmfig.savefig("E:\\Users\\Kevin.Bardool\\Desktop\\NN Related files\\ALL_CLASSES")<jupyter_output><empty_output><jupyter_text>#### FURNITURE - ELECTRONIC<jupyter_code>hmfig = display_pairwise_heatmap_2(dataset, ['furniture', 'electronic' ], pairwise_matrix)<jupyter_output>furniture [57, 58, 59, 60, 61, 62]
electronic [63, 64, 65, 66, 67, 68]
indices : 12 [57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68]
ext_indices: 12 [62, 63, 64, 65, 67, 70, 72, 73, 74, 75, 76, 77]
<jupyter_text>#### FURNITURE-FOOD<jupyter_code>hmfig = display_pairwise_heatmap_2(dataset, ['furniture', 'food' ], pairwise_matrix)<jupyter_output>furniture [57, 58, 59, 60, 61, 62]
food [47, 48, 49, 50, 51, 52, 53, 54, 55, 56]
indices : 16 [57, 58, 59, 60, 61, 62, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56]
ext_indices: 16 [62, 63, 64, 65, 67, 70, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61]
<jupyter_text>#### FURNITURE-INDOOR<jupyter_code>hmfig = display_pairwise_heatmap_2(dataset, ['furniture', 'indoor' ], pairwise_matrix)<jupyter_output>furniture [57, 58, 59, 60, 61, 62]
indoor [74, 75, 76, 77, 78, 79, 80]
indices : 13 [57, 58, 59, 60, 61, 62, 74, 75, 76, 77, 78, 79, 80]
ext_indices: 13 [62, 63, 64, 65, 67, 70, 84, 85, 86, 87, 88, 89, 90]
<jupyter_text>#### APPLIANCE - KITCHEN - SPORTS<jupyter_code>hmfig = display_pairwise_heatmap_2(dataset, ['appliance', 'kitchen' , 'sports'], pairwise_matrix)
# hmfig.savefig("E:\\Users\\Kevin.Bardool\\Desktop\\NN Related files\\"+category_ttl)<jupyter_output>appliance [69, 70, 71, 72, 73]
kitchen [40, 41, 42, 43, 44, 45, 46]
sports [30, 31, 32, 33, 34, 35, 36, 37, 38, 39]
indices : 22 [69, 70, 71, 72, 73, 40, 41, 42, 43, 44, 45, 46, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39]
ext_indices: 22 [78, 79, 80, 81, 82, 44, 46, 47, 48, 49, 50, 51, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43]
<jupyter_text>#### APPLIANCE - KITCHEN - OUTDOOR<jupyter_code>hmfig = display_pairwise_heatmap_2(dataset,['appliance', 'kitchen' , 'outdoor'], pairwise_matrix)
# hmfig.savefig("E:\\Users\\Kevin.Bardool\\Desktop\\NN Related files\\"+category_ttl)<jupyter_output>appliance [69, 70, 71, 72, 73]
kitchen [40, 41, 42, 43, 44, 45, 46]
outdoor [10, 11, 12, 13, 14]
indices : 17 [69, 70, 71, 72, 73, 40, 41, 42, 43, 44, 45, 46, 10, 11, 12, 13, 14]
ext_indices: 17 [78, 79, 80, 81, 82, 44, 46, 47, 48, 49, 50, 51, 10, 11, 13, 14, 15]
<jupyter_text>#### APPLIANCE - KITCHEN - FURNITURE<jupyter_code>hmfig = display_pairwise_heatmap_2(dataset,['appliance', 'kitchen' , 'furniture'], pairwise_matrix)
# hmfig.savefig("E:\\Users\\Kevin.Bardool\\Desktop\\NN Related files\\"+category_ttl)<jupyter_output>appliance [69, 70, 71, 72, 73]
kitchen [40, 41, 42, 43, 44, 45, 46]
furniture [57, 58, 59, 60, 61, 62]
indices : 18 [69, 70, 71, 72, 73, 40, 41, 42, 43, 44, 45, 46, 57, 58, 59, 60, 61, 62]
ext_indices: 18 [78, 79, 80, 81, 82, 44, 46, 47, 48, 49, 50, 51, 62, 63, 64, 65, 67, 70]
<jupyter_text>## HeatmapDataset<jupyter_code>HEATMAP_PATH = os.path.join(paths.DIR_DATASET,'coco2014_heatmaps')
print(HEATMAP_PATH)
from mrcnn.heatmap import HeatmapDataset
dataset = HeatmapDataset()
dataset.load_heatmap(mrcnn_config.COCO_DATASET_PATH, HEATMAP_PATH, 'minival')
# def load_heatmap(self, dataset_dir, heatmap_dataset_dir, subset, class_ids=None,class_map=None, return_coco=False):
dataset.prepare()<jupyter_output><empty_output><jupyter_text>#### simulate `load_heatmap()`<jupyter_code>dataset_dir =mrcnn_config.COCO_DATASET_PATH
heatmap_dataset_dir = HEATMAP_PATH
subset = 'minival'
class_ids=None
class_map=None
return_coco=False
image_dir = os.path.join(dataset_dir, "train2014" if subset == "train" else "val2014")
heatmap_dir = os.path.join(heatmap_dataset_dir, "train2014" if subset == "train" else "val2014")
# image_dir = os.path.join(dataset_dir, "train2017" if subset == "train" lse "val2017")
print(image_dir,'\n', heatmap_dir)
# Create COCO object
json_path_dict = {
"train" : "annotations/instances_train2014.json",
"val" : "annotations/instances_val2014.json",
"minival": "annotations/instances_minival2014.json",
"val35k" : "annotations/instances_valminusminival2014.json",
"test" : "annotations/image_info_test2014.json"
}
print('subset: ', subset, 'json_path_dir: ', json_path_dict[subset])
coco = COCO(os.path.join(dataset_dir, json_path_dict[subset]))
# Load all classes or a subset?
if not class_ids:
# All classes
class_ids = sorted(coco.getCatIds())
print(' ClassIds :', class_ids)
##--------------------------------------------------------------
## Get image ids - using COCO
##--------------------------------------------------------------
#All images or a subset?
if class_ids:
print(' Subset of classes')
image_ids = []
for id in class_ids:
image_ids.extend(list(coco.getImgIds(catIds=[id])))
# Remove duplicates
image_ids = list(set(image_ids))
else:
# All images
class_ids = sorted(coco.getCatIds())
print(' All classes')
image_ids = list(coco.imgs.keys())
print(' ClassIds : ', len(class_ids))
print(' Image ids : ', len(image_ids))
# # Add classes to dataset.class_info structure
for i in class_ids:
dataset.add_class("coco", i, coco.loadCats(i)[0]["name"])
len(dataset.class_info)
# image_ids[:20]
# # print(' ClassIds :', class_ids)
# # Add images to dataset.image_info structure
dataset.image_info = []
heatmap_notfound= heatmap_found = 0
print(heatmap_notfound, heatmap_found)
for i in image_ids:
print('image id: ',i)
heatmap_filename = 'hm_{:012d}.npz'.format(i)
heatmap_path = os.path.join(heatmap_dir, heatmap_filename)
## Only load image_info data structure for images where the corrsponding
## heatmap .npz file exist
if not os.path.isfile(heatmap_path):
print('file not found:::',heatmap_filename)
heatmap_notfound += 1
else:
dataset.add_image(
"coco", image_id=i,
path=os.path.join(image_dir, coco.imgs[i]['file_name']),
width=coco.imgs[i]["width"],
height=coco.imgs[i]["height"],
heatmap_path=heatmap_path
)
heatmap_found += 1
# annotations=coco.loadAnns(coco.getAnnIds(imgIds=[i], catIds=class_ids, iscrowd=None)))
print(' Images ids :', len(image_ids))
print(' Corresponding heatmap found :' , heatmap_found)
print(' Corresponding heatmap not found :' , heatmap_notfound)
print(' Total :', heatmap_found + heatmap_notfound)
print(len(dataset.image_ids))
print(len(dataset.image_info))
print(dataset.image_info[0])
# print(dataset.image_info[5000])
##--------------------------------------------------------------
## Get image ids - using walk on HEATMAP_PATH
##--------------------------------------------------------------
print(' image dir : ', image_dir)
print(' json_path_dir : ', os.path.join(dataset_dir, json_path_dict[subset]))
regex = re.compile(".*/\w+(\d{12})\.jpg")
image_ids = []
heatmap_files = next(os.walk(heatmap_dir))[2]
print('heatmap dir :' , heatmap_dir)
for hm_file in heatmap_files:
print(' Processing file: ', hm_file)
heatmap_path=os.path.join(heatmap_dir, hm_file)
i = int(os.path.splitext(hm_file.lstrip('hm_'))[0])
loaddata = np.load(heatmap_path)
print(loaddata['coco_info'])
coco_id = loaddata['coco_info'][0]
coco_filename = loaddata['coco_info'][1]
input_image_meta = loaddata['input_image_meta']
loaddata.close()
dataset.add_image(
"coco",
image_id=i,
path=os.path.join(image_dir, coco.imgs[i]['file_name']),
width=coco.imgs[i]["width"],
height=coco.imgs[i]["height"],
heatmap_path=os.path.join(heatmap_dir, 'hm_{:012d}'.format(i))
)
# print(input_filename, type(input_filename), len(input_filename))
# coco_filename = input_filename.replace('\\' , "/")
# print(coco_filename)
# regex_match = regex.match(input_filename)
# # Add images to dataset.image_info structure
# if regex_match:
# coco_id = int(regex_match.group(1))
# print(i, input_image_meta[:8],' ', input_filename, ' coco_id : ',coco_id)
# self.add_image(
# "coco",
# image_id=i,
# path = input_filename,
# height=input_image_meta[1],
# width= input_image_meta[2],
# heatmap_path=heatmap_path
# )
# image_ids.append(i)
# # annotations=coco.loadAnns(coco.getAnnIds(imgIds=[i], catIds=class_ids, iscrowd=None)))
# print(' number of images : ', len(image_ids))<jupyter_output><empty_output><jupyter_text>#### Define data generator<jupyter_code>from mrcnn.datagen_fcn import fcn_data_generator, fcn_data_gen_simulate
##--------------------------------------------------------------------------------
## Data generators
##--------------------------------------------------------------------------------
generator = fcn_data_generator(dataset, mrcnn_config, shuffle=True,
batch_size=mrcnn_config.BATCH_SIZE)
# val_generator = data_generator(dataset_val, mrcnn_model.config, shuffle=True,
# batch_size=mrcnn_config.BATCH_SIZE,
# augment=False)
train_batch_x, train_batch_y = next(generator)
for i in train_batch_x:
print(type(i), i.shape, i.dtype)
for i in train_batch_y:
print(type(i), i.shape)
print(train_batch_y)
# imgmeta_idx = mrcnn_model.keras_model.input_names.index('input_image_meta')
from mrcnn.visualize import plot_2d_heatmap_compare
import mrcnn.utils as utils
# def plot_2d_heatmap_compare( Z1, Z2, boxes, image_idx, class_ids, size = None,
# num_bboxes = 0, class_names=None, scale = 1,
# title = '2D Comparison between 2d heatmaps w/ bboxes'):
train_batch_x, train_batch_y = fcn_data_gen_simulate(dataset, mrcnn_config, [210])
img_meta = train_batch_x[1]
class_names = dataset.class_names
print(img_meta.shape)
for img_idx in range(mrcnn_config.BATCH_SIZE):
print(img_meta[img_idx])
image_id = img_meta[img_idx,0]
image = dataset.load_image(image_id)
timg = train_batch_x[0][img_idx]
print(' image from train_batch_x :', timg.shape, timg.dtype, np.min(timg), np.max(timg))
print(' image from dataset load :', image.shape, image.dtype, np.min(image), np.max(image))
## Display image, and mean-subtracted image
visualize.display_image_bw(image)
# visualize.display_images([image, train_batch_x[0][img_idx]], cols = 2, width = 18)
## display masks and bounding boxes
mask, class_ids = dataset.load_mask(image_id)
print('class_ids:', class_ids)
bbox = utils.extract_bboxes(mask)
visualize.display_top_masks(image, mask, class_ids, dataset.class_names)
visualize.display_instances_with_mask(image, bbox, mask, class_ids, dataset.class_names, figsize =(8,8))
## display ground truth heatmaps
gt_class_ids = np.unique(train_batch_x[5][img_idx,:,:,4]).astype(int).tolist()
print('Image : {} GT ClassIds: {}'.format(img_idx, gt_class_ids))
# visualize.plot_2d_heatmap_no_bboxes(train_batch_x[4],img_idx, columns = 4, class_names=class_names)
visualize.plot_2d_heatmap_no_bboxes(train_batch_x[4], img_idx,class_ids = gt_class_ids, columns = 4, class_names=class_names)
## display predicted heatmaps
pr_class_ids = np.unique(train_batch_x[3][img_idx,:,:,4]).astype(int).tolist()
print('Image : {} PR ClassIds: {}'.format(img_idx, pr_class_ids))
# visualize.plot_2d_heatmap_no_bboxes(train_batch_x[2],img_idx, columns = 4, class_names=class_names)
visualize.plot_2d_heatmap_no_bboxes(train_batch_x[2], img_idx,class_ids = gt_class_ids, columns = 4, class_names=class_names)
from mrcnn.utils import unresize_image
from matplotlib import cm
import matplotlib.pyplot as plt
import numpy as np
from PIL import Image
import skimage.util
import skimage.io
paths.display()
# ds = os.path.join(paths.COCO_DATASET_PATH,'val2014/COCO_val2014_000000017031.jpg')
ds = dataset.image_info[210]['path']
print(ds)
im = skimage.io.imread(ds)
print(im.shape, im.dtype, np.min(im), np.max(im))
# im = skimage.io.imread(ds)
# im = Image.open(ds)
print('image : ', image.shape, image.dtype)
image_bw = Image.fromarray(image).convert(mode='L')
# print('image_bw : ', image_bw.shape, image_bw.dtype)
molded_image, window, scale, padding = utils.resize_image(
image,
min_dim=mrcnn_config.IMAGE_MIN_DIM,
max_dim=mrcnn_config.IMAGE_MAX_DIM,
padding=mrcnn_config.IMAGE_PADDING)
print('molded_image : ', molded_image.shape, molded_image.dtype)
print(' image meta :', train_batch_x[1][0])
unresized_image = unresize_image(molded_image,train_batch_x[1][0])
print('unresized_image : ', unresized_image.shape, unmolded_image.dtype)
unresized_image_bw = np.asarray(Image.fromarray(unresized_image).convert(mode='L'))
print('unresized_image_bw: ',unresized_image_bw.shape, unresized_image_bw.dtype)
unmolded_image = utils.unmold_image(molded_image, mrcnn_config)
print('unmolded_image : ', unmolded_image.shape, unmolded_image.dtype)
unmolded_image_bw = np.asarray(Image.fromarray(unmolded_image).convert(mode='L'))
print('unmolded_image_bw : ', unmolded_image_bw.shape, unmolded_image_bw.dtype)
unmolded_heatmap = unresize_image(train_batch_x[2][0,:,:,24],train_batch_x[1][0], upscale = mrcnn_config.HEATMAP_SCALE_FACTOR)
print('unmolded_heatmap : ', unmolded_heatmap.shape, unmolded_heatmap.dtype)
print(train_batch_x[2][0,:,:,24].dtype)
import matplotlib.pyplot as plt
print('Orig image shape: ', image.shape)
print('Image window is : ', window)
print('Scale is : ', scale)
print(train_batch_x[1][0,:8])
print('Padding is :', padding)
fig = plt.figure(frameon=False, figsize=(10,10))
im1 = plt.imshow(molded_image)
fig = plt.figure(frameon=False, figsize=(10,10))
im1 = plt.imshow(unresized_image)
fig = plt.figure(frameon=False, figsize=(10,10))
im1 = plt.imshow(unresized_image_bw, cmap=plt.cm.gray)
fig = plt.figure(frameon=False, figsize=(10,10))
im1 = plt.imshow(unmolded_image)
fig = plt.figure(frameon=False, figsize=(10,10))
im1 = plt.imshow(unmolded_image_bw, cmap=plt.cm.gray)
# fig = plt.figure(frameon=False, figsize=(10,10))
# im1 = plt.imshow(unmolded_image_bw)
# fig = plt.figure(frameon=False, figsize=(10,10))
# im1 = plt.imshow(unmolded_heatmap,cmap = cm.YlOrRd)
# fig = plt.figure(frameon=False, figsize=(10,10))
# im1 = plt.imshow(unmolded_image , cmap=plt.cm.gray)
# im1 = plt.imshow(unmolded_heatmap, alpha = 0.6,cmap=cm.YlOrRd)
# fig = plt.figure(frameon=False, figsize=(10,10))
# im1 = plt.imshow(image_bw , cmap=plt.cm.gray)
# im1 = plt.imshow(unmolded_heatmap, alpha = 0.6,cmap=cm.YlOrRd)
plt.show()
from mrcnn.visualize import display_heatmaps, display_heatmaps_fcn, display_heatmaps_mrcnn
# visualize.display_image_bw(image)
display_heatmaps(train_batch_x, 0, hm = 'pr', config = mrcnn_config, class_ids = [0,1,2,3,4,5,24], class_names = dataset.class_names)
# display_heatmaps(train_batch_x, 0, hm = 'gt', config = mrcnn_config, class_ids = [0,1,2,3,4,5,24], class_names = dataset.class_names)<jupyter_output><empty_output>
|
permissive
|
/notebooks/dev - dataset - COCO Dataset information- Load subset of classes.ipynb
|
amirunpri2018/mrcnn3
| 24 |
<jupyter_start><jupyter_text>### Fit a Plane into the map points using a RANSAC scheme<jupyter_code>import numpy as np
import numpy.linalg as la
eps = 0.00001
def svd(A):
u, s, vh = la.svd(A)
S = np.zeros(A.shape)
S[:s.shape[0], :s.shape[0]] = np.diag(s)
return u, S, vh
def fit_plane_LSE(points):
# points: Nx4 homogeneous 3d points
# return: 1d array of four elements [a, b, c, d] of
# ax+by+cz+d = 0
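    # The best-fit plane minimizes ||points @ plane|| subject to ||plane|| = 1, so it is
    # the right singular vector associated with the smallest singular value of `points`.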
assert points.shape[0] >= 3 # at least 3 points needed
U, S, Vt = svd(points)
null_space = Vt[-1, :]
return null_space
def get_point_dist(points, plane):
# return: 1d array of size N (number of points)
dists = np.abs(points @ plane) / np.sqrt(plane[0]**2 + plane[1]**2 + plane[2]**2)
return dists
def fit_plane_LSE_RANSAC(points, iters=1000, inlier_thresh=0.05, num_support_points=None, return_outlier_list=False):
# points: Nx4 homogeneous 3d points
# num_support_points: If None perform LSE fit with all points, else pick `num_support_points` random points for fitting
# return:
# plane: 1d array of four elements [a, b, c, d] of ax+by+cz+d = 0
# inlier_list: 1d array of size N of inlier points
max_inlier_num = -1
max_inlier_list = None
N = points.shape[0]
assert N >= 3
for i in range(iters):
chose_id = np.random.choice(N, 3, replace=False)
chose_points = points[chose_id, :]
tmp_plane = fit_plane_LSE(chose_points)
dists = get_point_dist(points, tmp_plane)
tmp_inlier_list = np.where(dists < inlier_thresh)[0]
tmp_inliers = points[tmp_inlier_list, :]
num_inliers = tmp_inliers.shape[0]
if num_inliers > max_inlier_num:
max_inlier_num = num_inliers
max_inlier_list = tmp_inlier_list
#print('iter %d, %d inliers' % (i, max_inlier_num))
final_points = points[max_inlier_list, :]
if num_support_points:
max_support_points = np.min((num_support_points, final_points.shape[0]))
support_idx = np.random.choice(np.arange(0, final_points.shape[0]), max_support_points, replace=False)
support_points = final_points[support_idx, :]
else:
support_points = final_points
print(final_points.shape)
plane = fit_plane_LSE(support_points)
fit_variance = np.var(get_point_dist(final_points, plane))
print('RANSAC fit variance: %f' % fit_variance)
print(plane)
dists = get_point_dist(points, plane)
select_thresh = inlier_thresh * 1
inlier_list = np.where(dists < select_thresh)[0]
if not return_outlier_list:
return plane, inlier_list
else:
outlier_list = np.where(dists >= select_thresh)[0]
return plane, inlier_list, outlier_list
map_points_h = cv2.convertPointsToHomogeneous(map_points).reshape(-1, 4)
import time
t0 = time.perf_counter()
plane, inlier_list = fit_plane_LSE_RANSAC(map_points_h, iters=1000, inlier_thresh=1, num_support_points=3000, return_outlier_list=False)
dt = time.perf_counter() - t0
print(inlier_list.shape, dt)<jupyter_output>(29715, 4)
RANSAC fit variance: 0.047951
[-3.94451044e-04 7.84094925e-05 -4.98685012e-02 9.98755711e-01]
(29230,) 0.9602169501595199
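As a quick sanity check of the least-squares fit used above, a small synthetic test, reusing the helper functions defined in this notebook, could look like this:

```python
# points roughly on the plane z = 2, i.e. 0*x + 0*y + 1*z - 2 = 0
rng = np.random.RandomState(0)
xy = rng.uniform(-5, 5, size=(500, 2))
z = 2.0 + rng.normal(0, 0.01, size=(500, 1))
pts_h = np.hstack([xy, z, np.ones((500, 1))])

est_plane, est_inliers = fit_plane_LSE_RANSAC(pts_h, iters=100, inlier_thresh=0.05)
# normalized so that ||(a, b, c)|| = 1; expect roughly [0, 0, 1, -2] up to sign
print(est_plane / np.linalg.norm(est_plane[:3]))
```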
<jupyter_text>### Invert plane if normal points in the "wrong" direction
During fitting, the plane can end up with its normal pointing either towards the world origin (parameter d > 0) or in the opposite direction (d < 0). The following calculations assume the plane normal points towards the world origin, so if the plane is fitted with d < 0, the plane normal has to be inverted, that is "flipped" by 180° along its own axis.<jupyter_code>if plane[-1] < 0:
plane *= -1<jupyter_output><empty_output><jupyter_text>### Project map points onto plane along the plane normal<jupyter_code># project all map points onto the plane
def plane_to_hessian(plane):
"""Convert plane to Hessian normal form (n.x + p = 0)"""
a, b, c, d = plane
nn = np.sqrt(a**2+b**2+c**2)
n = np.array([a/nn, b/nn, c/nn])
p = d/nn
return n, p
def project_points(plane, points):
"""Project points with shape (-1, 3) onto plane (given as coefficients a, b, c, d with ax+by+cz+d=0)."""
n, p = plane_to_hessian(plane)
p_orig = -n*p
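    # project each point by removing its component along the unit normal n, measured
    # from a point p_orig on the plane:  p_proj = p - ((p - p_orig) . n) * n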
points_proj = points.reshape(-1, 3) - (np.sum(n*(points.reshape(-1, 3) - p_orig.reshape(1, 3)), axis=1)*n.reshape(3, 1)).T
return points_proj
map_points_proj = project_points(plane, map_points[inlier_list])
map_points_proj<jupyter_output><empty_output><jupyter_text>### Transform map points from world coordinates to plane coordinates
Choose a random orthonormal base inside the plane. Find an affine transformation M that maps 3D world coordinates of the projected map points (projected onto the plane) to 2D plane coordinates. These plane coordinates are defined with respect to the orthonormal base. Based on this stackoverflow question: https://stackoverflow.com/a/18522281<jupyter_code># transform points from world to plane coordinates
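# Idea of the construction below: pick an origin O on the plane, two orthonormal in-plane
# directions u, v and the plane normal n. Stacking the homogeneous quadruplet
# S = [O, O+u, O+v, O+n] and the canonical frame D = [0, e1, e2, e3], the affine map M
# satisfies M @ S = D, hence M = D @ inv(S); applying M expresses points in plane coordinates.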
def get_world2plane_transformation(plane, points, return_base=True):
"""Yields an affine transformation M from 3D world to 2D plane coordinates.
Args:
        plane (`numpy.ndarray`): Shape (4,). Plane coefficients [a, b, c, d] which fulfill
ax + by + cz + d = 0.
points (`numpy.ndarray`): Shape (-1, 3). 3D points on the plane in (x, y, z) world coordinates.
        return_base (`bool`): Whether to return the orthonormal base or not.
Returns:
M (`numpy.ndarray`): Shape (4, 4). Affine transformation matrix which maps points on the plane
from 3D world coordinates to 2D points with respect to a randomly chosen orthonormal base on the plane.
Compute point2D = M @ point3D to map a 3D point to 2D coordinates. To retrieve a 3D point from 2D
planes coordinates compute point3D = inv(M) @ point2D.
        base (`numpy.ndarray`): Shape (3, 4). Right-handed orthonormal base in which 2D plane points are
expressed. Column vectors of the array correspond to the origin point O of the base, the first and
second base vector u, v and the third base vector n which is the plane normal.
"""
# pick a random point on the plane as origin and another one to form first base vector
point_idx = np.arange(0, points.shape[0])
np.random.seed(0)
plane_O, plane_U = points[np.random.choice(point_idx, 2, replace=False), :]
u = (plane_U - plane_O)/np.linalg.norm(plane_U - plane_O)
n, _ = plane_to_hessian(plane) # plane normal
    # compute second base vector (completes the right-handed base together with u and the plane normal n)
v = np.cross(u, n)/np.linalg.norm(np.cross(u, n))
# get end points of base vectors
U = plane_O + u
V = plane_O + v
N = plane_O + n
# form base quadruplet
D = np.array([[0, 1, 0, 0],
[0, 0, 1, 0],
[0, 0, 0, 1],
[1, 1, 1, 1]])
# form transformation matrix S with M * S = D
S = np.stack((np.append(plane_O, 1), np.append(U, 1), np.append(V, 1), np.append(N, 1)), axis=1)
# compute affine transformation M which maps points from world to plane coordinates
M = np.matmul(D, np.linalg.inv(S))
if return_base:
base = np.stack([plane_O.T, u.T, v.T, n.T], axis=1)
return M, base
else:
return M
def map_points_world2plane(points_world, M):
"""Transforms 3D points on a plane to 2D plane coordinates given the transformation matrix M.
Args:
        points_world (`numpy.ndarray`): Shape (-1, 3). 3D points on the plane in (x, y, z) world coordinates.
        M (`numpy.ndarray`): Shape (4, 4). Affine transformation matrix computed with `get_world2plane_transformation`.
Returns:
        points_plane (`numpy.ndarray`): Shape (-1, 2). 2D points on the plane in (xp, yp, z=0) plane coordinates
w.r.t. to a randomly chosen orthonormal base.
"""
points_world_h = cv2.convertPointsToHomogeneous(points_world).reshape(-1, 4)
points_plane_h = (M @ points_world_h.T).T
points_plane = cv2.convertPointsFromHomogeneous(points_plane_h).reshape(-1, 3)
points_plane = points_plane[:, :2]
return points_plane
def map_points_plane2world(points_plane, M):
"""Transforms 2D plane points into 3D world coordinates given the transformation matrix M.
Args:
        points_plane (`numpy.ndarray`): Shape (-1, 2). 2D points on the plane in (xp, yp, z=0) plane coordinates
w.r.t. to a randomly chosen orthonormal base.
        M (`numpy.ndarray`): Shape (4, 4). Affine transformation matrix computed with `get_world2plane_transformation`.
Returns:
        points_world (`numpy.ndarray`): Shape (-1, 3). 3D points on the plane in (x, y, z) world coordinates.
"""
points_plane_tmp = np.zeros((points_plane.shape[0], 3))
points_plane_tmp[:, :2] = points_plane
points_plane_h = cv2.convertPointsToHomogeneous(points_plane_tmp).reshape(-1, 4)
points_world_h = (np.linalg.inv(M) @ points_plane_h.T).T
points_world = cv2.convertPointsFromHomogeneous(points_world_h).reshape(-1, 3)
return points_world
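# Illustrative round-trip check (added sketch, not part of the original notebook): mapping
# points that lie on the plane world -> plane -> world with the same M should reproduce them.
def roundtrip_error(points_world, M):
    """Maximum absolute error after mapping world -> plane -> world coordinates."""
    points_plane = map_points_world2plane(points_world, M)
    points_back = map_points_plane2world(points_plane, M)
    return float(np.max(np.abs(points_back - points_world.reshape(-1, 3))))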
# test functions
points_world = map_points_proj
print(points_world)
M, base = get_world2plane_transformation(plane, points_world)
points_plane = map_points_world2plane(points_world, M)
print(points_plane)
#points_world = map_points_plane2world(points_plane, M) # yields the original points which shows the mapping is correct
#print(points_world)
fig = plt.figure(figsize=(6, 6))
ax = fig.add_subplot(111, projection='3d')
ax.scatter(points_world[:1000, 0], points_world[:1000, 1], points_world[:1000, 2], s=1, c="red")
ax.scatter(points_plane[:1000, 0], points_plane[:1000, 1], s=1, c="green")
ax.set_xlim([-20,20])
ax.set_ylim([-20,20])
ax.set_zlim([0,40])
ax.set_xlabel("x")
ax.set_ylabel("y")
ax.set_zlabel("z")
ax.set_aspect(1.0)
# plot base points (point_O, U, V, N)
ax.scatter(base[0, 0], base[1, 0], base[2, 0], c="orange")
ax.scatter(base[0, 0]+base[0, 1], base[1, 0]+base[1, 1], base[2, 0]+base[2, 1], c="red")
ax.scatter(base[0, 0]+base[0, 2], base[1, 0]+base[1, 2], base[2, 0]+base[2, 2], c="green")
ax.scatter(base[0, 0]+base[0, 3], base[1, 0]+base[1, 3], base[2, 0]+base[2, 3], c="blue")
ax.scatter(0, 0, 0, c="black", s=5)
plt.show()
fig = plt.figure(figsize=(6, 4))
ax = fig.add_subplot(111)
ax.scatter(points_world[:1000, 0], points_world[:1000, 1], s=1, c="red")
ax.scatter(points_plane[:1000, 0], points_plane[:1000, 1], s=1, c="green")
ax.set_xlim([-30,20])
ax.set_ylim([-20,20])
ax.set_xlabel("x")
ax.set_ylabel("y")
ax.set_aspect(1.0)
ax.scatter(base[0, 0], base[1, 0], c="orange")
ax.scatter(base[0, 0]+base[0, 1], base[1, 0]+base[1, 1], c="red")
ax.scatter(base[0, 0]+base[0, 2], base[1, 0]+base[1, 2], c="green")
ax.scatter(base[0, 0]+base[0, 3], base[1, 0]+base[1, 3], c="blue")
ax.scatter(0, 0, c="black", s=5)
# base [[Ox, ux, vx, nx],
# [Oy, uy, vy, ny],
# [Oz, uz, vz, nz]]
ax.grid()
plt.show() <jupyter_output><empty_output><jupyter_text>### Convert camera poses from world to plane coordinates<jupyter_code>def to_twist(R, t):
"""Convert a 3x3 rotation matrix and translation vector (shape (3,))
into a 6D twist coordinate (shape (6,))."""
r, _ = cv2.Rodrigues(R)
twist = np.zeros((6,))
twist[:3] = r.reshape(3,)
twist[3:] = t.reshape(3,)
return twist
def from_twist(twist):
"""Convert a 6D twist coordinate (shape (6,)) into a 3x3 rotation matrix
and translation vector (shape (3,))."""
r = twist[:3].reshape(3, 1)
t = twist[3:].reshape(3, 1)
R, _ = cv2.Rodrigues(r)
return R, t
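# Small illustrative check (added, not part of the original notebook): to_twist and
# from_twist are inverses, so a pose should survive a round trip unchanged.
_R_check = cv2.Rodrigues(np.array([0.1, -0.2, 0.3]))[0]
_t_check = np.array([1.0, 2.0, 3.0])
_R_back, _t_back = from_twist(to_twist(_R_check, _t_check))
assert np.allclose(_R_check, _R_back) and np.allclose(_t_check.reshape(3, 1), _t_back)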
# first compute R_plane, t_plane from given coordinate systems
# see: https://math.stackexchange.com/questions/1125203/finding-rotation-axis-and-angle-to-align-two-3d-vector-bases
def get_plane_pose(plane_O, plane_u, plane_v, plane_n):
"""Compute the plane pose R_plane, t_plane in world coordinates.
Assumes the world coordinate system to have rotation R = I and zero translation.
Args:
plane_O (`numpy.ndarray`): Shape (3,). Origin of the plane coordinate base.
plane_u (`numpy.ndarray`): Shape (3,). First base vector of the plane coordinate system.
plane_v (`numpy.ndarray`): Shape (3,). Second base vector of the plane coordinate system.
plane_n (`numpy.ndarray`): Shape (3,). Third base vector of the plane coordinate system,
corresponds to plane normal.
Returns:
R_plane (`numpy.ndarray`), t_plane (`numpy.ndarray`): Rotation matrix with shape (3, 3) and
translation vector with shape (3,) which describe the pose of the plane coordinate base in
the world coordinate frame.
"""
t_plane = plane_O
world_x = np.array([1, 0, 0])
world_y = np.array([0, 1, 0])
world_z = np.array([0, 0, 1])
R_plane = np.outer(plane_u, world_x) + np.outer(plane_v, world_y) + np.outer(plane_n, world_z)
return R_plane, t_plane
R_plane, t_plane = get_plane_pose(base[:, 0], base[:, 1], base[:, 2], base[:, 3])
print(R_plane, t_plane)<jupyter_output>[[-0.94581808 -0.32339583 -0.02903961]
[ 0.32171873 -0.94547275 0.05077744]
[ 0.04387737 -0.03868363 -0.99828771]] [29.46450064 33.416561 20.58756705]
<jupyter_text>def invert_pose(R, t):
"""Inverts a pose described by 3x3 rotation matrix R and 3d-vector t."""
R_inv = R.T
t_inv = -np.matmul(R.T, t.reshape(3,))
    return R_inv, t_inv

def pose_world_to_plane(R_plane, t_plane, R_pose, t_pose):
""""""
t_plane = t_plane.reshape(3,)
t_pose = t_pose.reshape(3,)
# invert mapping to retrieve a mapping from world to plane coordinates
R_plane_inv, t_plane_inv = invert_pose(R_plane, t_plane)
# now map camera pose from world to plane coordinates
R_pose_plane = R_plane_inv @ R_pose
t_pose_plane = t_pose.reshape(3,) + R_pose @ t_plane_inv
return R_pose_plane, t_pose_plane<jupyter_code>def pose_world_to_plane(R_plane, t_plane, R_pose, t_pose):
"""Map keyframe poses from the world to plane coordinate frame.
Args:
R_plane (`numpy.ndarray`): Shape (3, 3). Rotation matrix of plane coordinate system in world coordinate frame.
t_plane (`numpy.ndarray`): Shape (3,). Translation vector of plane coordinate system in world coordinate frame.
R_pose (`numpy.ndarray`): Shape (3, 3). Rotation matrix of keyframe in world coordinate frame.
t_pose (`numpy.ndarray`): Shape (3,). Translation vector of keyframe in world coordinate frame.
Returns:
R_pose_plane (`numpy.ndarray`), t_pose_plane (`numpy.ndarray`): Rotation matrix with shape (3, 3) and
translation vector with shape (3,) of the keyframe in plane coordinate frame.
"""
t_pose_plane = t_plane.reshape(3,) + R_plane @ t_pose.reshape(3,)
R_pose_plane = R_plane @ R_pose
return R_pose_plane, t_pose_plane
# convert all keyframe poses to plane coordinates
kf_poses_plane = []
for pose in kf_poses:
R_pose, t_pose = from_twist(pose)
R_kf_plane, t_kf_plane = pose_world_to_plane(R_plane, t_plane, R_pose, t_pose)
kf_poses_plane.append((R_kf_plane, t_kf_plane))
R_world = np.eye(3)
t_world = np.zeros((3,))
fig = plt.figure(figsize=(7, 7))
ax = fig.add_subplot(111, projection='3d')
plot_basis(ax, R_world, t_world)
plot_basis(ax, R_plane, t_plane)
R_0, t_0 = from_twist(kf_poses[0])
plot_basis(ax, R_0, t_0.reshape(3,))
R_1, t_1 = from_twist(kf_poses[1])
plot_basis(ax, R_1, t_1.reshape(3,))
R_0, t_0 = kf_poses_plane[0]
plot_basis(ax, R_0, t_0.reshape(3,))
R_1, t_1 = kf_poses_plane[1]
plot_basis(ax, R_1, t_1.reshape(3,))
ax.scatter(base[0, 0], base[1, 0], base[2, 0], c="orange")
ax.scatter(base[0, 0]+base[0, 1], base[1, 0]+base[1, 1], base[2, 0]+base[2, 1], c="red")
ax.scatter(base[0, 0]+base[0, 2], base[1, 0]+base[1, 2], base[2, 0]+base[2, 2], c="green")
ax.scatter(base[0, 0]+base[0, 3], base[1, 0]+base[1, 3], base[2, 0]+base[2, 3], c="blue")
ax.set_xlim([-20, 20])
ax.set_ylim([-20, 40])
ax.set_zlim([0, 40])
ax.set_aspect(1.0)
ax.set_xlabel("x")
ax.set_ylabel("y")
ax.set_zlabel("z")
plt.show()<jupyter_output><empty_output><jupyter_text>### Project image corners onto plane<jupyter_code>w = 1920
h = 1080
fx = 1184.51770
fy = 1183.63810
cx = 978.30778
cy = 533.85598
# corners of the image
img_points = np.array([[0, 0],
[w, 0],
[0, h],
[w, h]])
def unproject(img_points, fx, fy, cx, cy):
"""Unproject image points with shape (-1, 2) to camera coordinates."""
camera_points = np.zeros((img_points.shape[0], 3))
camera_points[:, 0] = (img_points[:, 0]-cx)/fx
camera_points[:, 1] = (img_points[:, 1]-cy)/fy
camera_points[:, 2] = 1.0
return camera_points
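# Illustrative cross-check (added, not part of the original notebook): unproject() is the
# same as multiplying homogeneous pixel coordinates by the inverse camera intrinsic matrix.
K_check = np.array([[fx, 0.0, cx], [0.0, fy, cy], [0.0, 0.0, 1.0]])
rays_check = (np.linalg.inv(K_check) @ np.c_[img_points, np.ones(len(img_points))].T).T
assert np.allclose(rays_check, unproject(img_points, fx, fy, cx, cy))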
camera_points = unproject(img_points, fx, fy, cx, cy)
camera_points # image corners in camera coordinates
# adapted from: https://github.com/zdzhaoyong/Map2DFusion/blob/master/src/Map2DRender.cpp
warped_frames = [None for _ in range(len(kf_frames))]
warped_masks = [None for _ in range(len(kf_frames))]
length_pixel = 0.01
view_min = np.array([1e6, 1e6])
view_max = np.array([-1e6, -1e6])
sizes = np.zeros((len(kf_frames), 2), dtype=int)  # np.int was removed in recent NumPy versions
corners_world = np.zeros((len(kf_frames), 2))
for idx, (pose, frame) in enumerate(zip(kf_poses_plane, kf_frames)):
frame_okay = True
cur_view_min = np.array([1e6, 1e6])
cur_view_max = np.array([-1e6, -1e6])
# generally applicable because the world coordinate base coincides with the first keyframe and plane
    # points will always have a positive z_world coordinate. Further, we ensure that the plane normal points
# towards the world origin.
downlook = np.array([0,0,-1])
# project image corners from camera to plane coordinates
plane_points = np.zeros((camera_points.shape[0], 2))
for i, camera_point in enumerate(camera_points):
axis = pose[0] @ camera_point
if np.dot(axis, downlook) < 0.4:
print("Camera axis is deviating too much from 'down' direction. Skipping to next keyframe.")
frame_okay = False
break
axis = pose[1] - axis*(pose[1][-1]/axis[-1])
plane_points[i, :] = axis[:2]
if not frame_okay:
continue
# expand viewport of current frame
for i, plane_point in enumerate(plane_points):
if plane_point[0] < cur_view_min[0]:
cur_view_min[0] = plane_point[0]
if plane_point[1] < cur_view_min[1]:
cur_view_min[1] = plane_point[1]
if plane_point[0] > cur_view_max[0]:
cur_view_max[0] = plane_point[0]
if plane_point[1] > cur_view_max[1]:
cur_view_max[1] = plane_point[1]
# expand overall viewport if necessary
if cur_view_min[0] < view_min[0]:
view_min[0] = cur_view_min[0]
if cur_view_min[1] < view_min[1]:
view_min[1] = cur_view_min[1]
if cur_view_max[0] > view_max[0]:
view_max[0] = cur_view_max[0]
if cur_view_max[1] > view_max[1]:
view_max[1] = cur_view_max[1]
corners_world[idx, :] = cur_view_min
sizes[idx, :] = ((cur_view_max - cur_view_min)/length_pixel)
dst_points = (plane_points - cur_view_min)/length_pixel
# find homography between camera and ground plane points
transmtx = cv2.getPerspectiveTransform(img_points.astype(np.float32), dst_points.astype(np.float32))
mask = np.full(frame.shape[0:2], 255, dtype=np.uint8)
# warp image & mask
warped_frame = cv2.warpPerspective(frame, transmtx, tuple(sizes[idx, :]), cv2.INTER_CUBIC, cv2.BORDER_REFLECT)
warped_mask = cv2.warpPerspective(mask, transmtx, tuple(sizes[idx, :]), cv2.INTER_NEAREST)
warped_frames[idx] = warped_frame
warped_masks[idx] = warped_mask
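# Illustrative check (added sketch, not part of the original notebook): the homography from
# cv2.getPerspectiveTransform maps the four image corners exactly onto dst_points, i.e. onto
# the last processed frame's footprint on the ground plane expressed in mosaic pixels.
corners_h = np.c_[img_points, np.ones(len(img_points))]
mapped_corners = (transmtx @ corners_h.T).T
mapped_corners = mapped_corners[:, :2] / mapped_corners[:, 2:3]
assert np.allclose(mapped_corners, dst_points, atol=1e-3)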
f, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 4))
ax1.imshow(warped_frames[0][:, :, ::-1])
ax2.imshow(warped_masks[0][:, :])
plt.show()
R_world = np.eye(3)
t_world = np.zeros((3,))
fig = plt.figure(figsize=(7, 7))
ax = fig.add_subplot(111, projection='3d')
plot_basis(ax, R_world, t_world)
plot_basis(ax, R_plane, t_plane)
R_0, t_0 = from_twist(kf_poses[0])
plot_basis(ax, R_0, t_0.reshape(3,))
R_1, t_1 = from_twist(kf_poses[1])
plot_basis(ax, R_1, t_1.reshape(3,))
R_0, t_0 = kf_poses_plane[0]
plot_basis(ax, R_0, t_0.reshape(3,))
R_1, t_1 = kf_poses_plane[1]
plot_basis(ax, R_1, t_1.reshape(3,))
ax.scatter(points_world[:1000, 0], points_world[:1000, 1], points_world[:1000, 2], s=1, c="red")
ax.scatter(points_plane[:1000, 0], points_plane[:1000, 1], s=1, c="green")
ax.scatter(camera_points[:, 0], camera_points[:, 1], camera_points[:, 2], c="cyan")
ax.scatter(plane_points[:, 0], plane_points[:, 1], c="magenta")
ax.scatter(base[0, 0], base[1, 0], base[2, 0], c="orange")
ax.scatter(base[0, 0]+base[0, 1], base[1, 0]+base[1, 1], base[2, 0]+base[2, 1], c="red")
ax.scatter(base[0, 0]+base[0, 2], base[1, 0]+base[1, 2], base[2, 0]+base[2, 2], c="green")
ax.scatter(base[0, 0]+base[0, 3], base[1, 0]+base[1, 3], base[2, 0]+base[2, 3], c="blue")
ax.set_xlim([-20, 20])
ax.set_ylim([-20, 40])
ax.set_zlim([0, 40])
ax.set_aspect(1.0)
ax.set_xlabel("x")
ax.set_ylabel("y")
ax.set_zlabel("z")
plt.show()<jupyter_output><empty_output><jupyter_text>### Stitch frames together using OpenCV Blenders<jupyter_code>corners_images = (corners_world - view_min)/length_pixel
corners_images = corners_images.astype(int)  # np.int was removed in recent NumPy versions
# define ROI of stitched mosaic
bottom_right = np.max(corners_images, axis=0) + np.array([sizes[np.argmax(corners_images, axis=0)[0], 0], sizes[np.argmax(corners_images, axis=0)[1], 1]])
result_roi = np.array([0, 0, bottom_right[0], bottom_right[1]])
print("result ROI:", result_roi)
# compute number of channels
blend_strength = 5
result_roi_area = (result_roi[2] - result_roi[0]) * (result_roi[3] - result_roi[1])
blend_width = np.sqrt(result_roi_area) * blend_strength/100
print(blend_width)<jupyter_output>605.0459052832273
<jupyter_text>#### Multiband Blender<jupyter_code>num_bands = int(np.ceil(np.log(blend_width)/np.log(2)) - 1)
print("Using bands:", num_bands)
blender = cv2.detail_MultiBandBlender(try_gpu=False, num_bands=num_bands)
blender.prepare(result_roi)
for idx, (frame, mask) in enumerate(zip(warped_frames, warped_masks)):
if frame is not None:
blender.feed(frame, mask, tuple(corners_images[idx]))
result, result_mask = blender.blend(None, None)
fig = plt.figure(figsize=(10, 6))
ax = fig.add_subplot(111)
ax.imshow(result[:, :, ::-1])
plt.show()
cv2.imwrite("result.jpg", result)<jupyter_output><empty_output><jupyter_text>#### Feather Blender
(result looks better than with multiband blender)<jupyter_code>blender = cv2.detail_FeatherBlender(sharpness=1./blend_width)
blender.prepare(result_roi)
for idx, (frame, mask) in enumerate(zip(warped_frames, warped_masks)):
if frame is not None:
blender.feed(frame.astype(np.int16), mask, tuple(corners_images[idx]))
result, result_mask = blender.blend(None, None)
fig = plt.figure(figsize=(10, 6))
ax = fig.add_subplot(111)
ax.imshow(result[:, :, ::-1])
plt.show()
fig = plt.figure(figsize=(10, 6))
ax = fig.add_subplot(111)
ax.imshow(result_mask)
plt.show()
cv2.imwrite("result_feather.jpg", result)
# TODO:
# - apply weighted image as in Map2DFusion
# - make background white <jupyter_output><empty_output>
|
no_license
|
/experiments/fit_ground_plane2.ipynb
|
YaoFushan/DroneMapper
| 10 |
<jupyter_start><jupyter_text># stage2<jupyter_code>import pandas as pd
import numpy as np
df_train = pd.read_csv('./data/train.csv')
df_train.head()
df_test = pd.read_csv('./data/test.csv')
df_test.head()<jupyter_output><empty_output><jupyter_text>## Fill in missing Age values<jupyter_code>df_train.isna().sum()
df_train['Age'].fillna(df_train['Age'].mean(), inplace=True)
df_train.head()
df_test['Age'].fillna(df_test['Age'].mean(), inplace=True)
df_test.head()<jupyter_output><empty_output><jupyter_text>## Bin Age into age groups<jupyter_code>df_train.loc[df_train['Age'] < 10, 'Age'] = 0
df_train.loc[(df_train['Age'] >= 10) & (df_train['Age'] < 20), 'Age'] = 1
df_train.loc[(df_train['Age'] >= 20) & (df_train['Age'] < 30), 'Age'] = 2
df_train.loc[(df_train['Age'] >= 30) & (df_train['Age'] < 40), 'Age'] = 3
df_train.loc[(df_train['Age'] >= 40) & (df_train['Age'] < 50), 'Age'] = 4
df_train.loc[df_train['Age'] >= 50, 'Age'] = 5  # use >= so that age 50 itself is also binned
df_train.head()
df_test.loc[df_test['Age'] < 10, 'Age'] = 0
df_test.loc[(df_test['Age'] >= 10) & (df_test['Age'] < 20), 'Age'] = 1
df_test.loc[(df_test['Age'] >= 20) & (df_test['Age'] < 30), 'Age'] = 2
df_test.loc[(df_test['Age'] >= 30) & (df_test['Age'] < 40), 'Age'] = 3
df_test.loc[(df_test['Age'] >= 40) & (df_test['Age'] < 50), 'Age'] = 4
df_test.loc[df_test['Age'] >= 50, 'Age'] = 5  # use >= so that age 50 itself is also binned
df_test.head()<jupyter_output><empty_output><jupyter_text>## Create a FamilySize feature<jupyter_code># number of siblings/spouses + number of parents/children = family size
df_train['FamilySize'] = df_train['SibSp'] + df_train['Parch']
df_train.head()
# number of siblings/spouses + number of parents/children = family size
df_test['FamilySize'] = df_test['SibSp'] + df_test['Parch']
df_test.head()<jupyter_output><empty_output><jupyter_text>## Assemble train/test feature sets (first pass)<jupyter_code>train = df_train[['Survived', 'Sex', 'Age', 'FamilySize']]
test = df_test[['Sex', 'Age', 'FamilySize']]
train.head()<jupyter_output><empty_output><jupyter_text>## Fill in missing Fare values<jupyter_code>print(df_train.isna().sum())
print('='*50)
print(df_test.isna().sum())
df_test['Fare'].fillna(df_test['Fare'].mean(), inplace=True)
df_train['Embarked'].value_counts()
df_train['Sex'].value_counts()<jupyter_output><empty_output><jupyter_text>## Fill in missing Embarked values<jupyter_code>df_train['Embarked'].fillna('S', inplace=True)
df_train.head()
df_test['Embarked'].fillna('S', inplace=True)
df_test.head()
print(df_train.isna().sum())
print('='*50)
print(df_test.isna().sum())<jupyter_output>PassengerId 0
Survived 0
Pclass 0
Name 0
Sex 0
Age 0
SibSp 0
Parch 0
Ticket 0
Fare 0
Cabin 687
Embarked 0
FamilySize 0
dtype: int64
==================================================
PassengerId 0
Pclass 0
Name 0
Sex 0
Age 0
SibSp 0
Parch 0
Ticket 0
Fare 0
Cabin 327
Embarked 0
FamilySize 0
dtype: int64
<jupyter_text>## Assemble train/test feature sets (second pass)<jupyter_code>train = df_train[['Survived', 'Sex', 'Age', 'FamilySize', 'Fare', 'Embarked']]
test = df_test[['Sex', 'Age', 'FamilySize', 'Fare', 'Embarked']]
train.head()<jupyter_output><empty_output><jupyter_text>## Challenge 1<jupyter_code># sex
binary_sex = {'male':0, 'female':1}
df_train['Sex'] = df_train['Sex'].map(binary_sex)
df_test['Sex'] = df_test['Sex'].map(binary_sex)
df_train.head()
# port of embarkation
binary_embarked = {'S':0, 'C':1, 'Q':2}
df_train['Embarked'] = df_train['Embarked'].map(binary_embarked)
df_test['Embarked'] = df_test['Embarked'].map(binary_embarked)
df_train.head()
train = df_train[['Survived', 'Sex', 'Age', 'FamilySize', 'Fare', 'Embarked']]
test = df_test[['Sex', 'Age', 'FamilySize', 'Fare', 'Embarked']]
train.head()<jupyter_output><empty_output><jupyter_text>## Split into X and y<jupyter_code>y_train = train['Survived']
X_train = train.drop(['Survived'], axis=1)
X_test = test<jupyter_output><empty_output><jupyter_text>## Modeling<jupyter_code>from sklearn.tree import DecisionTreeClassifier
dt_clf = DecisionTreeClassifier()
dt_clf.fit(X_train, y_train)
dt_clf.score(X_train, y_train)<jupyter_output><empty_output><jupyter_text>## Prediction<jupyter_code>dt_clf.predict([ [1, 2.0, 0, 100, 0] ])
pred = dt_clf.predict(X_test)
submit = pd.DataFrame({
'PassengerId': df_test['PassengerId'],
'Survived': pred
})
submit.to_csv('./data/submission.csv')<jupyter_output><empty_output>
|
no_license
|
/part2-data-science/week3/.ipynb_checkpoints/stage2-checkpoint.ipynb
|
CoodingPenguin/coala-univ-2
| 12 |
<jupyter_start><jupyter_text># HubSpot - Retrieve communication details
Give Feedback | Bug report
**Tags:** #hubspot #get #read #communication #snippet
**Author:** [Florent Ravenel](https://www.linkedin.com/in/florent-ravenel)
**Last update:** 2023-08-17 (Created: 2023-08-17)
**Description:** This notebook fetches detailed information for a specific communication (LinkedIn, SMS or WhatsApp message). It can be helpful in obtaining further details from a communication ID, which can be acquired by extraction from a contact.
**References:**
- [HubSpot API - Communications](https://developers.hubspot.com/docs/api/crm/communications)
## Input
### Import libraries<jupyter_code>import requests
import naas
from pprint import pprint<jupyter_output><empty_output><jupyter_text>### Setup variables
**Mandatory**
[Get your HubSpot Access token](https://knowledge.hubspot.com/articles/kcs_article/integrations/how-do-i-get-my-hubspot-api-key)
- `hs_access_token`: This variable stores an access token used for accessing the HubSpot API.
- `object_id`: This variable stores the object ID.
**Optional**
- `properties`: This variable stores the list of properties to be retrieved from the communication.<jupyter_code># Mandatory
hs_access_token = naas.secret.get("HS_ACCESS_TOKEN") or "YOUR_HS_ACCESS_TOKEN"
object_id = 38798231910
# Optional
properties = ["hs_communication_channel_type", "hs_communication_body"]<jupyter_output><empty_output><jupyter_text>## Model### Retrieve details<jupyter_code>def retrieve_object_details(
hs_access_token,
object_id,
object_type,
properties=None,
):
# Init
data = []
params = {
"archived": "false"
}
# Requests
if properties:
params["properties"] = properties
headers = {
"Content-Type": "application/json",
"Authorization": f"Bearer {hs_access_token}"
}
url = f"https://api.hubapi.com/crm/v3/objects/{object_type}/{object_id}"
# Response
res = requests.get(url, headers=headers, params=params)
if res.status_code == 200:
data = res.json()
else:
print(res.text)
return data
data = retrieve_object_details(hs_access_token, object_id, "Communications", properties)
data<jupyter_output><empty_output><jupyter_text>## Output### Display result<jupyter_code>if len(data) > 0:
pprint(data.get("properties"))<jupyter_output><empty_output>
|
permissive
|
/HubSpot/HubSpot_Retrieve_communication_details.ipynb
|
jupyter-naas/awesome-notebooks
| 4 |
<jupyter_start><jupyter_text># Programming with Jupyter Notebook: Exercises (Chapter 8)## 8-1
<jupyter_code>cards = ["HA", "H2", "H3", "H4", "H5", "H6", "H7", "H8", "H9", "HJ", "HQ", "HK",
"SA", "S2", "S3", "S4", "S5", "S6", "S7", "S8", "S9", "SJ", "SQ", "SK",
"CA", "C2", "C3", "C4", "C5", "C6", "C7", "C8", "C9", "CJ", "CQ", "CK",
"DA", "D2", "D3", "D4", "D5", "D6", "D7", "D8", "D9", "DJ", "DQ", "DK",
"J"]
# search for an arbitrary card using linear search
def findCard(c, l):
i = 0
while (i < len(l)):
if l[i] == c:
print("発見")
break
i += 1
import random
print(random.sample(cards, 13))
spl = random.sample(cards, 13)
spl
findCard("HA",spl)<jupyter_output><empty_output><jupyter_text>## 8-2二分探索のプログラムを改造して、パラメータで与えた任意のカードがあるかどうか探すプログラム$findCard2()$を作成しなさい<jupyter_code># 二分探索で任意のカードを探す
def findCard2(c, l):
min_idx = 0
    max_idx = len(l) - 1  # use len(l)-1 so mid_idx can never run past the end of the list
while min_idx <= max_idx:
mid_idx = int((min_idx + max_idx) / 2)
if l[mid_idx] == c:
print("発見")
return
if l[mid_idx] > c:
max_idx = mid_idx-1
else:
min_idx = mid_idx+1
    return # not found
spl.sort() # sort beforehand (binary search requires a sorted list)
spl
findCard2("HA",spl)<jupyter_output><empty_output><jupyter_text>## 8-3<jupyter_code>import random
random.choices(cards, k=1000)
# linear search
def findJ(l):
i = 0
while (i < len(l)):
if l[i] == "J":
print("発見")
break
i += 1
# binary search
def findJ2(l):
min_idx = 0
max_idx = len(l)-1
while min_idx <= max_idx:
mid_idx = int((min_idx + max_idx) / 2)
if l[mid_idx] == "J":
print("発見")
return
if l[mid_idx] > "J":
max_idx = mid_idx-1
else:
min_idx = mid_idx+1
    return # not found
f = random.choices(cards, k=50000000)
%time findJ(f)
f.sort()
%time findJ2(f)<jupyter_output>CPU times: user 1 µs, sys: 0 ns, total: 1 µs
Wall time: 4.05 µs
発見
|
no_license
|
/章末問題/chapter08A.ipynb
|
muroran-it/jupyter-programming
| 3 |
<jupyter_start><jupyter_text># 0. Import libraries<jupyter_code>pip install mglearn
import pandas as pd
import numpy as np
import matplotlib as mpl
import matplotlib.pyplot as plt
# plt.style.use('seaborn-whitegrid')
%matplotlib inline
import matplotlib.font_manager as fm
# make Korean fonts render in plots
!apt -qq -y install fonts-nanum
fontpath = '/usr/share/fonts/truetype/nanum/NanumBarunGothic.ttf'
font = fm.FontProperties(fname=fontpath, size=9)
plt.rc('font', family='NanumBarunGothic')
mpl.font_manager._rebuild()
import seaborn as sns
# ML
import sklearn
import scipy as sp
import mglearn
from sklearn.model_selection import train_test_split
# suppress warnings
import warnings
warnings.filterwarnings(action='ignore')<jupyter_output>fonts-nanum is already the newest version (20170925-1).
0 upgraded, 0 newly installed, 0 to remove and 25 not upgraded.
<jupyter_text># 1. Unsupervised Learning### 1-1. Feature scaling<jupyter_code>mglearn.plots.plot_scaling()
<jupyter_output><empty_output><jupyter_text>### 1-2. 주성분 분석(PCA)<jupyter_code>mglearn.plots.plot_pca_illustration()<jupyter_output><empty_output><jupyter_text>### 1-3. 비음수 행렬 분해(NMF)<jupyter_code>mglearn.plots.plot_nmf_illustration()<jupyter_output><empty_output><jupyter_text>### 1-4. k-평균 군집(k-means clustering)<jupyter_code># 알고리즘 작동 단계
mglearn.plots.plot_kmeans_algorithm()
# k-means cluster regions
mglearn.plots.plot_kmeans_boundaries()<jupyter_output><empty_output><jupyter_text>### 1-5. 병합 군집(agglomerative clustering)<jupyter_code>mglearn.plots.plot_agglomerative_algorithm()<jupyter_output><empty_output><jupyter_text>### 1-6. 계층적 군집(hierarchical clustering)<jupyter_code>mglearn.plots.plot_agglomerative()<jupyter_output><empty_output><jupyter_text>### 1-7. DBSCAN<jupyter_code>mglearn.plots.plot_dbscan()<jupyter_output>min_samples: 2 eps: 1.000000 cluster: [-1 0 0 -1 0 -1 1 1 0 1 -1 -1]
min_samples: 2 eps: 1.500000 cluster: [0 1 1 1 1 0 2 2 1 2 2 0]
min_samples: 2 eps: 2.000000 cluster: [0 1 1 1 1 0 0 0 1 0 0 0]
min_samples: 2 eps: 3.000000 cluster: [0 0 0 0 0 0 0 0 0 0 0 0]
min_samples: 3 eps: 1.000000 cluster: [-1 0 0 -1 0 -1 1 1 0 1 -1 -1]
min_samples: 3 eps: 1.500000 cluster: [0 1 1 1 1 0 2 2 1 2 2 0]
min_samples: 3 eps: 2.000000 cluster: [0 1 1 1 1 0 0 0 1 0 0 0]
min_samples: 3 eps: 3.000000 cluster: [0 0 0 0 0 0 0 0 0 0 0 0]
min_samples: 5 eps: 1.000000 cluster: [-1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1]
min_samples: 5 eps: 1.500000 cluster: [-1 0 0 0 0 -1 -1 -1 0 -1 -1 -1]
min_samples: 5 eps: 2.000000 cluster: [-1 0 0 0 0 -1 -1 -1 0 -1 -1 -1]
min_samples: 5 eps: 3.000000 cluster: [0 0 0 0 0 0 0 0 0 0 0 0]
<jupyter_text>### 1-8. <jupyter_code>1<jupyter_output><empty_output><jupyter_text># 2. 유방암 데이터세트<jupyter_code>from sklearn.datasets import load_breast_cancer<jupyter_output><empty_output><jupyter_text>### 2-1. 데이터 전처리, 스케일 조정<jupyter_code>cancer = load_breast_cancer()
cancer.keys()
X_train, X_test, y_train, y_test = train_test_split(cancer['data'], cancer['target'], random_state=1)
print(X_train.shape)
print(X_test.shape)
from sklearn.preprocessing import MinMaxScaler
scaler = MinMaxScaler()
scaler.fit(X_train)
# 데이터 변환
X_train_scaled = scaler.transform(X_train)
# 스케일이 조정된 후 데이터셋의 속성 출력
print('변환 전 크기: {}'.format(X_train.shape))
print('변환된 후 크기: {}'.format(X_train_scaled.shape))
print('- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - ')
print('스케일 조정 전 특성별 최솟값:\n {}\n'.format(X_train.min(axis=0)))
print('스케일 조정 전 특성별 최댓값:\n {}\n'.format(X_train.max(axis=0)))
print('스케일 조정 후 특성별 최솟값:\n {}\n'.format(X_train_scaled.min(axis=0)))
print('스케일 조정 후 특성별 최댓값:\n {}'.format(X_train_scaled.max(axis=0)))
X_test_scaled = scaler.transform(X_test)
print('변환 전 크기: {}'.format(X_test.shape))
print('변환된 후 크기: {}'.format(X_test_scaled.shape))
print('- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - ')
print('스케일 조정 전 특성별 최솟값:\n {}\n'.format(X_test.min(axis=0)))
print('스케일 조정 전 특성별 최댓값:\n {}\n'.format(X_test.max(axis=0)))
print('스케일 조정 후 특성별 최솟값:\n {}\n'.format(X_test_scaled.min(axis=0)))
print('스케일 조정 후 특성별 최댓값:\n {}'.format(X_test_scaled.max(axis=0)))<jupyter_output>변환 전 크기: (143, 30)
변환된 후 크기: (143, 30)
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
스케일 조정 전 특성별 최솟값:
[7.691e+00 1.038e+01 4.834e+01 1.704e+02 6.828e-02 3.116e-02 0.000e+00
0.000e+00 1.365e-01 4.996e-02 1.115e-01 3.871e-01 8.484e-01 7.228e+00
2.866e-03 3.746e-03 0.000e+00 0.000e+00 7.882e-03 1.087e-03 8.678e+00
1.420e+01 5.449e+01 2.236e+02 8.774e-02 5.131e-02 0.000e+00 0.000e+00
1.565e-01 5.504e-02]
스케일 조정 전 특성별 최댓값:
[2.722e+01 3.381e+01 1.821e+02 2.250e+03 1.425e-01 3.454e-01 3.754e-01
1.878e-01 2.906e-01 9.744e-02 1.292e+00 2.612e+00 1.012e+01 1.587e+02
1.604e-02 1.006e-01 3.038e-01 3.322e-02 7.895e-02 1.220e-02 3.312e+01
4.178e+01 2.208e+02 3.216e+03 2.098e-01 1.058e+00 1.252e+00 2.688e-01
6.638e-01 2.075e-01]
스케일 조정 후 특성별 최솟값:
[ 0.0336031 0.0226581 0.03144219 0.01141039 0.14128374 0.04406704
0. 0. 0.1540404 -0.00615249 -0.00137796 0.00594501
0.00430665 0[...]<jupyter_text>### 2-2. 지도 학습에서 데이터 전처리 효과
*MinMaxScaler, SVC*<jupyter_code># 스케일 조정 전 원본 SVC 모델
from sklearn.svm import SVC
X_train, X_test, y_train, y_test = train_test_split(cancer['data'], cancer['target'], random_state=0)
svm = SVC(C=100)
svm.fit(X_train, y_train)
print('훈련 세트 정확도: {:.3f}'.format(svm.score(X_train, y_train)))
print('테스트 세트 정확도: {:.3f}'.format(svm.score(X_test, y_test)))
# MinMax스케일 조정
scaler = MinMaxScaler()
X_train_scaled = scaler.fit_transform(X_train)
X_test_scaled = scaler.transform(X_test)
# 조정된 세트로 SVM 학습
svm.fit(X_train_scaled, y_train)
# 정확도 테스트
print('SVM 훈련 정확도: {:.3f}'.format(svm.score(X_train_scaled, y_train)))
print('SVM 테스트 정확도: {:.3f}'.format(svm.score(X_test_scaled, y_test)))<jupyter_output>SVM 훈련 정확도: 1.000
SVM 테스트 정확도: 0.965
<jupyter_text>### 2-3. PCA를 적용하여 데이터셋 시각화<jupyter_code># 고차원 데이터를 시각화해보기
## PCA 전임
fig, axes = plt.subplots(15, 2, figsize=(10, 20))
malignant = cancer.data[cancer.target == 0]
benign = cancer.data[cancer.target == 1]
ax = axes.ravel()
for i in range(30):
_, bins = np.histogram(cancer.data[:, i], bins=50)
ax[i].hist(malignant[:, i], bins=bins, color=mglearn.cm3(0), alpha=.5)
ax[i].hist(benign[:, i], bins=bins, color=mglearn.cm3(2), alpha=.5)
ax[i].set_title(cancer.feature_names[i])
ax[i].set_yticks(())
ax[0].set_xlabel("특성 크기")
ax[0].set_ylabel("빈도")
ax[0].legend(["악성", "양성"], loc="best")
fig.tight_layout()
# 데이터 스케일 조정 - StandardScaler
## 평균 0, 분산 1
from sklearn.preprocessing import StandardScaler
scaler = StandardScaler()
X_scaled = scaler.fit_transform(cancer['data'])
# PCA
from sklearn.decomposition import PCA
#데이터의 처음 두 개의 주성분만 유지
pca = PCA(n_components=2)
# PCA 모델 만들기
pca.fit(X_scaled)
# 처음 두 개의 주성분으로 데이터 변환
X_pca = pca.transform(X_scaled)
print('원본 데이터 형태: {}'.format(str(X_scaled.shape)))
print('축소 데이터 형태: {}'.format(str(X_pca.shape)))
plt.figure(figsize=(8,8))
mglearn.discrete_scatter(X_pca[:, 0], X_pca[:, 1], cancer['target'])
plt.legend(['악성', '양성'])
plt.gca().set_aspect('equal')
plt.xlabel('첫 번째 주성분')
plt.ylabel('두 번째 주성분')
plt.show()
print('PCA 주성분 형태: {}'.format(pca.components_.shape))
print('PCA 주성분:\n{}'.format(pca.components_))
plt.matshow(pca.components_, cmap='viridis')
plt.yticks([0, 1], ['첫 번째 주성분', '두 번째 주성분'])
plt.xticks(range(len(cancer['feature_names'])), cancer['feature_names'], rotation=60, ha='left')
plt.colorbar()
plt.xlabel('특성')
plt.ylabel('주성분')<jupyter_output><empty_output><jupyter_text># 3. Blob<jupyter_code>from sklearn.datasets import make_blobs<jupyter_output><empty_output><jupyter_text>### 3-1. 데이터 스케일 조정
*훈련 데이터와 테스트 데이터가 각각의 통계값으로 조정됐을 때 발생하는 상황*<jupyter_code>X, _ = make_blobs(n_samples=50, centers=5, random_state=4, cluster_std=2)
# 폴드아웃샘플 생성
X_train, X_test = train_test_split(X, random_state=5, test_size=.1)
# 훈련 세트와 테스트 세트의 산점도
fig, axes = plt.subplots(1, 3, figsize=(13, 4))
axes[0].scatter(X_train[:, 0], X_train[:, 1],
c=mglearn.cm2.colors[0], label="훈련 세트", s=60)
axes[0].scatter(X_test[:, 0], X_test[:, 1], marker='^',
c=mglearn.cm2.colors[1], label="테스트 세트", s=60)
axes[0].legend(loc='upper left')
axes[0].set_title("원본 데이터")
# MinMaxScaler를 사용해 스케일을 조정
scaler = MinMaxScaler()
scaler.fit(X_train)
X_train_scaled = scaler.transform(X_train)
X_test_scaled = scaler.transform(X_test)
# 스케일이 조정된 데이터의 산점도
axes[1].scatter(X_train_scaled[:, 0], X_train_scaled[:, 1],
c=mglearn.cm2.colors[0], label="훈련 세트", s=60)
axes[1].scatter(X_test_scaled[:, 0], X_test_scaled[:, 1], marker='^',
c=mglearn.cm2.colors[1], label="테스트 세트", s=60)
axes[1].set_title("스케일 조정된 데이터")
# 테스트 세트의 스케일을 따로 조정
test_scaler = MinMaxScaler()
test_scaler.fit(X_test)
X_test_scaled_badly = test_scaler.transform(X_test)
# 잘못 조정된 데이터의 산점도 생성
axes[2].scatter(X_train_scaled[:, 0], X_train_scaled[:, 1],
c=mglearn.cm2.colors[0], label="training set", s=60)
axes[2].scatter(X_test_scaled_badly[:, 0], X_test_scaled_badly[:, 1],
marker='^', c=mglearn.cm2.colors[1], label="test set", s=60)
axes[2].set_title("잘못 조정된 데이터")
for ax in axes:
ax.set_xlabel("특성 0")
ax.set_ylabel("특성 1")
fig.tight_layout()
plt.show()<jupyter_output><empty_output><jupyter_text>### 3-2. k-means clustering<jupyter_code>from sklearn.cluster import KMeans
X, y = make_blobs(random_state=1)
# 군집 모델 생성
kmeans = KMeans(n_clusters=3)
kmeans.fit(X)
# 레이블 확인
print('클러스터 레이블:\n{}'.format(kmeans.labels_))
# 예측하기, 기존 모델의 학습 결과를 변경하지 않음
print(kmeans.predict(X))
mglearn.discrete_scatter(X[:, 0], X[:, 1], kmeans.labels_, markers='.')
mglearn.discrete_scatter(kmeans.cluster_centers_[:,0], kmeans.cluster_centers_[:,1], [0,1,2],markers='^', markeredgewidth=2)
fig, axes = plt.subplots(1, 2, figsize=(10,5))
# 클러스터 2개
kmeans = KMeans(n_clusters=2)
kmeans.fit(X)
assignments = kmeans.labels_
# 시각화
mglearn.discrete_scatter(X[:, 0], X[:, 1], assignments, ax=axes[0])
# 클러스터 5개
kmeans = KMeans(n_clusters=5)
kmeans.fit(X)
assignments = kmeans.labels_
# 시각화
mglearn.discrete_scatter(X[:, 0], X[:, 1], assignments, ax=axes[1])
# 클러스터 실패 사례 1
## 중심에서 떨어진 데이터도 포함
X_varied, y_varied = make_blobs(n_samples=200,
cluster_std=[1.0, 2.5, 0.5],
random_state=170)
y_pred = KMeans(n_clusters=3, random_state=0).fit_predict(X_varied)
f, ax = plt.subplots(1, 2, figsize=(10, 5))
mglearn.discrete_scatter(X_varied[:, 0], X_varied[:, 1], ax=ax[0])
mglearn.discrete_scatter(X_varied[:, 0], X_varied[:, 1], y_pred, ax=ax[1])
ax[1].legend(["클러스터 0", "클러스터 1", "클러스터 2"], loc='best')
for a in ax:
a.set_xlabel("특성 0")
a.set_ylabel("특성 1")
# 클러스터 실패 사례 2
## 모든 방향을 중요시
# 무작위로 클러스터 데이터 생성
X, y = make_blobs(random_state=170, n_samples=600)
rng = np.random.RandomState(74)
# 데이터가 길게 늘어지도록 변경
transformation = rng.normal(size=(2, 2))
X = np.dot(X, transformation)
# 세 개의 클러스터로 데이터에 KMeans 알고리즘을 적용
kmeans = KMeans(n_clusters=3)
kmeans.fit(X)
y_pred = kmeans.predict(X)
# 클러스터 할당과 클러스터 중심을 나타냄
mglearn.discrete_scatter(X[:, 0], X[:, 1], kmeans.labels_, markers='.')
mglearn.discrete_scatter(
kmeans.cluster_centers_[:, 0], kmeans.cluster_centers_[:, 1], [0, 1, 2],
markers='^', markeredgewidth=2)
plt.xlabel("특성 0")
plt.ylabel("특성 1")<jupyter_output><empty_output><jupyter_text>### 3-3. 병합 군집(agglomerativeClustering)<jupyter_code>from sklearn.cluster import AgglomerativeClustering
X, y = make_blobs(random_state=1)
agg = AgglomerativeClustering(n_clusters=3)
assignment = agg.fit_predict(X)
mglearn.discrete_scatter(X[:, 0], X[:, 1], assignment)
plt.legend(['c1', 'c2', 'c3'])
plt.xlabel('특성 0')
plt.ylabel('특성 1')
# SciPy에서 ward 군집 함수와 덴드로그램 함수를 임포트
from scipy.cluster.hierarchy import dendrogram, ward
X, y = make_blobs(random_state=0, n_samples=12)
# 데이터 배열 X 에 ward 함수를 적용
# SciPy의 ward 함수는 병합 군집을 수행할 때 생성된 거리 정보가 담긴 배열을 리턴
linkage_array = ward(X)
# 클러스터 간의 거리 정보가 담긴 linkage_array를 사용해 덴드로그램 생성
dendrogram(linkage_array)
# 두 개와 세 개의 클러스터를 구분하는 커트라인을 표시
ax = plt.gca()
bounds = ax.get_xbound()
ax.plot(bounds, [7.25, 7.25], '--', c='k')
ax.plot(bounds, [4, 4], '--', c='k')
ax.text(bounds[1], 7.25, ' 두 개 클러스터', va='center', fontdict={'size': 15})
ax.text(bounds[1], 4, ' 세 개 클러스터', va='center', fontdict={'size': 15})
plt.xlabel("샘플 번호")
plt.ylabel("클러스터 거리")<jupyter_output><empty_output><jupyter_text>### 3-4. DBSCAN<jupyter_code># 작은 데이터에 적합하지 않은 기본값으로 인해 모두 노이즈로 구분됨 (-1)
from sklearn.cluster import DBSCAN
X, y = make_blobs(random_state=0, n_samples=12)
dbscan = DBSCAN()
clusters = dbscan.fit_predict(X)
print('클러스터 레이블:\n{}'.format(clusters))
<jupyter_output><empty_output><jupyter_text># 4. LFW### 4-1. PCA 화이트닝<jupyter_code>from sklearn.datasets import fetch_lfw_people
people = fetch_lfw_people(min_faces_per_person=20, resize=0.7)
image_shape = people.images[0].shape
fig, axes = plt.subplots(2, 5, figsize=(15, 8),
subplot_kw={'xticks': (), 'yticks': ()})
for target, image, ax in zip(people.target, people.images, axes.ravel()):
ax.imshow(image)
ax.set_title(people.target_names[target])
print('people.images.shape: {}'.format(people.images.shape))
print('클래스 수: {}'.format(len(people.target_names)))
# 각 타깃이 나타난 횟수 계산
counts = np.bincount(people.target)
# 타깃별 이름과 횟수 출력
for i, (count, name) in enumerate(zip(counts, people.target_names)):
print("{0:25} {1:3}".format(name, count), end=' ')
if (i + 1) % 3 == 0:
print()
mask = np.zeros(people.target.shape, dtype=np.bool)
for target in np.unique(people.target):
mask[np.where(people.target == target)[0][:50]] = 1
X_people = people.data[mask]
y_people = people.target[mask]
# 0~255 사이의 흑백 이미지의 픽셀 값을 0~1 사이로 스케일 조정
# MinMaxScaler를 적용하는 것과 거의 동일하다고 함
X_people = X_people / 255.
from sklearn.neighbors import KNeighborsClassifier
# 데이터를 훈련 세트와 테스트 세트로 나눔
X_train, X_test, y_train, y_test = train_test_split(
X_people, y_people, stratify=y_people, random_state=0)
# 이웃 개수를 한 개로 하여 KNeighborsClassifier 모델을 만들기
knn = KNeighborsClassifier(n_neighbors=1)
knn.fit(X_train, y_train)
print("1-최근접 이웃의 테스트 세트 점수: {:.2f}".format(knn.score(X_test, y_test)))
mglearn.plots.plot_pca_whitening()
pca = PCA(n_components=100, whiten=True, random_state=0).fit(X_train)
X_train_pca = pca.transform(X_train)
X_test_pca = pca.transform(X_test)
print('X_train_pca.shape: {}'.format(X_train_pca.shape))
knn = KNeighborsClassifier(n_neighbors=1)
knn.fit(X_train_pca, y_train)
print('훈련 세트 정확도: {:.2f}'.format(knn.score(X_train_pca, y_train)))
print('테스트 세트 정확도: {:.2f}'.format(knn.score(X_test_pca, y_test)))
print('pca.components_.shape: {}'.format(pca.components_.shape))
fig, axes = plt.subplots(3, 5, figsize=(15, 12),
subplot_kw={'xticks': (), 'yticks': ()})
for i, (component, ax) in enumerate(zip(pca.components_, axes.ravel())):
ax.imshow(component.reshape(image_shape), cmap='viridis')
ax.set_title("주성분 {}".format((i + 1)))
mglearn.plots.plot_pca_faces(X_train, X_test, image_shape)
# 뭉쳐서 구분이 잘 안됨
## 위에서 10개를 썼을때에도 잘 구별이 안되었으니 사실 당연 (산점도는 2개 주성분)
mglearn.discrete_scatter(X_train_pca[:, 0], X_train_pca[:, 1], y_train)
plt.xlabel('첫 번째 주성분')
plt.ylabel('두 번째 주성분')<jupyter_output><empty_output><jupyter_text>### 4-2. NMF 적용<jupyter_code>from sklearn.decomposition import NMF
nmf = NMF(n_components=15, random_state=0)
nmf.fit(X_train)
X_train_nmf = nmf.transform(X_train)
X_test_nmf = nmf.transform(X_test)
fig, axes = plt.subplots(3, 5, figsize=(15, 12),
subplot_kw={'xticks': (), 'yticks': ()})
for i, (component, ax) in enumerate(zip(nmf.components_, axes.ravel())):
ax.imshow(component.reshape(image_shape))
ax.set_title("성분 {}".format(i))
compn = 3
# 4번째 성분으로 정렬하여 처음 10개 이미지를 출력합니다
inds = np.argsort(X_train_nmf[:, compn])[::-1]
fig, axes = plt.subplots(2, 5, figsize=(15, 8),
subplot_kw={'xticks': (), 'yticks': ()})
for i, (ind, ax) in enumerate(zip(inds, axes.ravel())):
ax.imshow(X_train[ind].reshape(image_shape))
compn = 7
# 8번째 성분으로 정렬하여 처음 10개 이미지를 출력합니다
inds = np.argsort(X_train_nmf[:, compn])[::-1]
fig, axes = plt.subplots(2, 5, figsize=(15, 8),
subplot_kw={'xticks': (), 'yticks': ()})
for i, (ind, ax) in enumerate(zip(inds, axes.ravel())):
ax.imshow(X_train[ind].reshape(image_shape))<jupyter_output><empty_output><jupyter_text>### 4-3. PCA, NMF, k-means 비교<jupyter_code>X_train, X_test, y_train, y_test = train_test_split(
X_people, y_people, stratify=y_people, random_state=42)
nmf = NMF(n_components=100, random_state=0)
nmf.fit(X_train)
pca = PCA(n_components=100, random_state=0)
pca.fit(X_train)
kmeans = KMeans(n_clusters=100, random_state=0)
kmeans.fit(X_train)
X_reconstructed_pca = pca.inverse_transform(pca.transform(X_test))
X_reconstructed_kmeans = kmeans.cluster_centers_[kmeans.predict(X_test)]
X_reconstructed_nmf = np.dot(nmf.transform(X_test), nmf.components_)
fig, axes = plt.subplots(3, 5, figsize=(8, 8), subplot_kw={'xticks': (), 'yticks': ()})
fig.suptitle("추출한 성분")
for ax, comp_kmeans, comp_pca, comp_nmf in zip(
axes.T, kmeans.cluster_centers_, pca.components_, nmf.components_):
ax[0].imshow(comp_kmeans.reshape(image_shape))
ax[1].imshow(comp_pca.reshape(image_shape), cmap='viridis')
ax[2].imshow(comp_nmf.reshape(image_shape))
axes[0, 0].set_ylabel("kmeans")
axes[1, 0].set_ylabel("pca")
axes[2, 0].set_ylabel("nmf")
fig, axes = plt.subplots(4, 5, subplot_kw={'xticks': (), 'yticks': ()},
figsize=(8, 8))
fig.suptitle("재구성")
for ax, orig, rec_kmeans, rec_pca, rec_nmf in zip(
axes.T, X_test, X_reconstructed_kmeans, X_reconstructed_pca,
X_reconstructed_nmf):
ax[0].imshow(orig.reshape(image_shape))
ax[1].imshow(rec_kmeans.reshape(image_shape))
ax[2].imshow(rec_pca.reshape(image_shape))
ax[3].imshow(rec_nmf.reshape(image_shape))
axes[0, 0].set_ylabel("원본")
axes[1, 0].set_ylabel("kmeans")
axes[2, 0].set_ylabel("pca")
axes[3, 0].set_ylabel("nmf")<jupyter_output><empty_output><jupyter_text>### 4-4. 군집 알고리즘 비교하기
***k-means, DBSCAN, 병합 군집 알고리즘***<jupyter_code>from sklearn.decomposition import PCA
pca = PCA(n_components=100, whiten=True, random_state=0)
pca.fit_transform(X_people)
X_pca = pca.transform(X_people)<jupyter_output><empty_output><jupyter_text>1. DBSCAN<jupyter_code># 기본 매개변수로 DBSCAN을 적용
dbscan = DBSCAN()
labels = dbscan.fit_predict(X_pca)
print("고유한 레이블:", np.unique(labels))
dbscan = DBSCAN(min_samples=3)
labels = dbscan.fit_predict(X_pca)
print("고유한 레이블:", np.unique(labels))
dbscan = DBSCAN(min_samples=3, eps=15)
labels = dbscan.fit_predict(X_pca)
print("고유한 레이블:", np.unique(labels))
# 잡음 포인트와 클러스터에 속한 포인트 수를 셈
# bincount는 음수를 받을 수 없어서 labels에 1을 더하기
# 반환값의 첫 번째 원소는 잡음 포인트의 수
print("클러스터별 포인트 수:", np.bincount(labels + 1))
noise = X_people[labels==-1]
fig, axes = plt.subplots(3, 9, subplot_kw={'xticks': (), 'yticks': ()},
figsize=(12, 4))
for image, ax in zip(noise, axes.ravel()):
ax.imshow(image.reshape(image_shape), vmin=0, vmax=1)
for eps in [1, 3, 5, 7, 9, 11, 13]:
print("\neps=", eps)
dbscan = DBSCAN(eps=eps, min_samples=3)
labels = dbscan.fit_predict(X_pca)
print("클러스터 수:", len(np.unique(labels)))
print("클러스터 크기:", np.bincount(labels + 1))
dbscan = DBSCAN(min_samples=3, eps=7)
labels = dbscan.fit_predict(X_pca)
for cluster in range(max(labels) + 1):
mask = labels == cluster
n_images = np.sum(mask)
fig, axes = plt.subplots(1, 14, figsize=(14*1.5, 4),
subplot_kw={'xticks': (), 'yticks': ()})
i = 0
for image, label, ax in zip(X_people[mask], y_people[mask], axes):
ax.imshow(image.reshape(image_shape), vmin=0, vmax=1)
ax.set_title(people.target_names[label].split()[-1])
i += 1
for j in range(len(axes) - i):
axes[j+i].imshow(np.array([[1]*65]*87), vmin=0, vmax=1)
axes[j+i].axis('off')<jupyter_output><empty_output><jupyter_text>2. K-means<jupyter_code>n_clusters = 10
# k-평균으로 클러스터를 추출
km = KMeans(n_clusters=n_clusters, random_state=0)
labels_km = km.fit_predict(X_pca)
print("k-평균의 클러스터 크기:", np.bincount(labels_km))
fig, axes = plt.subplots(2, 5, subplot_kw={'xticks': (), 'yticks': ()},
figsize=(12, 4))
for center, ax in zip(km.cluster_centers_, axes.ravel()):
ax.imshow(pca.inverse_transform(center).reshape(image_shape),
vmin=0, vmax=1)
mglearn.plots.plot_kmeans_faces(km, pca, X_pca, X_people,
y_people, people.target_names)<jupyter_output><empty_output><jupyter_text>3. 병합 군집<jupyter_code># 병합 군집으로 클러스터를 추출
agglomerative = AgglomerativeClustering(n_clusters=10)
labels_agg = agglomerative.fit_predict(X_pca)
print("병합 군집의 클러스터 크기:",
np.bincount(labels_agg))
print("ARI: {:.2f}".format(adjusted_rand_score(labels_agg, labels_km)))
linkage_array = ward(X_pca)
# 클러스터 사이의 거리가 담겨있는 linkage_array로 덴드로그램 그리기
plt.figure(figsize=(20, 5))
dendrogram(linkage_array, p=7, truncate_mode='level', no_labels=True)
plt.xlabel("샘플 번호")
plt.ylabel("클러스터 거리")
ax = plt.gca()
bounds = ax.get_xbound()
ax.plot(bounds, [36, 36], '--', c='k')
n_clusters = 10
for cluster in range(n_clusters):
mask = labels_agg == cluster
fig, axes = plt.subplots(1, 10, subplot_kw={'xticks': (), 'yticks': ()},
figsize=(15, 8))
axes[0].set_ylabel(np.sum(mask))
for image, label, asdf, ax in zip(X_people[mask], y_people[mask],
labels_agg[mask], axes):
ax.imshow(image.reshape(image_shape), vmin=0, vmax=1)
ax.set_title(people.target_names[label].split()[-1],
fontdict={'fontsize': 9})
# 병합 군집으로 클러스터를 추출
agglomerative = AgglomerativeClustering(n_clusters=40)
labels_agg = agglomerative.fit_predict(X_pca)
print("병합 군집의 클러스터 크기:", np.bincount(labels_agg))
n_clusters = 40
for cluster in [13, 16, 23, 38, 39]: # 재미있는 클러스터 선택
mask = labels_agg == cluster
fig, axes = plt.subplots(1, 15, subplot_kw={'xticks': (), 'yticks': ()},
figsize=(15, 8))
cluster_size = np.sum(mask)
axes[0].set_ylabel("#{}: {}".format(cluster, cluster_size))
for image, label, asdf, ax in zip(X_people[mask], y_people[mask],
labels_agg[mask], axes):
ax.imshow(image.reshape(image_shape), vmin=0, vmax=1)
ax.set_title(people.target_names[label].split()[-1],
fontdict={'fontsize': 9})
for i in range(cluster_size, 15):
axes[i].set_visible(False)<jupyter_output>병합 군집의 클러스터 크기: [ 43 120 100 194 56 58 127 22 6 37 65 49 84 18 168 44 47 31
78 30 166 20 57 14 11 29 23 5 8 84 67 30 57 16 22 12
29 2 26 8]
<jupyter_text># 5. signals ### 5-1. NMF<jupyter_code>S = mglearn.datasets.make_signals()
plt.figure(figsize=(10, 2))
plt.plot(S, '-')
plt.xlabel('시간')
plt.ylabel('신호')
A = np.random.RandomState(0).uniform(size=(100, 3))
X = np.dot(S, A.T)
print('측정 데이터 형태: {}'.format(X.shape))
nmf = NMF(n_components=3, random_state=42)
S_ = nmf.fit_transform(X)
print('복원한 신호 데이터 형태: {}'.format(S_.shape))
# 비교를 위해 pca도 적용
pca = PCA(n_components=3)
H = pca.fit_transform(X)
models = [X, S, S_, H]
names = ['측정 신호(처음 3개)', '원본 신호', 'NMF 복원 신호', 'PCA 복원 신호']
fig, axes = plt.subplots(4, figsize=(8, 4), gridspec_kw={'hspace': .5}, subplot_kw={'xticks': (), 'yticks': ()})
for model, name, ax in zip(models, names, axes):
ax.set_title(name)
ax.plot(model[:, :3], '-')<jupyter_output><empty_output><jupyter_text># 6. digits### 6-1. TSNE<jupyter_code>from sklearn.datasets import load_digits
digits = load_digits()
fig, axes = plt.subplots(2, 5, figsize=(10, 5), subplot_kw={'xticks': (), 'yticks': ()})
for ax, img in zip(axes.ravel(), digits['images']):
ax.imshow(img)
# PCA 모델을 생성합니다
pca = PCA(n_components=2)
pca.fit(digits.data)
# 처음 두 개의 주성분으로 숫자 데이터를 변환합니다
digits_pca = pca.transform(digits.data)
colors = ["#476A2A", "#7851B8", "#BD3430", "#4A2D4E", "#875525",
"#A83683", "#4E655E", "#853541", "#3A3120","#535D8E"]
plt.figure(figsize=(10, 10))
plt.xlim(digits_pca[:, 0].min(), digits_pca[:, 0].max())
plt.ylim(digits_pca[:, 1].min(), digits_pca[:, 1].max())
for i in range(len(digits.data)):
# 숫자 텍스트를 이용해 산점도를 그립니다
plt.text(digits_pca[i, 0], digits_pca[i, 1], str(digits.target[i]),
color = colors[digits.target[i]],
fontdict={'weight': 'bold', 'size': 9})
plt.xlabel("첫 번째 주성분")
plt.ylabel("두 번째 주성분")
from sklearn.manifold import TSNE
tsne = TSNE(random_state=42)
digits_tsne = tsne.fit_transform(digits['data'])
plt.figure(figsize=(10, 10))
plt.xlim(digits_tsne[:, 0].min(), digits_tsne[:, 0].max() + 1)
plt.ylim(digits_tsne[:, 1].min(), digits_tsne[:, 1].max() + 1)
for i in range(len(digits.data)):
# 숫자 텍스트를 이용해 산점도를 그립니다
plt.text(digits_tsne[i, 0], digits_tsne[i, 1], str(digits.target[i]),
color = colors[digits.target[i]],
fontdict={'weight': 'bold', 'size': 9})
plt.xlabel("t-SNE 특성 0")
plt.ylabel("t-SNE 특성 1")<jupyter_output><empty_output><jupyter_text># 7. Two moons<jupyter_code>from sklearn.datasets import make_moons<jupyter_output><empty_output><jupyter_text>### 7-1. k-means clustering (1)
*잘못 분류한 사례*<jupyter_code>X, y = make_moons(n_samples=200, noise=0.05, random_state=0)
# 두 개의 클러스터로 데이터에 KMeans 알고리즘을 적용
kmeans = KMeans(n_clusters=2)
kmeans.fit(X)
y_pred = kmeans.predict(X)
# 클러스터 할당과 클러스터 중심을 표시
plt.scatter(X[:, 0], X[:, 1], c=y_pred, cmap=mglearn.cm2, s=60, edgecolors='k')
plt.scatter(kmeans.cluster_centers_[:, 0], kmeans.cluster_centers_[:, 1],
marker='^', c=[mglearn.cm2(0), mglearn.cm2(1)], s=100, linewidth=2, edgecolors='k')
plt.xlabel("특성 0")
plt.ylabel("특성 1")<jupyter_output><empty_output><jupyter_text>### 7-2. k-means clustering (2)
*PCA, NMF와 비교되는 것*
\- ***차원보다 많은 클러스터를 지정할 수 있음***<jupyter_code>X, y = make_moons(n_samples=200, noise=0.05, random_state=0)
kmeans = KMeans(n_clusters=10, random_state=0)
kmeans.fit(X)
y_pred = kmeans.predict(X)
plt.scatter(X[:, 0], X[:, 1], c=y_pred, s=60, cmap='Paired', edgecolors='black')
plt.scatter(kmeans.cluster_centers_[:, 0], kmeans.cluster_centers_[:, 1], s=60,
marker='^', c=range(kmeans.n_clusters), linewidth=2, cmap='Paired', edgecolors='black')
plt.xlabel("특성 0")
plt.ylabel("특성 1")
print("클러스터 레이블:\n", y_pred)
distance_features = kmeans.transform(X)
print("클러스터 거리 데이터의 형태:", distance_features.shape)
print("클러스터 거리:\n", distance_features)<jupyter_output>클러스터 거리 데이터의 형태: (200, 10)
클러스터 거리:
[[1.54731274 1.03376805 0.52485524 ... 1.14060718 1.12484411 1.80791793]
[2.56907679 0.50806038 1.72923085 ... 0.149581 2.27569325 2.66814112]
[0.80949799 1.35912551 0.7503402 ... 1.76451208 0.71910707 0.95077955]
...
[1.12985081 1.04864197 0.91717872 ... 1.50934512 1.04915948 1.17816482]
[0.90881164 1.77871545 0.33200664 ... 1.98349977 0.34346911 1.32756232]
[2.51141196 0.55940949 1.62142259 ... 0.04819401 2.189235 2.63792601]]
<jupyter_text>### 7-3. DBSCAN<jupyter_code>X, y = make_moons(n_samples=200, noise=0.05, random_state=0)
scaler = StandardScaler()
X_scaled = scaler.fit_transform(X)
dbscan = DBSCAN()
clusters = dbscan.fit_predict(X_scaled)
plt.scatter(X_scaled[:, 0], X_scaled[:, 1], c=clusters, cmap='Set2', s=100, edgecolors='black', alpha=.65)
plt.xlabel("특성 0")
plt.ylabel("특성 1")<jupyter_output><empty_output><jupyter_text>### 7-4. 군집 알고리즘 비교와 평가<jupyter_code>from sklearn.metrics.cluster import adjusted_rand_score
X, y = make_moons(n_samples=200, noise=0.05, random_state=0)
# 평균이 0, 분산이 1이 되도록 데이터의 스케일을 조정
scaler = StandardScaler()
scaler.fit(X)
X_scaled = scaler.transform(X)
fig, axes = plt.subplots(1, 4, figsize=(15, 3),
subplot_kw={'xticks': (), 'yticks': ()})
# 사용할 알고리즘 모델을 리스트로 만들기
algorithms = [KMeans(n_clusters=2), AgglomerativeClustering(n_clusters=2),
DBSCAN()]
# 비교를 위해 무작위로 클러스터 할당
random_state = np.random.RandomState(seed=0)
random_clusters = random_state.randint(low=0, high=2, size=len(X))
# 무작위 할당한 클러스터를 그리기
axes[0].scatter(X_scaled[:, 0], X_scaled[:, 1], c=random_clusters,
cmap=mglearn.cm3, s=60, edgecolors='black')
axes[0].set_title("무작위 할당 - ARI: {:.2f}".format(
adjusted_rand_score(y, random_clusters)))
for ax, algorithm in zip(axes[1:], algorithms):
# 클러스터 할당과 클러스터 중심 그림
clusters = algorithm.fit_predict(X_scaled)
ax.scatter(X_scaled[:, 0], X_scaled[:, 1], c=clusters,
cmap=mglearn.cm3, s=60, edgecolors='black')
ax.set_title("{} - ARI: {:.2f}".format(algorithm.__class__.__name__,
adjusted_rand_score(y, clusters)))
from sklearn.metrics import accuracy_score
# 포인트가 클러스터로 나뉜 두 가지 경우
clusters1 = [0, 0, 1, 1, 0]
clusters2 = [1, 1, 0, 0, 1]
# 모든 레이블이 달라졌으므로 정확도는 0
print("정확도: {:.2f}".format(accuracy_score(clusters1, clusters2)))
# 같은 포인트가 클러스터에 모였으므로 ARI는 1
print("ARI: {:.2f}".format(adjusted_rand_score(clusters1, clusters2)))<jupyter_output>정확도: 0.00
ARI: 1.00
<jupyter_text>-----
- 실루엣 계수로 확인해보기
- 오히려 k-means가 가장 높음, 실루엣 계수는 복잡한 세트에선 잘 맞지 않는다.<jupyter_code>from sklearn.metrics.cluster import silhouette_score
X, y = make_moons(n_samples=200, noise=0.05, random_state=0)
# 평균이 0, 분산이 1이 되도록 데이터의 스케일을 조정합니다
scaler = StandardScaler()
scaler.fit(X)
X_scaled = scaler.transform(X)
fig, axes = plt.subplots(1, 4, figsize=(15, 3),
subplot_kw={'xticks': (), 'yticks': ()})
# 비교를 위해 무작위로 클러스터 할당을 합니다
random_state = np.random.RandomState(seed=0)
random_clusters = random_state.randint(low=0, high=2, size=len(X))
# 무작위 할당한 클러스터를 그립니다
axes[0].scatter(X_scaled[:, 0], X_scaled[:, 1], c=random_clusters,
cmap=mglearn.cm3, s=60, edgecolors='black')
axes[0].set_title("무작위 할당: {:.2f}".format(
silhouette_score(X_scaled, random_clusters)))
algorithms = [KMeans(n_clusters=2), AgglomerativeClustering(n_clusters=2),
DBSCAN()]
for ax, algorithm in zip(axes[1:], algorithms):
clusters = algorithm.fit_predict(X_scaled)
# 클러스터 할당과 클러스터 중심을 그립니다
ax.scatter(X_scaled[:, 0], X_scaled[:, 1], c=clusters, cmap=mglearn.cm3,
s=60, edgecolors='black')
ax.set_title("{} : {:.2f}".format(algorithm.__class__.__name__,
silhouette_score(X_scaled, clusters)))<jupyter_output><empty_output>
|
no_license
|
/study/3장_비지도학습과_데이터_전처리.ipynb
|
Sean-Parkk/introduction_to_ML_with_Python
| 33 |
<jupyter_start><jupyter_text>## DSP Task (number 7)Completed by Viktor Kosikov, student of group 716### ProblemPropose your own discrete deterministic signal. For this signal,
find the windowed discrete Fourier transform with a window length of $\frac{1}{10}$ of the signal
length and a window shift of $\frac{1}{4}$ of the window length relative to the previous position.### SolutionLet's create a deterministic discrete signal<jupyter_code># import modules
import numpy as np
import matplotlib.pyplot as plt
from scipy.fftpack import fft
%matplotlib inline
# Select: plot, stem, bar
def plt_sel(s, *args, **kwargs):
if s == 0:
return plt.plot(*args)
if s == 1:
return plt.stem(*args, **kwargs)
if s == 2:
return plt.step(*args)
# number of samples
N = 80
# NOTE (assumption): the original notebook never defines the signal `r` or the sampling
# interval `T` used further below, so a simple deterministic test signal is defined here.
T = 1.0 / N  # assumed sampling interval, only used for the frequency axis `xf`
n_samples = np.arange(N)
r = np.sin(2*np.pi*5*n_samples/N) + 0.5*np.sin(2*np.pi*12*n_samples/N)
X = fft(r)
# Plot the results
fig = plt.figure(figsize=(12, 4), dpi=80)
# The signal itself
plt.subplot(1, 2, 1)
plt.title('Signal')
plt.stem(r, use_line_collection=True, basefmt='C0')
plt.xlim([0, N-1])
plt.xlabel('samples')
plt.grid()
# DFT
plt.subplot(1, 2, 2)
plt.title('Spectrum without window')
plt.stem(X, use_line_collection=True, basefmt='C0')
plt.xlim([0, N//2-1])
plt.xlabel('frequency')
plt.grid()
plt.tight_layout()
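# Illustrative sketch (assumption, not part of the original solution): the task asks for a
# windowed DFT with window length N/10 and a hop of 1/4 of the window length. The loop
# below evaluates the windowed DFT at every such window position, not just the first two.
from scipy.signal.windows import bartlett as _bartlett
_win_len = N // 10
_hop = _win_len // 4
windowed_spectra = []
for _start in range(0, N - _win_len + 1, _hop):
    _w_full = np.zeros(N)
    _w_full[_start:_start + _win_len] = _bartlett(_win_len)
    windowed_spectra.append(fft(r * _w_full))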
# create a triangular (Bartlett) window of length 1/10 of the signal
from scipy.signal.windows import bartlett
w = bartlett(N // 10)
w_add = np.zeros(N - (N//10))
window = np.concatenate((w, w_add))
ywf = fft(r*window)
xf = np.linspace(0.0, 1.0/(2.0*T), N//2)
# Plot the graphs
fig = plt.figure(figsize=(12, 4), dpi=80)
# The signal itself
plt.subplot(1, 2, 1)
plt.title('Signal')
plt.stem(r, use_line_collection=True, basefmt='C0')
plt.xlim([0, N-1])
plt.xlabel('samples')
plt.grid()
# DFT with window
plt.subplot(1, 2, 2)
plt.title('Spectrum with window')
plt.stem(ywf, use_line_collection=True, basefmt='C0')
plt.xlim([0, N//2-1])
plt.xlabel('frequency')
plt.grid()
plt.tight_layout()
# shift the window by 1/4 of its length
bias = np.zeros(len(w) // 4)
window_biased = np.concatenate((bias, w, np.zeros(len(w_add) - len(bias))))
ywfb = fft(r*window_biased)  # use the shifted window computed above
xf = np.linspace(0.0, 1.0/(2.0*T), N//2)
# The signal itself
plt.subplot(1, 2, 1)
plt.title('Signal')
plt.stem(r, use_line_collection=True, basefmt='C0')
plt.xlim([0, N-1])
plt.xlabel('samples')
plt.grid()
# Its DFT with the shifted window
plt.subplot(1, 2, 2)
plt.title('Spectrum with window')
plt.stem(ywfb, use_line_collection=True, basefmt='C0')
plt.xlim([0, N//2-1])
plt.xlabel('frequency')
plt.grid()
plt.tight_layout()<jupyter_output><empty_output>
|
no_license
|
/DSP_Task.ipynb
|
fiendfire2807/Study-tasks
| 1 |
<jupyter_start><jupyter_text>## In which we add some code to loop over all subjects, images, conditions and save results.
### Model variable classesUtility variables needed for various things below<jupyter_code>##a theano random number generator
rng = MRG_RandomStreams(use_cuda = True)
##data types
floatX = 'float32'
intX = 'int32'
##sometimes we want to convert to 1hot format outside of theano expressions
##so we compile the theano to_one_hot function here
_anyVector = T.vector('anyMatrix',dtype=intX)
_anyInt = T.scalar('anyInt',dtype=intX)
to_one_hot_func = function(inputs=[_anyVector,_anyInt], outputs=to_one_hot(_anyVector, _anyInt))<jupyter_output><empty_output><jupyter_text>Basic model dimensions<jupyter_code>##defines number of objects in an object map
class numObjects(object):
def __init__(self, largest=10):
##npy vars
self.maxObjs = largest
##theano expressions
self._K = T.scalar('numObjects',dtype='int32') ##a theano integer scalar
def sample(self,M=1):
return np.random.randint(1,self.maxObjs,size=M)
def set_value(self,value):
self.K = np.array(value,dtype=intX)
##number of pixels (row, col, total) in an object map
class numPixels(object):
def __init__(self, largestRows=10,largestCols=10):
##npy vars
self.maxRows = largestRows
self.maxCols = largestCols
##theano expressions
self._D1 = T.scalar('numPixelRows',dtype='int32') ##num pixel rows
self._D2 = T.scalar('numPixelCols',dtype='int32') ##num pixel cols
self._D = self._D1*self._D2 ##total number of pixels
self._D.name = 'numPixels'
def sample(self,M=1):
numRows = np.random.randint(1,self.maxRows,size=M)
numCols = np.random.randint(1,self.maxCols,size=M)
numPixels = numRows*numCols
return numRows,numCols,numPixels
def set_value(self,D1,D2):
self.D1 = np.array(D1,dtype=intX)
self.D2 = np.array(D2,dtype=intX)
self.D = D1*D2<jupyter_output><empty_output><jupyter_text>Gamma-distributed hyperparameter for prior distribution over object categories<jupyter_code>class priorDispersion(object):
def __init__(self,maxDispersion=10):
##npy vars
self.maxDispersion = maxDispersion
##theano exprs.
self._dispersion = T.scalar('priorDispersionParam',dtype='float32') ##dispersion of categorical distribution
def sample(self,M=1):
return np.random.gamma(self.maxDispersion,size=M)
def set_value(self,value):
self.dispersion = np.array(value,dtype=floatX)
<jupyter_output><empty_output><jupyter_text>The dirichlet-distributed K parameters that define multinomial object distribution prior<jupyter_code>class categoryProbs(object):
def __init__(self, numObjects_inst, priorDispersion_inst):
self.priorDispersion = priorDispersion_inst
self.numObjects = numObjects_inst
self._pi= T.matrix('categoryPrior') ##1 x K
def sample(self,M=1):
'''
sample(M)
generate M categorical distributions
each categorical distribution is a (1 x K) draw from a dirichlet
returns M x K numpy array of draws, each row an independent draw
'''
return np.random.dirichlet(np.repeat(self.priorDispersion.dispersion,self.numObjects.K),size=M)
def set_value(self, value):
self.pi = np.array(value,dtype=floatX)<jupyter_output><empty_output><jupyter_text>The object map $Z$<jupyter_code>class latentObjMap(object):
def __init__(self,categoryPrior_inst, numPixels_inst):
self.categoryPrior=categoryPrior_inst
self.numPixels = numPixels_inst
##theano var for a stack of M object maps
self._Z = T.tensor3('Z', dtype=intX) ##(M x K x D) ##M x K x D stack of one-hot object maps
self.compile_sampler()
def compile_sampler(self):
_M = T.scalar('M',dtype='int32') ##number of samples
_D = self.numPixels._D
_K = self.categoryPrior.numObjects._K
_pZ = T.matrix('pZ') ##(K x D) prior
##this will be an M x K x D int32 tensor
_sampleZ = rng.multinomial(pvals = repeat(_pZ.T,_M,axis=0)).reshape((_D,_M,_K)).dimshuffle((1,2,0))
self._Z_sample_func = function([_pZ,_M,_K,_D],outputs=_sampleZ)
def sample(self, M=1, pZ=None):
'''
sample(M=1)
returns M x K x D numpy array of samples of 1hot encoded object maps.
each pixel of each map is an i.i.d draw from a categorical prior
'''
K = self.categoryPrior.numObjects.K
D = self.numPixels.D
if pZ is None:
catProbs = self.categoryPrior.pi
pZ = np.repeat(catProbs.T, D, axis=1)
thisM = np.array(M,dtype=intX)
return self._Z_sample_func(pZ,thisM,K,D).astype(intX)
def score(self,Z, given_params):
#so this would be p(Z|pi), i.e. the prior on Z.
#don't think I need this yet, so I'll pass.
pass
def view_sample(self, sampleZ, show=True):
'''
view_sample(sampleZ, show=True)
takes a 1 x K x D sample of a 1hot encoded object map
and converts it to a D1 x D2 image of the map.
makes plot if show=True
'''
sampleZImage = np.argmax(sampleZ,axis=1).reshape((self.numPixels.D1,self.numPixels.D2))
if show:
plt.imshow(sampleZImage, interpolation='nearest')
return sampleZImage
<jupyter_output><empty_output><jupyter_text>#### The windows<jupyter_code>foo = [(1,2),(2,4)]
print foo
foo.insert(0,(3,3))
print foo
class probes(object):
##dumb initializer. Want to be able to process windows before assigning as attributes
def __init__(self):
pass
def resolve(self, shape, workingScale=None):
'''
resolve(shape, workingScale=None)
inputs:
shape ~ shape tuple of the window probes (D1Prime, D2Prime)
workingScale ~ integer multiple of smallest possible resolution that perfectly preserves aspect ratio
outputs:
resolutions ~ list of shape tuples. all possible downsamples that perfectly preserve aspect ratio
workingResolution ~ shape tuple of the working scale used for the windows. pass along with windows to "reshape".
You want a working scale that is small as possible without sacrificing any of the objects
in the target image and preserves number of objects in smallest window and number of objects in most dense window.
'''
##get the smallest permissable downsample
greatestCommonDivisor = gcd(shape[0], shape[1])
##generate all permissable resolutions up to max (i.e. all integer rescalings)
smallestResolution = (shape[0]/greatestCommonDivisor, shape[1]/greatestCommonDivisor)
resolutions = [(ii*smallestResolution[0], ii*smallestResolution[1]) for ii in range(2,greatestCommonDivisor+1,2)]
resolutions.insert(0,smallestResolution)
##get the working window resolution
if workingScale is None:
workingScale = -1
workingResolution = resolutions[workingScale]
return resolutions, workingResolution
def reshape(self, windows, reshapeTo):
##downsample factor
N = windows.shape[0]
dns = np.prod(reshapeTo)/np.prod(windows.shape[-2])
##check that smallest window is greater than one pixel
windowSizes = np.sum(np.sum(windows,axis=2),axis=1)
smallestWindowSize = np.min(windowSizes)
sizeOfThisWindowAfterDownsampling = smallestWindowSize*dns
if sizeOfThisWindowAfterDownsampling < 1:
raise ValueError('too much downsampling. smallest window less than one pixel')
else:
windows = np.array(map(lambda x: imresize(x,reshapeTo,interp='nearest', mode='L'), windows))
return windows
def set_value(self,windows,flatten = False):
self.N,self.D1Prime,self.D2Prime = windows.shape
self.DPrime = self.D1Prime*self.D2Prime
if flatten:
windows = windows.reshape((self.N,self.DPrime))
self.windows = windows.astype(intX)
def make_windows(self, makeWindows=False, stride = 1, sizes = [1], n_groups=7, group_order = 2):
if makeWindows:
Windows = []
for sz in sizes:
scale_count = 0
for rows in np.arange(sz,D1,stride,dtype=int, ):
for cols in np.arange(sz,D2,stride,dtype=int):
one_win = np.zeros((D1,D2),dtype=floatX)
one_win[(rows-sz):(rows+sz), (cols-sz):(cols+sz)]=1
Windows.append(one_win)
scale_count +=1
print scale_count
N = len(Windows)
n_groups *= D
W = np.zeros((N+n_groups,D),dtype=intX)
for n in range(N):
W[n,:] = Windows.pop().ravel()
for n in range(N,N+n_groups):
rand_pairs = np.random.permutation(N)[:group_order]
W[n,:] = np.clip(np.sum(W[rand_pairs[0:group_order],:],axis=0), 0, 1)
W = W[:(N+n_groups),:]
N = W.shape[0]
return W.astype(intX)
class target_image(object):
def __init__(self):
pass
def set_values(self,targetObjectMap, targetImage=None):
self.targetImage = targetImage
values = np.array(np.unique(targetObjectMap))
self.targetObjectMap = np.digitize(np.array(targetObjectMap), bins=values, right=True ).astype(intX)
values = np.array(np.unique(targetObjectMap))
self.numberTargetObjects = len(values)
##one-hot encoding
D = np.prod(self.targetObjectMap.shape)
K = self.numberTargetObjects
self.testObjectMapOneHot = np.eye(K)[self.targetObjectMap.ravel()].T.reshape((1,K,D))
def reshape(self,targetObjectMap, reshapeTo, targetImage=None):
values = np.array(np.unique(targetObjectMap))
imK = len(values)
##resize to window, checking for preserved number of objects
testObjectMap=imresize(targetObjectMap, reshapeTo, interp='nearest')
values = np.array(np.unique(testObjectMap))
assert imK==len(values)
##digitize, checking
testObjectMap=np.digitize(np.array(testObjectMap), bins=values, right=True ).astype(int)
values = np.array(np.unique(testObjectMap))
assert imK==len(values)
if targetImage is not None:
testTargetImage = imresize(targetImage, reshapeTo, interp='nearest')
return testObjectMap, testTargetImage
else:
return testObjectMap
<jupyter_output><empty_output><jupyter_text>The noise params $\theta_{+}, \theta_{-}$<jupyter_code>class noiseParams(object):
def __init__(self):
pass
def enumerate_param_grid(self,theta_dns):
'''
enumerate_param_grid(theta_dns)
inputs:
theta_dns ~ integer indicating roughly sqrt(G)*2, G = number of grid points
outputs:
noise_param ~ 2 x G np array. columns are pairs of noise params, [p_on; p_off]. p_off < p_on for each pair.
'''
p_on, p_off = noise_grid(theta_dns,theta_dns)
return p_on, p_off<jupyter_output><empty_output><jupyter_text>The observed responses $\mathbf{r}$<jupyter_code>class responses(object):
'''
responses(latentObjMap_inst, noiseParams_inst)
inputs:
latentObjMap_inst, noiseParams_inst ~ instances of (see above)
outputs:
construct a responses instance with main attribute "lkhd_cube" that
allows to compute likelihood of responses given an object map Z
can set observed responses to known windows using 'set_values'
'''
def __init__(self,latentObjMap_inst, noiseParams_inst):
self.Z = latentObjMap_inst
self.noise = noiseParams_inst
##a G x K x K cube of p(r|count,p_on,p_off)
##K = max count and/or response, G = number of noise params
self.lkhd_cube = T.tensor3('lkhd_cube')
# self.windows=windows ##N x D', where D' = number of stimulus pixels.
self._W = T.matrix('windows', dtype=intX)
self._DPrime = self._W.shape[1]
self.compile_upsampler()
self.compile_object_counter()
def compile_upsampler(self):
##get theano vars for Z and it's dimensions
_Z = self.Z._Z ##M x K x D stack of one-hot object maps
_M = _Z.shape[0]
_K = self.Z.categoryPrior.numObjects._K
_D1 = self.Z.numPixels._D1
_D2 = self.Z.numPixels._D2
_D = self.Z.numPixels._D
##compute an upscale factor so we can upsample pixels in Z to match W
_upscale = T.sqrt(self._DPrime // _D).astype(intX)
##this converts back to image format (M x D1 x D2), where D = D1*D2. int32
_Z_images = _Z.argmax(axis=1).reshape((_M,_D1,_D2), ndim=3)
##this upsamples the Z-maps to (M x D1' x D2'), where D' = D1'*D2'. still int32
_Z_upsamples = T.tile(_Z_images.dimshuffle(0,1,2,'x','x'), (_upscale,_upscale)).transpose(0,1,3,2,4).reshape((_M,_D1*_upscale, _D2*_upscale))
##make a function so we can see upsampled object maps as images (if needed)
self._Z_usample_image_func = function([_M, _D1, _D2, self._DPrime, _Z], outputs=_Z_upsamples)
##this converts the Z-maps back to a one-hot encoding (M x K x D').
_Z_upsamples_one_hot = to_one_hot(_Z_upsamples.ravel(), _K,dtype=intX).reshape((_M, self._DPrime, _K)).transpose((0,2,1))
self._Z_upsample_one_hot_func = function([_M, _K, _D1, _D2, self._DPrime, _Z], outputs=_Z_upsamples_one_hot)
def compile_object_counter(self):
_Z = self.Z._Z ##M x K x D stack of one-hot object maps
_W = self._W
##(M x K x 1 x D')
## N x D'
##(M x K x N x D') sum(D')
##(M x K x N) clip(0,1)
##(M x K x N) sum(K)
##(M x N)
self._object_counts = T.sum(_Z.dimshuffle((0,1,'x',2))*_W,axis=-1).clip(0,1).sum(axis=1)
self._object_count_func = function([_Z, _W], outputs=self._object_counts)
def compute_feature(self,Z, winIdx=None):
'''
compute_feature(Z, winIdx = None)
the feature in this case is an object count over a map Z given a window W
inputs:
Z is an MxKxD stack of one-hot encoded object maps
winIdx ~ array of window indices for getting responses.
if supplied, N' = len(winIdx), otherwise N' = total number of windows
output:
applies N'xD' matrix of windows to map Z, returns MxN' matrix of object counts
note: if D != D', upsamples Z's so they each have D' pixels
'''
M = np.array(Z.shape[0],dtype=intX)
K = self.Z.categoryPrior.numObjects.K
D1 = self.Z.numPixels.D1
D2 = self.Z.numPixels.D2
D = self.Z.numPixels.D
if winIdx is not None:
windows = self.windows[winIdx]
else:
windows = self.windows
if D != self.DPrime:
# print 'upsampling'
Z = self._Z_upsample_one_hot_func(M,K,D1,D2,self.DPrime,Z)
return self._object_count_func(Z,windows)
def score(self,r,c,p_on,p_off):
'''
calculates p(response | object_count, p_on, p_off), the strange likelihood I derived for this model
score(response,object_count,p_on,p_off)
inputs:
response ~ integer
object_count ~ integer
p_on,p_off ~ noise params
outputs:
scalar likelihood value.
'''
##
K = self.Z.categoryPrior.numObjects.K
try:
counts = np.array([nCk(c,m)*nCk(K-c, r-m) for m in range(min(r,c)+1)])
probs = np.array([(1-p_on)**(c-m) * (p_on)**m * (p_off)**(r-m) * (1-p_off)**(K-c-r+m) for m in range(min(r,c)+1)])
return counts.dot(probs)
except TypeError:
print 'Error in score. response and count should be integer-valued.'
print r
print c
print min(r,c)
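    # Added worked special case (not in the original): for a response of r = 0 the sum in
    # `score` collapses to the single m = 0 term, giving
    #     p(r = 0 | c, p_on, p_off) = (1 - p_on)**c * (1 - p_off)**(K - c),
    # i.e. every one of the c present objects is missed and none of the K - c absent
    # objects is hallucinated.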
def make_lkhd_cube(self,p_on, p_off):
'''
make_lkhd_cube(p_on,p_off)
inputs:
p_on,p_off ~ two array-likes of matching length=G giving pairs of noise params
outputs:
lkhd_cube ~ G x (K+1) x K tensor of "scores". dim1 = noise params, dim2 = responses, dim3 = conditioning counts
The response dimension has size K+1 because subjects can respond 0 through K.
'''
K = self.Z.categoryPrior.numObjects.K
countRange = np.arange(1,K+1,dtype=intX)
respRange = np.arange(0,K+1,dtype=intX)
G = len(p_on)
lkhd_cube = np.full((G,K+1,K),0,dtype=floatX)
for g,p in enumerate(zip(p_on,p_off)):
for r in respRange:
for c in countRange:
lkhd_cube[g,r,c-1] = self.score(r,c,p[0],p[1])
return lkhd_cube
def sample(self,Z,p_on,p_off):
'''
sample(Z,p_on,p_off)
inputs:
Z ~ M x K x D
p_on,p_off ~ noise params
returns
noisy_object_counts = M x N matrix of counts drawn i.i.d from likelihood
this is not part of VI, so this is all done on cpu (no theano)
'''
M = Z.shape[0] ##number of maps to sample
K = self.Z.categoryPrior.numObjects.K
object_counts = self.compute_feature(Z)
N = object_counts.shape[1]
sample_responses = np.zeros((M,N), dtype = intX)
for m in range(M):
for n in range(N):
resp_dist = np.zeros(K+1)
oc = np.int(object_counts[m,n])
for k in range(K+1):
resp_dist[k] = self.score(k,oc,p_on,p_off)
sample_responses[m,n]=np.argmax(np.random.multinomial(1,resp_dist))
return sample_responses
def set_values(self,data=None, windows=None):
'''
set_values(data, windows)
inputs: supply either or both
data ~ (1 x N) or (N x 1) or (N,) array of integer responses
windows ~ instance of "probes" class containing (N x D') array of binary probes, where D' = = number of stimulus pixels.
outputs:
sets the observations attribute to data
sets the windows attributes to windows
checks for basic compatibility between windows and responses
'''
if data is not None:
self.observations = np.squeeze(data) ##array of shape (N,)
if windows is not None:
self.windows = windows.windows ##N x D', where D' = number of stimulus pixels.
self.N = self.windows.shape[0]
self.DPrime = np.array(windows.windows.shape[1],dtype=intX)
self.D1Prime = windows.D1Prime
self.D2Prime = windows.D2Prime
if self.DPrime != self.D1Prime*self.D2Prime:
raise ValueError('aspect ratio of windows is not right')
if hasattr(self,'observations') & hasattr(self, 'windows'):
Nobs = self.observations.shape[0]
if Nobs != self.N:
raise ValueError("number of windows (%d) and data points (%d) don't match" %(self.N, Nobs))
if any(map(lambda x,y: x>y, self.observations, self.windows.sum(axis=1))):
raise ValueError('some responses are larger than number of pixels in corresponding window. probably over-downsampled windows')<jupyter_output><empty_output><jupyter_text>### Inference classesDefine variational updates and optimization procedures for model variables<jupyter_code>class inferQZ(object):
def __init__(self):
self.compile_updater()
def compile_updater(self):
##K x N x K tensor of object count probs
##the first K is object id, the last K is object count
_oc_probs = T.tensor3('oc_probs')
##N x K, this will just be the log of P_star.
_lnP_star = T.matrix('lnP_star')
## K x 1, this is place holder for the dot product between pixel value and expected log of hyperparameters pi
_v = T.matrix('prior_penalties')
##K x N x K oc_probs
##(tensordot)
##N x K lnP_star
##K x 1 lnQ_z
##(add V)
##K x 1
##(exp)
##K x 1 Q_z_nn
##(normalize)
##K x 1 Q_z, the variational posterior element for one pixel
##log of Q_z (minus unknown constant that normalizes things)
_lnQ_z = T.tensordot(_oc_probs, _lnP_star, [[1,2], [0,1]]).dimshuffle([0,'x'])+_v
##non-normalized Q_z. we shift by max value to stabilize upcoming exp
_Q_z_nn = T.exp(_lnQ_z-T.max(_lnQ_z))
##variational posterior prob for one pixel
_Q_z = _Q_z_nn / _Q_z_nn.sum()
self._update_func = function([_oc_probs, _lnP_star, _v], outputs = _Q_z)
class optimizeNoiseParams(object):
def __init__(self):
'hi'
def update_noiseParams(self, goodnessOfFit):
'''
update_noiseParams(goodnessOfFit)
goodnessOfFit ~ G x 1 array of g.o.f. measures, one for each discrete candidate of noise params.
returns max, argmax of this array
'''
##find index of the best lkhd params (scalar integer)
goodnessOfFitStar = np.max(goodnessOfFit)
noiseParamStarIdx = np.argmax(goodnessOfFit)
return goodnessOfFitStar, noiseParamStarIdx
class inferQPi(object):
def __init__(self,):
self.compile_updater()
def compile_updater(self):
_alpha_0 = T.scalar('alpha_0')
_q_Z = T.matrix('q_Z') ##K x 1, this is result of summing over pixels in Q_Z matrix. very different from Q_z
_alpha = _q_Z + _alpha_0 ##broadcasts the scalar _alpha_0 across K
##(K x 1) this is needed to update Q_z
_Eln_pi = T.psi(_alpha) - T.psi(_alpha.sum())
##(K x 1) not needed for update of variational posteriors, but we'll want it for model interpretation
_E_pi = _alpha / _alpha.sum()
##returns _Eln_pi (K x 1) (object ids x 1)
##returns _E_pi (K x 1) (object ids x 1)
self._update_func = function([_q_Z, _alpha_0], outputs = [_Eln_pi, _E_pi])
<jupyter_output><empty_output><jupyter_text>Variational inference for $q(Z)$, $q(\pi)$, noiseparams
*Nasty book-keeping note*: counts, responses, and object id's are often converted to indices, or to 1hot formats.
To convert counts to indices we need to subtract 1, because counts always range from 1 to K (inclusive),
whereas indices range from 0 to K-1.
To convert object id's to indices we don't need to subtract 1, because id's are coded from 0 to (K-1).
To convert responses to indices we don't need to subtract, because responses range from 0 to K (inclusive).
Because 0 to K (inclusive) is K+1 numbers, we do need to make sure that arrays for storing discrete responses have K+1 elements.
Whew.<jupyter_code>class VI(object):
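    # Added index-convention note (not in the original): with K = 3 objects, counts c run
    # 1..3 and index the likelihood tables as c - 1 -> 0..2, responses r run 0..3 and index
    # directly (so response arrays need K + 1 slots), and object ids already run 0..2.
    # For example, to_one_hot_func(np.array([1, 3], dtype=intX) - 1, np.array(3, dtype=intX))
    # one-hot encodes the counts 1 and 3 as [1, 0, 0] and [0, 0, 1].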
def __init__(self, responses_inst, inferQZ_inst, optimizeNoiseParams_inst, inferQPi_inst):
self.responses = responses_inst
self.iQZ = inferQZ_inst
self.oNP = optimizeNoiseParams_inst
self.iQPi = inferQPi_inst
##compile theano expressions, functions
self.compile_object_probs()
self.compile_ELBO()
self.compile_predictive_distribution()
self.compile_goodness_of_fit()
##===compile theano expressions, functions===
def compile_object_probs(self):
_object_counts = self.responses._object_counts ##(M x N)
_K = self.responses.Z.categoryPrior.numObjects._K
##non-normalized object count probs (N x K)
##we are using these counts as indices, so we have to subtract 1
_object_count_prob_nn = to_one_hot(_object_counts.astype('int32').flatten()-1,_K).reshape((_object_counts.shape[0],_object_counts.shape[1],_K)).sum(axis=0)
##object count probs (N x K)
_object_count_prob = _object_count_prob_nn / _object_count_prob_nn.sum(axis=1).reshape((_object_count_prob_nn.shape[0], 1))
self.object_count_prob_func = function([_object_counts, _K], outputs = _object_count_prob)
def compile_goodness_of_fit(self):
_P_theta = T.tensor3('_P_theta') ##(G x N x K)
_oc_probs = T.matrix('oc_probs') ##N x K ~ this is a place holder for the "object count prob" matrix
##(G x N x K)
##( N x K) (dot product, broadcast across G)
##(G x 1) --> because we don't do vectors we reshape to make output 2Dimensional (G x 1)
##log of likelihood
_ln_P_theta = T.log(_P_theta)
##ln [q(theta+, theta-) - const (G x 1) ], the log of the unormalized variational distribution over theta
##currently we're taking a point estimate on theta so we don't do the hard normalization step.
_goodnessOfFit = T.tensordot(_ln_P_theta, _oc_probs, axes=[[1,2], [0,1]],).reshape((_P_theta.shape[0], 1))
##returns G x 1 goodness of fit array. each element gives goodness of fit for one possible value of noiseparams
self.goodness_of_fit_func = function([_P_theta, _oc_probs], outputs = _goodnessOfFit)
def compile_ELBO(self):
_goodnessOfFitStar = T.scalar('_goodnessOfFitStar') ##scalar
_qZ = T.matrix('qZ_holder') ##N x K
##scalar: the entropy of the variational posterior
a_min = 10e-15
a_max = 1
_posterior_entropy = -T.tensordot(_qZ.clip(a_min,a_max), T.log(_qZ.clip(a_min,a_max)))
##scalar
_ELBO = _goodnessOfFitStar + _posterior_entropy
self.ELBO_update_func = function([_qZ, _goodnessOfFitStar], outputs=[_goodnessOfFitStar, _posterior_entropy, _ELBO])
def compile_predictive_distribution(self):
_lkhdTable = T.matrix('lkhd_table_pred') ##K+1 x K ~ responses x counts, likelihood for fixed noiseparams
##windows x 1 x counts |
## responses x counts tensordot
##windows x responses
_oc_probs = T.matrix('oc_probs') ##N x K ~ this is a place holder for the "object count prob" matrix
_pred_dist = T.tensordot(_oc_probs, _lkhdTable, axes = [[1],[1]])
self.predictive_distribution_update_func = function(inputs=[_oc_probs,_lkhdTable], outputs=_pred_dist)
##====initialization methods
def init_number_of_objects_range(self, numOverMin=1):
##the smallest number of objects has to be max response to any one probe
smallestPossibleNumberOfObjects = np.max(self.curResponses).astype(intX)
self.numberOfObjectsRange = np.arange(smallestPossibleNumberOfObjects, smallestPossibleNumberOfObjects+numOverMin+1)
##----there's also this complicated thing you could do...
# maxResponseIdx = np.argmax(self.curResponses)
# ##we'll cover a range from this minimum up to a number determined by probe sizes
# sizeOfMaxResponseWindow = np.sum(self.responses.windows[self.curIdx[maxResponseIdx], :])
# sizeOfLargestWindow = np.max(np.sum(self.responses.windows, axis=1))
# differenceInWindowSizes = sizeOfLargestWindow - sizeOfMaxResponseWindow
# largestNumberOfObjects = np.rint(np.min(smallestPossibleNumberOfObjects + hyperHyper*differenceInWindowSizes, self.hardcodedMaxNumberOfObjects)).astype(intX)
# self.number_of_objects_range = np.arange(smallestPossibleNumberOfObjects, largestNumberOfObjects)
def init_pixel_resolution_range(self, numOverMin = 1):
D1Prime,D2Prime = self.responses.D1Prime, self.responses.D2Prime
resolutions,_ = probes().resolve((D1Prime, D2Prime))
self.pixelResolutionRange = resolutions[:numOverMin]
# if not hasattr(self, 'pixel_resolution_range'):
# if listOfTuples is None:
# raise ValueError('you gotta initialize pixel resolution range by hand. run init_pixel_resoltuion and supply list of shape tuples')
# else:
# self.pixel_resolution_range = listOfTuples
# else:
# print 'pixel resoltuion range already intialized by hand'
##select initial values of noiseparams, discretize, access lkhdCube
def init_noiseParams(self, pOnInit, pOffInit,noiseParamNumber):
'''
init_noiseParams(pOnInit, pOffInit,noiseParamNumber)
create a grid of candidate noise params according noiseParamNumber (i.e., something like square root of number
of candidates we'll consider)
take initial guess at noise param and return index of nearest candidate in the grid
'''
##will need the likelihood cube at whatever resolution we're using to infer noise params, so make here
self.noiseParamGrid = np.array(self.responses.noise.enumerate_param_grid(noiseParamNumber),dtype=floatX).T
##for starters, find index of closest noise param values to pOnInit, pOffInit, and then slice the lkhdCube
noiseParamIdx = np.argmin(map(np.linalg.norm, self.noiseParamGrid-np.array([pOnInit,pOffInit]).T))
return noiseParamIdx
def init_Eln_pi(self, qZ):
dirichletParameter = self.responses.Z.categoryPrior.priorDispersion.dispersion
Eln_pi, qPi = self.iQPi._update_func(qZ.sum(axis=1, keepdims=True), dirichletParameter)
return Eln_pi, qPi
def init_qZ(self, numStarterMaps, empiricalLkhdTableStar, dirichletParameter):
'''
        qZ, startZ, starterLogs = init_qZ(numStarterMaps, empiricalLkhdTableStar, dirichletParameter)
randomly generates a stack of potential latent object maps.
scores each Z according to p(responses | Z)
selects Z with highest score = Z*
constructs an initial qZ by sampling from dirichlet(Z[:,d]*+dirichletParameter) for each pixel d.
inputs:
numStartermaps ~ number of randomly generated Z's
empiricalLkhdTable ~ N x K matrix of likelihoods p(response | count, pOn, pOff).
noise params are implicit and assumed to be best guess at start of running vi.
N can be either Ntrn, Ntest, or Nreg
dirichletParameter ~ slop that you add to best Z to make a qZ
outputs:
qZ ~ K x D variational posterior
startZ ~ the best object map from the randomly generated stack
            starterLogs ~ the log-likelihood scores of the randomly generated candidate maps
'''
##for generating stacks of one-hot-encoded object maps. these are "special" smooth maps
def make_object_map_stack(K, num_rows, num_cols, image_dimensions,num_maps):
size_of_field = int(np.mean(image_dimensions))
D = np.prod(image_dimensions)
object_map_base = spm(num_rows,num_cols,size_of_field,cluster_pref = 'random',number_of_clusters = K)
object_maps = np.zeros((num_maps, K, D),dtype=intX)
for nm in range(num_maps):
object_map_base.scatter()
tmp = np.squeeze(object_map_base.nn_interpolation())
tmp = imresize(tmp, image_dimensions, interp='nearest')
##convert to one_hot encoding
tmp = np.eye(K)[tmp.ravel()-1].T ##K x D
object_maps[nm] = tmp
return object_maps
##generate candidate maps for starting iQ_Z
K = self.responses.Z.categoryPrior.numObjects.K
D1 = self.responses.Z.numPixels.D1
D2 = self.responses.Z.numPixels.D2
D = D1*D2
objectMapStack = make_object_map_stack(K, 2*K, 2*K, (D1,D2), numStarterMaps)
##get table of log likelihoods for observed data
logEmpiricalLkhdTable = np.log(empiricalLkhdTableStar)
##get object_counts ( M x N)
objectCounts = self.responses.compute_feature(objectMapStack,winIdx=self.curIdx)
##get one-hot encoding of object counts (M x N x K)
##Use functionalized theano version
##we are using these counts as indices, so we have to subtract 1
oneHotObjectCounts = to_one_hot_func(objectCounts.astype(intX).flatten()-1, K).reshape((objectCounts.shape[0],objectCounts.shape[1],K))
##evaluate log_likelihoods
## one_hot_object_counts = M x N x K
## lkhd_table = N x K tensordot
## observedLogLkhds = M x 1
starterLogs = np.tensordot(oneHotObjectCounts, logEmpiricalLkhdTable, axes=2)
bestStartMap = np.argmax(starterLogs)
startZ = objectMapStack[bestStartMap] ## K x D
qZ = np.zeros((K,D), dtype=floatX)
for d in range(D):
qZ[:,d] = np.random.dirichlet(startZ[:,d]+dirichletParameter)
return qZ, startZ, starterLogs
##==========update and optimization methods=============
def optimize_hyper_parameters(self):
bestPercentCorrect = 0
bestK = np.inf
bestD = np.inf
for model in self.storedModels.values():
if model.bestPercentCorrect > bestPercentCorrect:
bestPercentCorrect = model.bestPercentCorrect
bestModel = model
bestK = model.responses.Z.categoryPrior.numObjects.K
bestD = model.responses.Z.numPixels.D
elif model.bestPercentCorrect == bestPercentCorrect:
if (model.responses.Z.categoryPrior.numObjects.K < bestK) or (model.responses.Z.numPixels.D < bestD):
bestPercentCorrect = model.bestPercentCorrect
bestModel = model
bestK = model.responses.Z.categoryPrior.numObjects.K
bestD = model.responses.Z.numPixels.D
return bestModel
def update_qZ(self, qZ, ExpLnPi, noiseParamStarIdx):
'''
coordinate ascent on variational posteriors of object map pixels
update_qZ(qZ, ExpLnPi, noiseParamStarIdx)
'''
##PStar is our term for the empirical lkhd cube sliced at noiseParamStarIdx. it is N x K
logPStar = np.log(self.curEmpiricalLkhdCube[noiseParamStarIdx])
K = self.responses.Z.categoryPrior.numObjects.K
D = self.responses.Z.numPixels.D
for d in range(D):
sampledZ = self.responses.Z.sample(M=self.numSamples, pZ=qZ)
oc_probs = np.zeros((K, self.curN, K),dtype=floatX)
v = np.zeros((K,1), dtype=floatX)
for k in range(K):
sampledZ[:,:,d] = 0. ##clear out object assignment for pixel d
sampledZ[:,k, d] = 1. ##assign pixel d to object k
oc_counts = self.responses.compute_feature(sampledZ,winIdx=self.curIdx)
oc_probs[k,:,:] = self.object_count_prob_func(oc_counts,K) ##calculate object count probs. given this assignment
v[k] = np.dot(sampledZ[0,:,d], ExpLnPi,) ##compare current assignment to prior over assignments
qZ[:, d] = self.iQZ._update_func(oc_probs, logPStar, v).squeeze() ##update variational posterior for pixel d
return qZ
def update_goodness_of_fit(self, qZ):
'''
update_goodness_of_fit(qZ)
return G x 1 array of measures of how well the variational posterior qZ matches the data
for each of the G possible values of the noise params.
'''
empiricalLkhdCube = self.curEmpiricalLkhdCube
K = self.responses.Z.categoryPrior.numObjects.K
sampledZ = self.responses.Z.sample(M=self.numSamples, pZ=qZ)
oc_counts = self.responses.compute_feature(sampledZ,winIdx=self.curIdx)
oc_probs = self.object_count_prob_func(oc_counts, K)
goodnessOfFit = self.goodness_of_fit_func(empiricalLkhdCube, oc_probs)
return goodnessOfFit
def optimize_PStar(self,qZ):
'''
optimize_PStar(qZ)
PStar is what we call the empirical lkhd cube sliced at the current best noise params.
        So, it is our current guess at the empirical lkhd table.
We find the best noise param (by measuring goodness of fit)
'''
goodnessOfFit = self.update_goodness_of_fit(qZ)
goodnessOfFitStar, noiseParamStarIdx = self.oNP.update_noiseParams(goodnessOfFit)
##N x K
PStar = self.curEmpiricalLkhdCube[noiseParamStarIdx]
#1 x 2 best noiseparams
noiseParamStar = self.noiseParamGrid[noiseParamStarIdx,:]
return noiseParamStar, noiseParamStarIdx, PStar, goodnessOfFitStar
def update_qPi(self, qZ):
##update prior params
dirchletParam=self.responses.Z.categoryPrior.priorDispersion.dispersion
ExpLnPi, qPi = self.iQPi._update_func(qZ.sum(axis=1, keepdims=True),dirchletParam)
return ExpLnPi, qPi
##===criticism===
def update_ELBO(self, qZ, goodnessOfFitStar):
goodnessOfFitStar, posterior_entropy, ELBO = self.ELBO_update_func(qZ,np.asscalar(goodnessOfFitStar))
return goodnessOfFitStar, posterior_entropy, ELBO
def update_log_predictive_distribution(self, qZ, noiseParamStarIdx):
'''
update_log_predictive_distribution(qZ, noiseParamStarIdx)
inputs:
qZ ~ K x D
noiseParamStarIDx ~ int
outputs:
predictiveDistribution ~ N x K+1 distribution over the K+1 possible response to each of the N windows
empiricalLogPredictiveDistribution ~ N x 1, this is log of predictiveDistribution sliced at empirical responses
'''
##create object count probabilities
K = self.responses.Z.categoryPrior.numObjects.K
sampledZ = self.responses.Z.sample(M=self.numSamples, pZ=qZ)
oc_counts = self.responses.compute_feature(sampledZ,winIdx=self.curIdx)
oc_probs = self.object_count_prob_func(oc_counts, K)
##N x K+1, a distribution over K+1 responses to each window
predictiveDistribution = self.predictive_distribution_update_func(oc_probs,self.lkhdCube[noiseParamStarIdx])
##slice the predictive distribution at the actual responses. if response exceeds capacity of model, prob. is 0
##note that responses can be used as indices without change (unlike counts)
responsesAsIndices = self.curResponses.astype(intX)
smallEnough = map(lambda x: x <= K, responsesAsIndices)
empiricalPredictiveDistribution = np.where(smallEnough, predictiveDistribution[range(self.curN), responsesAsIndices.clip(0,K)],0)
# empiricalPredictiveDistribution[smallEnough,:] = predictiveDistribution[smallEnough, responsesAsIndices[smallEnough]] ##(N x 1)
##N x 1
logEmpiricalPredictiveDistribution = np.log(empiricalPredictiveDistribution)
return predictiveDistribution, logEmpiricalPredictiveDistribution
def update_percent_correct(self, predictiveDistribution):
responses = self.curResponses.astype(intX)
predictions = np.argmax(predictiveDistribution, axis=1)
fraction_correct = np.sum(responses==predictions) / (self.curN*1.)
return fraction_correct*100, predictions
def criticize(self, qZ, noiseParamStarIdx, goodnessOfFitStar, t):
self.goodnessOfFitStar_history[t] = goodnessOfFitStar
_,self.posteriorEntropy_history[t], self.ELBO_history[t] = self.update_ELBO(qZ, goodnessOfFitStar)
##get predictive distribution
predictiveDistribution,logEmpiricalPredictiveDistribution = self.update_log_predictive_distribution(qZ, noiseParamStarIdx)
##sum to get probability of observed data under predictive distribution
self.lnPredictiveDistribution_history[t] = logEmpiricalPredictiveDistribution.mean()
##use predicitive distribution to calculate percent correct
self.percentCorrect_history[t],_ = self.update_percent_correct(predictiveDistribution)
if self.percentCorrect_history[t] > self.bestPercentCorrect:
print '!new best!'
self.bestQZ = qZ.copy()
self.bestNoiseParam = self.noiseParamGrid[noiseParamStarIdx,:].copy()
self.bestPercentCorrect = self.percentCorrect_history[t]
self.bestlnPredictiveDistribution = self.lnPredictiveDistribution_history[t]
print 'ELBO: %f' %(self.ELBO_history[t])
print 'goodness of fit: %f' %(goodnessOfFitStar)
print 'posterior_entropy: %f' %(self.posteriorEntropy_history[t])
print 'mean log of predictive distribution over test samples: %f' %(self.lnPredictiveDistribution_history[t])
print 'pecent correct over test samples: %f' %(self.percentCorrect_history[t])
print '\n'
return
    ##======utilities and bookkeeping
def rng_seed(self):
if not hasattr(self, 'randNumberSeed'):
self.randNumberSeed = np.random.randint(0,high=10**4)
np.random.seed(self.randNumberSeed)
def train_test_regularize_splits(self, trainTestSplit, trainRegSplit):
N = self.responses.observations.shape[0]
shuffledIdx = np.random.permutation(N)
lastTestIdx = np.floor((1-trainTestSplit)*N)
lastRegIdx = lastTestIdx+np.floor((1-trainRegSplit)*(N-lastTestIdx))
testIdx = shuffledIdx[:lastTestIdx]
regIdx = shuffledIdx[lastTestIdx:lastRegIdx]
trainIdx = shuffledIdx[lastRegIdx:]
return trainIdx, testIdx, regIdx
def empirical_lkhd_cube(self, idx=None):
if idx is None:
respAsIdx = self.responses.observations.astype(intX)
else:
respAsIdx = self.responses.observations[idx].astype(intX)
return np.squeeze(self.lkhdCube[:,respAsIdx,:])
def update_current(self, idx):
self.curN = len(idx)
self.curIdx = idx
self.curIdx = self.curIdx
self.curResponses = self.responses.observations[self.curIdx]
try: ##because the computation graph is inelegant
self.curEmpiricalLkhdCube = self.empirical_lkhd_cube(idx=self.curIdx)
except:
pass
def store_learned_model(self):
if not hasattr(self, 'storedModels'):
self.storedModels = {0:copy.deepcopy(self)}
else:
newModelKey = np.max(self.storedModels.keys())+1
self.storedModels[newModelKey] = copy.deepcopy(self)
del self.storedModels[newModelKey].storedModels
##visualize the variational posterior over pixels
def see_Q_Z(self, qZ, target_object_map = None, clim=[0,1]):
##view: construct an image grid
fig = plt.figure(1, (30,10))
K = qZ.shape[0]
D1 = self.responses.Z.numPixels.D1
D2 = self.responses.Z.numPixels.D2
##
qZAsImageStack = qZ.reshape(K,D1,D2)
if target_object_map is not None:
K += 1
grid = ImageGrid(fig, 111, # similar to subplot(111)
nrows_ncols = (1, K), # creates grid of axes
axes_pad=0.5, # pad between axes in inch.
cbar_mode = 'each',
cbar_pad = .05
)
if target_object_map is not None:
im = grid[0].imshow(target_object_map,cmap='Dark2', interpolation = 'nearest')
grid[0].cax.colorbar(im)
for kk in range(1,K):
im = grid[kk].imshow(qZAsImageStack[kk-1], cmap='hot', clim=clim)
grid[kk].cax.colorbar(im)
else:
for kk in range(0,K):
im = grid[kk].imshow(qZAsImageStack[kk], cmap='hot', clim=clim,interpolation = 'nearest')
grid[kk].cax.colorbar(im)
##=============RUN the actual variational inference algorithm========
##Note: the only arguments passed to the various methods should be those that are updated every iteration.
def run_VI(self, initialNoisinessOfZ, pOnInit, pOffInit, noiseParamNumber, numStarterMaps, numSamples, maxIterations, trainTestSplit, trainRegSplit, optimizeHyperParams=True, pixelNumOverMin=3, objectNumOverMin=3):
##reset the rng seed
self.rng_seed()
##divide the training, testing, regularization datasets
trainIdx, testIdx, regIdx = self.train_test_regularize_splits(trainTestSplit=trainTestSplit, trainRegSplit=trainRegSplit)
##set empirical likelihoods, and responses to training set
self.update_current(trainIdx)
if optimizeHyperParams:
##collect the arguments for use in recursive call below
allArgs = [initialNoisinessOfZ, pOnInit, pOffInit, noiseParamNumber, numStarterMaps, numSamples, maxIterations, trainTestSplit, trainRegSplit]
##initialize the ranges of hyperparameters
self.init_number_of_objects_range(numOverMin=objectNumOverMin)
self.init_pixel_resolution_range(numOverMin=pixelNumOverMin)
##run vi in a loop over hyperparameters
for k in self.numberOfObjectsRange:
self.responses.Z.categoryPrior.numObjects.set_value(k)
print '--------number of objects: %d------------' %(self.responses.Z.categoryPrior.numObjects.K)
for d1,d2 in self.pixelResolutionRange:
self.responses.Z.numPixels.set_value(d1,d2)
print '--------image resolution: (%d,%d,%d)------------' %(self.responses.Z.numPixels.D1,self.responses.Z.numPixels.D2,self.responses.Z.numPixels.D)
##recursively call run_VI, each time with a fixed set of hyperparams, avoiding the hyperparameter loop
numIters=self.run_VI(*allArgs, optimizeHyperParams=False)
self.numIters = numIters
self.store_learned_model()
bestModel = self.optimize_hyper_parameters()
bestModel.trainIdx = trainIdx
bestModel.regIdx = regIdx
bestModel.testIdx = testIdx
return bestModel
else:
##--construct the lkhdCube and slice it at the discrete value of the noise params closest to the supplied init values
noiseParamStarIdx = self.init_noiseParams(pOnInit, pOffInit, noiseParamNumber)
##store numSamples for consultation in a few functions
self.numSamples = numSamples
##--construct lkhd cube
self.lkhdCube = self.responses.make_lkhd_cube(self.noiseParamGrid[:,0],self.noiseParamGrid[:,1]) ##G x K x K
##set current empirical likelihoods, and responses to training set (have to re-run because of lkhdCube)
self.update_current(trainIdx)
##initialize variational posterior
qZ, self.startZ, self.starterLogs = self.init_qZ(numStarterMaps, self.curEmpiricalLkhdCube[noiseParamStarIdx], initialNoisinessOfZ)
##initialize variational prior
ExpLnPi,_ = self.init_Eln_pi(qZ)
##get initial goodnessOfFitStar
goodnessOfFitStar = self.update_goodness_of_fit(qZ)[noiseParamStarIdx]
##set initial PStar, which is our term for an empirical lkhd table evaluated at the noiseParamStarIdx
##PStar ~ N x K
PStar = self.curEmpiricalLkhdCube[noiseParamStarIdx]
##--initialize arrays for learning histories and storing solutions
iteration = 0
self.ELBO_history = np.zeros((maxIterations+1,1))
self.goodnessOfFitStar_history = np.zeros((maxIterations+1,1))
self.posteriorEntropy_history = np.zeros((maxIterations+1,1))
self.percentCorrect_history = np.zeros((maxIterations+1,1))
self.lnPredictiveDistribution_history = np.zeros((maxIterations+1,1))
self.bestPercentCorrect = 0
delta_ELBO = np.inf
min_delta_ELBO = 10e-15
ELBO_old = 0.
##--publish initial criticism. use regularization data
self.update_current(regIdx)
self.criticize(qZ, noiseParamStarIdx, goodnessOfFitStar, iteration)
##switch back to training data
self.update_current(trainIdx)
iteration += 1
while (delta_ELBO > min_delta_ELBO) and (iteration <= maxIterations):
##put lkhd in log domain
lnPStar = np.log(PStar).astype(floatX)
##coordinate ascent on variational posteriors of object map pixels
qZ = self.update_qZ(qZ, ExpLnPi, noiseParamStarIdx)
##update noise params: calculates goodness of fit for all possible noise params, finds best,
##returns it, returns best noiseParam, index of best noiseParam, and updates PStar
noiseParamStar, noiseParamStarIdx, PStar, goodnessOfFitStar = self.optimize_PStar(qZ)
##update prior params
ExpLnPi, _ = self.update_qPi(qZ)
##criticize using regularization data
self.update_current(regIdx)
self.criticize(qZ, noiseParamStarIdx, goodnessOfFitStar, iteration)
##switch back to training data
self.update_current(trainIdx)
##update ELBO convergence criteria
delta_ELBO = np.abs(self.ELBO_history[iteration]-ELBO_old)
ELBO_old = self.ELBO_history[iteration]
iteration += 1
return iteration
<jupyter_output><empty_output><jupyter_text>#### Big dumb function for importing data<jupyter_code>def open_imagery_probe_data(*args):
'''
open_imagery_probe_data() returns a pandas dataframe with lots of info
or
open_imagery_probe_data(subject, state, targetImage) accesses the dataframe and gets the stuff you want
'''
##which repo?
drive = '/mnt/fast/'
##base directory
base = 'multi_poly_probes'
##pandas dataframe with all the experimental conditions and data
data_place = 'data'
data_file = 'multi_poly_probe_data_3_subjects.pkl'
##open experimental data: this is a pandas dataframe
experiment = pd.read_pickle(join(drive, base, data_place, data_file))
if not args:
return experiment
else:
subject = args[0]
state = args[1]
targetImageName = args[2]
##target images
image_place = 'target_images'
mask_place = 'masks'
target_image_file = targetImageName+'_letterbox.png'
mask_image_file = targetImageName+'_mask.tif'
##window files
window_place = 'probes'
window_file = targetImageName+'_letterbox_img__probe_dict.pkl'
##open target image/object map: useful as a guide
targetImage = open(join(drive, base, image_place, target_image_file),mode='r').convert('L')
##open
targetObjectMap = open(join(drive, base, mask_place, mask_image_file),mode='r').convert('L')
##get the responses you want
##responses of select subject / target image / cognitive state
resp = experiment[(experiment['image']==targetImageName) * (experiment['subj']==subject) * (experiment['state']==state)].response.values
##run a nan check--who knew there would be nans in this data?
nanIdx = ~np.isnan(resp)
resp = resp[nanIdx]
if any(~nanIdx):
print 'killed some nans'
##give it dimensions
resp = resp[np.newaxis,:]
##corresponding window indices
windowIdx = experiment[(experiment['image']==targetImageName) * (experiment['subj']==subject) * (experiment['state']==state)].probe.values
##open the windows. creates a dictionary with "index/mask" keys. this usually takes a while
windows = pd.read_pickle(join(drive, base, window_place, window_file)) ##N x D1Prime x D2Prime
N = len(windows['index'])
window_shape = windows['mask'][0].shape
W = np.zeros((N,window_shape[0],window_shape[1]),dtype=floatX)
##correctly order and reformat
for ii,w in enumerate(windowIdx):
str_dx = map(int, w.split('_'))
dx = windows['index'].index(str_dx)
W[ii] = windows['mask'][dx].clip(0,1)
##run another nan check
W = W[nanIdx]
return W, resp, experiment, targetObjectMap,targetImage
pdb on<jupyter_output>Automatic pdb calling has been turned ON
<jupyter_text>## Loop over subjects, conditions, images, running VI and saving<jupyter_code>resultsDict = dict()
experiment_df = open_imagery_probe_data()
for subject, subject_group in experiment_df.groupby('subj'):
resultsDict[subject] = dict()
for state, state_group in subject_group.groupby('state'):
resultsDict[subject][state] = dict()
for targetImageName, target_group in state_group.groupby('image'):
##get data
windows, resp, _, targetObjectMap,targetImage = open_imagery_probe_data(subject, state, targetImageName)
##==========construct model random variables
##number of objects
nObj = numObjects()
##dispersion on category prior: here we set a hyperprior
pDisp = priorDispersion()
dispersion = 1.0
pDisp.set_value(dispersion)
##resolution of object map Z
nPixels = numPixels()
##category prior and object map
catProb = categoryProbs(nObj,pDisp)
Z = latentObjMap(catProb,nPixels)
##noise params
nP = noiseParams()
##windows: we change their shape a little to make them easier to work with
desiredWindowShape = (375,600)
workingScale = 5
w = probes()
resolutions, workingResolution = w.resolve(desiredWindowShape, workingScale)
w.set_value(w.reshape(windows, workingResolution),flatten=True)
print 'working resolution is (%d, %d)' %(workingResolution[0], workingResolution[1])
##response object
r = responses(Z,nP)
##fake data
r.set_values(windows=w)
r.set_values(data=resp)
print 'total observations: %d' %(r.N)
tMap = target_image()
targetObjectMap_test, targetImage_test = tMap.reshape(targetObjectMap,workingResolution,targetImage=targetImage)
tMap.set_values(targetObjectMap_test, targetImage_test)
##=============
##instantiate the variational inferences we want to perform
iqZ = inferQZ()
iqPi = inferQPi()
##...and the parameter optimizations (point-estimate) we want
oNP = optimizeNoiseParams()
##variational inference combines them all together
vi = VI(r, iqZ,oNP, iqPi)
### Run variational inference
##inference algorithm parameters
initialNoisinessOfZ = 0.2
pOn_init, pOff_init = .8, 0.2
densityOfNoiseParamGrid = 50
numStarterMaps = 20
numSamplesForComputingObjectCountProbs = 4
maxNumIterations = 50
trainTestSplit = 1.0
trainRegSplit = .8
pixelNumOverMin = 2
objectNumOverMin = 2
print '=========================(subject, state, target) = (%s, %s, %s) ====' %(subject, state, targetImageName)
bestModel = vi.run_VI(initialNoisinessOfZ, \
pOn_init, pOff_init, \
densityOfNoiseParamGrid, \
numStarterMaps, \
numSamplesForComputingObjectCountProbs, \
maxNumIterations, \
trainTestSplit, trainRegSplit, \
optimizeHyperParams=True, pixelNumOverMin=pixelNumOverMin, objectNumOverMin=objectNumOverMin)
resultsDict[subject][state][targetImageName]= copy.deepcopy(bestModel)
resultsDict
store = pd.io.pytables.HDFStore('/mnt/NAS/tnaselar/imagery_probe/analysis_results/all_subjects.h5')
store['df'] = pd.DataFrame(resultsDict) # save it
resultsDf = store['df']
resultsDf<jupyter_output><empty_output><jupyter_text>#### A format for easier analysis<jupyter_code>modelList = []
for sbj,subjectDict in resultsDict.iteritems():
for st, stateDict in subjectDict.iteritems():
for targ, modelInstance in stateDict.iteritems():
modelList.append([sbj,st,targ, modelInstance])
modelDf = pd.DataFrame(data=modelList, columns=['subject', 'state', 'target', 'model'])
modelDf
store['easy_format'] = modelDf
##btw, this is how you get nested attributes
def rgetattribute(obj, listOfAttributes):
if type(listOfAttributes) is not list:
listOfAttributes = [listOfAttributes]
try:
return getattr(obj, listOfAttributes[-1])
except:
return rgetattribute(getattr(obj, listOfAttributes.pop(0)), listOfAttributes)
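# Added usage sketch (not in the original): pull a nested attribute off the first stored
# model; the attribute chain mirrors the one used in the cells below, and the variable
# name is purely illustrative.
# exampleNumPixels = rgetattribute(modelDf['model'].iloc[0], ['responses', 'Z', 'numPixels', 'D'])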
##grab the modeling result you want
def get_model_attribute(attributeString,df,shapeOfAttribute=None):
'''
get_model_attribute(attributeString,df,shapeOfAttribute=None)
attributeString can be a list of attributes if the desired attribute is nested.
'''
n = len(df)
if shapeOfAttribute is not None:
dfshape = [n]+list(shapeOfAttribute)
newArray = np.full(dfshape,0)
else:
newArray = [0]*n
for idx,row in df.iterrows():
copyAttributeString = copy.deepcopy(attributeString)
newArray[idx] = rgetattribute(df['model'].iloc[idx],copyAttributeString)
return newArray
##...if you do something with it that results in nice plottable scalar, add it to a new df
def make_new_df(newColName,newColData,df):
try:
newDf = df.drop('model', axis=1)
except:
newDf = df
newDf[newColName] = newColData
return newDf
<jupyter_output><empty_output><jupyter_text>#### Here's an example of how awesome python is.
This puts the "update_qPi" methods for each bestModel in a list, as well as the corresponding qZ.
Passes qZ as argument to each method. Puts all outputs in a list.<jupyter_code>allQZ = get_model_attribute('bestQZ', modelDf)
flarg = get_model_attribute('update_qPi', modelDf)
qPiList = []
for qZ,update_func in zip(allQZ,flarg):
_,qPi = update_func(qZ)
qPiList.append(qPi)
qPiList<jupyter_output><empty_output><jupyter_text>#### Study metrics<jupyter_code>metricsDf = make_new_df('percentCorrect',get_model_attribute('bestPercentCorrect', modelDf,shapeOfAttribute=(1,)),modelDf)
metricsDf = make_new_df('subject+target', metricsDf['subject']+metricsDf['target'], metricsDf)
metricsDf
import seaborn as sns
sns.pointplot(x='state',y='percentCorrect',hue='target', data=metricsDf)
sns.pointplot(x='state',y='percentCorrect',hue='subject+target', data=metricsDf, )
plt.legend(loc='lower left')<jupyter_output><empty_output><jupyter_text>#### Entropy<jupyter_code>entropy = get_model_attribute('posteriorEntropy_history',modelDf)
numIters = get_model_attribute('numIters', modelDf,shapeOfAttribute=(1,))
finalEntropy = np.full(len(numIters), 0.0)  # fill with a float so entropy values are not truncated
ii = 0
for ent,n in zip(entropy,numIters.squeeze()-1):
finalEntropy[ii] = ent[n]
ii += 1
metricsDf = make_new_df('entropy', finalEntropy,metricsDf)
metricsDf
sns.pointplot(x='state',y='entropy',hue='subject', data=metricsDf)
sns.pointplot(x='state',y='entropy', hue='subject+target', data=metricsDf)
plt.legend(loc='lower left')
sns.pointplot(x='state',y='entropy',hue='target', data=metricsDf)
img = metricsDf[metricsDf['state']=='img'].entropy.values
pcp = metricsDf[metricsDf['state']=='pcp'].entropy.values
dx = np.argsort(pcp)
plt.plot(pcp[dx],img[dx],'.')
plt.plot(np.linspace(0,150),np.linspace(0,150),)
plt.axis('equal')<jupyter_output><empty_output><jupyter_text>#### Noise parameters<jupyter_code>noiseParams = get_model_attribute('bestNoiseParam', modelDf,shapeOfAttribute=(2,))
metricsDf = make_new_df('forgetting', 1-noiseParams[:,0], metricsDf)
metricsDf = make_new_df('halucinating', noiseParams[:,1], metricsDf)
metricsDf
sns.pointplot(x='state',y='forgetting',hue='subject', data=metricsDf)
sns.pointplot(x='state',y='forgetting',hue='target', data=metricsDf)
sns.pointplot(x='state',y='forgetting',hue='subject+target', data=metricsDf)
plt.legend(loc='lower left')
sns.pointplot(x='state',y='halucinating',hue='subject+target', data=metricsDf)
plt.legend(loc='lower left')<jupyter_output><empty_output><jupyter_text>#### Resolution and capacity<jupyter_code>loa = ['responses', 'Z', 'numPixels', 'D']
numpixels = get_model_attribute(loa, modelDf,shapeOfAttribute=(1,))
metricsDf = make_new_df('numpixels', numpixels, metricsDf)
loa = ['responses', 'Z', 'categoryPrior', 'numObjects', 'K']
numobjects = get_model_attribute(loa, modelDf,shapeOfAttribute=(1,))
metricsDf = make_new_df('numobjects', numobjects, metricsDf)
metricsDf
sns.pointplot(x='state',y='numpixels',hue='target', data=metricsDf)
plt.legend(loc='lower left')
sns.pointplot(x='state',y='numobjects',hue='subject+target', data=metricsDf)
plt.legend(loc='lower left')<jupyter_output><empty_output>
|
no_license
|
/analysis/variational_approach/.ipynb_checkpoints/analysis_001-checkpoint.ipynb
|
tnaselar/imagery_psychophysics
| 18 |
<jupyter_start><jupyter_text><jupyter_code># git test
2 + 2
<jupyter_output><empty_output>
|
no_license
|
/git_test.ipynb
|
noallynoclan/colab
| 1 |
<jupyter_start><jupyter_text># Lagrange Multipliers
## Example:
Consider the function $f(x,y)=(x-1)^2+(y-2)^2$.
Among the points $(x,y)$ on the circle $x^2+y^2=45$, we look for the one(s) where $f$ attains its minimum/maximum.<jupyter_code>
var('x,y,u,v')
f(x,y)=(x-1)^2+(y-2)^2
r=sqrt(45)
cm = colormaps.Blues # There are several names predefined
def c(x,y):
return float((f(x,y))/170)
S=plot3d(f(u,v),(u,-r-1,r+1),(v,-r-1,r+1), color=(c,cm))
C = parametric_plot3d([r*cos(v),r*sin(v),0],(v,0,2*pi), color='red', thickness=2)
CS =parametric_plot3d([r*cos(v),r*sin(v),f(r*cos(v), r*sin(v))],(v,0,2*pi), color='green', thickness=3)
show(S+C+CS, aspect_ratio=[5,5,1]);
df=f.gradient()<jupyter_output><empty_output><jupyter_text>Let's look at the problem in terms of level curves and the gradient field of $f$. Recall that $\nabla f(x_0,y_0)$ gives the direction in which to move away from $(x_0,y_0)$ to obtain the strongest increase in the value of $f$.
Below are the level curves of $f$, together with the field $\nabla f$ and the curve $x^2 + y^2 = 45$ (in red).<jupyter_code>Champf=plot_vector_field(df, (x,-r,r), (y,-r,r), color='darkblue',aspect_ratio=1)
Contourf=contour_plot(f(u,v),(u,-r,r),(v,-r,r),cmap='Blues',labels=True, linestyles='solid',aspect_ratio=1);
Cercle = parametric_plot([r*cos(u),r*sin(u)],(u,0,2*pi), color='red', thickness=2,aspect_ratio=1)
show(Contourf+Cercle+Champf);<jupyter_output><empty_output><jupyter_text>The constraint curve is given by $g(x,y)=45$, where $g(x,y) = x^2 + y^2$ is the constraint function. It is therefore a level curve of the constraint function $g$. This function also has a gradient field, $\nabla g$. Let's draw the vectors of this field, but only those corresponding to points on the curve.<jupyter_code>Pts=[(r*cos(2*t*pi/20),r*sin(2*t*pi/20)) for t in range(20)];
Pts1=[(1.1*r*cos(2*t*pi/20),1.1*r*sin(2*t*pi/20)) for t in range(20)];
Vecs=[arrow(Pts[j],Pts1[j],color='red', width=1, arrowsize = 2) for j in range(20)];
Champg=sum(Vecs)
show(Contourf+Cercle+Champf+Champg,aspect_ratio=1, figsize = 8);<jupyter_output><empty_output><jupyter_text>**Conjecture:** The maximum/minimum points of $f$ on the curve $g=45$ are precisely those at which the gradient vectors $\nabla f$ and $\nabla g$ are collinear. In other words, there must exist $\lambda\in \mathbb{R}$ such that $$\nabla f = \lambda \nabla g$$
Let's ask SAGE to solve the equations we need. The function $f$ has already been defined, but not $g$, and we also need a new variable $L$ that will play the role of $\lambda$.<jupyter_code>g(x,y)=x^2+y^2
dg=g.gradient()
var('L')
df(x,y)
dg(x,y)
solve([df[0](x,y)==L*dg[0](x,y), df[1](x,y)==L*dg[1](x,y), g(x,y)==45],[x,y,L])<jupyter_output><empty_output>
|
no_license
|
/Lagrange.ipynb
|
JCBustamante/TestSageBinder
| 4 |
<jupyter_start><jupyter_text>## Practice Time
There are many ways to manipulate data. In the upcoming marathon we will introduce the most commonly used operations. Before that, participants may want to imagine: when seeing a dataset for the first time, what information do we usually want to know?
#### Ex: how to find the number of rows and columns, what columns there are, how many columns there are, how to slice out part of the data, and so on
Once we are curious about the data, how do we achieve these goals through code?
#### You can refer to this [introductory tutorial](https://bookdata.readthedocs.io/en/latest/base/01_pandas.html#DataFrame-%E5%85%A5%E9%97%A8) or google it yourself<jupyter_code>import os
import numpy as np
import pandas as pd
# set data_path
dir_data = './data/'
f_app = os.path.join(dir_data, 'application_train.csv')
print('Path of read in data: %s' % (f_app))
app_train = pd.read_csv(f_app)<jupyter_output>Path of read in data: ./data/application_train.csv
<jupyter_text>### If you have no ideas yet, start by trying to answer the questions mentioned in the example above
#### Number of rows and columns in the data<jupyter_code>app_train.shape<jupyter_output><empty_output><jupyter_text>#### List all columns<jupyter_code>app_train.columns<jupyter_output><empty_output><jupyter_text>#### Slice out part of the data<jupyter_code>app_train.head(6)<jupyter_output><empty_output>
|
no_license
|
/homework/Day_002_HW.ipynb
|
abcdefghi802002/ML100-DAYS
| 4 |
<jupyter_start><jupyter_text># Linear Regression
In this notebook, we'll take a look at one of the simplest and most useful algorithms in Machine Learning: *linear regression*. Let's look at the name:
* `Linear` - the algorithm assumes that the relationship between the *independent* variables (predictors, features) and the *dependent* variable (response, target) is linear in nature. This means that the formula for the relationship is of the form $$ y = \alpha + \beta_1 \cdot x_1 + \beta_2 \cdot x_2 + \ldots + \beta_n \cdot x_n + \epsilon$$
where $\alpha$ is the *intercept*, or where the line meets the $y$-axis (i.e., $x = 0$) and $\beta_i$ is the *coefficient* for feature $i$. $\epsilon$ is the *noise* - we assume that our data is not perfect and there is some random variation we cannot - and should not - model. In other words, the relationship is a *sum* of *products* of constants and a variable. In fact, if we define a feature $x_0$ with a constant value of 1 and write $\alpha$ as $\beta_0$ we can write $$ y = 1 \cdot \alpha + \sum_{i=1}^{n} \beta_i \cdot x_i + \epsilon = \sum_{i=0}^{n} \beta_i \cdot x_i + \epsilon $$
* `Regression` - the algorithm looks for the *best fit* line: the line that minimizes the *Mean Squared Error* (MSE) between the points and the prediction. Formally: assuming that $\hat{y}$ is the predicted value for the input $x$, the algorithm tries to minimize the average of $ (\hat{y} - y)^2 $ over all data points.
Linear regression has some assumptions -
1. The relationship between $x$s and $y$ is linear (obviously)
2. The values are independent - i.e., having seen the value $a$ does not affect the chances of seeing the value $b$
3. The variance of the error is constant across all $x$'s
4. Errors are normally distributed
Let's see an example.
## Example
### The Boston Housing Dataset<jupyter_code>%pylab inline
import pandas as pd
import seaborn as sns
from sklearn.datasets import load_boston
rcParams['figure.figsize'] = (20,7)
boston = load_boston()
# Create dataframe with features
df = pd.DataFrame(boston['data'], columns=boston['feature_names'])
# Add target
df['MEDV'] = boston['target']
print(df.head())
print(boston['DESCR'])<jupyter_output> CRIM ZN INDUS CHAS NOX RM AGE DIS RAD TAX \
0 0.00632 18.0 2.31 0.0 0.538 6.575 65.2 4.0900 1.0 296.0
1 0.02731 0.0 7.07 0.0 0.469 6.421 78.9 4.9671 2.0 242.0
2 0.02729 0.0 7.07 0.0 0.469 7.185 61.1 4.9671 2.0 242.0
3 0.03237 0.0 2.18 0.0 0.458 6.998 45.8 6.0622 3.0 222.0
4 0.06905 0.0 2.18 0.0 0.458 7.147 54.2 6.0622 3.0 222.0
PTRATIO B LSTAT MEDV
0 15.3 396.90 4.98 24.0
1 17.8 396.90 9.14 21.6
2 17.8 392.83 4.03 34.7
3 18.7 394.63 2.94 33.4
4 18.7 396.90 5.33 36.2
.. _boston_dataset:
Boston house prices dataset
---------------------------
**Data Set Characteristics:**
:Number of Instances: 506
:Number of Attributes: 13 numeric/categorical predictive. Median Value (attribute 14) is usually the target.
:Attribute Information (in order):
- CRIM per capita crime rate by town
- ZN[...]<jupyter_text>For a new dataset, we usually begin by doing *exploratory data analysis* and getting to know the data as well as cleaning up the data. Here, since this is a well-known clean dataset we'll dispense with that.
Only thing we need to check is assumption 4 above - that the error is normally distributed. In practice this means that we expect to see something resembling a bell curve when we plot the target.<jupyter_code>sns.distplot(df['MEDV'])<jupyter_output><empty_output><jupyter_text>Not perfect, but it will do. Usually linear regression tends to work with 'normal-like' distributions.### Setting Up the ExperimentWe'll build a *predictive* model - one that learns parameters and which we can use for new data. Alternatively, we can build an *inferential* model, which can help us explain how the various features affect the target value.
In order to be able to measure the quality of our model we'll split the data into a *training* and *test* set: we'll learn on the training set and check error - MSE - on the test set. (Strictly speaking, we'll also need a *validation* set for comparing models with different features - but since we have little data we'll solve this another way.)<jupyter_code>from sklearn.model_selection import train_test_split
np.random.seed(54321)
X_train, X_test, y_train, y_test = train_test_split(df[df.columns[:-1]], df[df.columns[-1]])
print("Training set size: {} {}".format(X_train.shape, y_train.shape))
print("Test set size: {} {}".format(X_test.shape, y_test.shape))<jupyter_output>Training set size: (379, 13) (379,)
Test set size: (127, 13) (127,)
<jupyter_text>Let's try with a single predictor first - the average number of rooms. How does this feature predict the target?<jupyter_code># It's always good to get intuition with a plot!
scatter(X_train['RM'], y_train)
xlabel("Avg. # of Rooms")
ylabel("Median house price")<jupyter_output><empty_output><jupyter_text>We can see something that looks linear. Let's try fitting and drawing the resulting line.<jupyter_code>from sklearn.linear_model import LinearRegression
# Create the object
lr_rooms = LinearRegression()
# Fit X to y values
lr_rooms.fit(X_train[['RM']], y_train)
# Get model predictions
predictions = lr_rooms.predict(X_train[['RM']])
scatter(X_train['RM'], y_train)
plot(X_train['RM'], predictions)
xlabel("Avg. # of Rooms")
ylabel("Median house price")<jupyter_output><empty_output><jupyter_text>Our model looks reasonably good, however it tends to underestimate price at the lower end (3 - 5 rooms on average) and at the upper end (7.5 - 9 rooms on average). Let's quantify this by looking at the MSE. <jupyter_code>from sklearn.metrics import mean_squared_error
# Calculate the error between the true values and the predicted values
mse = mean_squared_error(y_train, predictions)
print("MSE: {}".format(mse))
print("We miss the target by {:.4}K on average".format(sqrt(mse)))<jupyter_output>MSE: 44.15002858977413
We miss the target by 6.645K on average
<jupyter_text>However, this MSE value is for the *training* set. We're interested in what happens with unseen data:<jupyter_code>predictions = lr_rooms.predict(X_test[['RM']])
mse = mean_squared_error(y_test, predictions)
print("MSE: {}".format(mse))
print("We miss the target by {:.4}K on average".format(sqrt(mse)))<jupyter_output>MSE: 42.5394398891114
We miss the target by 6.522K on average
<jupyter_text>### Trying other features<jupyter_code>print("FEATURE\tMSE")
print("-------\t---")
for feature in ['AGE','DIS','CRIM']:
lr = LinearRegression()
lr.fit(X_train[[feature]], y_train)
test_predictions = lr.predict(X_test[[feature]])
mse = mean_squared_error(y_test, test_predictions)
print("{}\t{:.5}".format(feature, mse))<jupyter_output>FEATURE MSE
------- ---
AGE 70.598
DIS 74.894
CRIM 64.037
<jupyter_text>So far we've been working with a single feature to predict the target. What happens when we use more than one feature?<jupyter_code># Two features
lr_room_age = LinearRegression()
lr_room_age.fit(X_train[['RM','AGE']], y_train)
lr_room_age_test_predictions = lr_room_age.predict(X_test[['RM','AGE']].values)
mse = mean_squared_error(y_test, lr_room_age_test_predictions)
print("MSE: {:.5}".format(mse))
# Three features
lr_room_age_dist = LinearRegression()
lr_room_age_dist.fit(X_train[['RM','AGE','DIS']], y_train)
lr_room_age_dist_test_predictions = lr_room_age_dist.predict(X_test[['RM','AGE','DIS']].values)
mse = mean_squared_error(y_test, lr_room_age_dist_test_predictions)
print("MSE: {:.5}".format(mse))<jupyter_output>MSE: 40.41
<jupyter_text>In going from 2 to 3 features we get a better MSE score. How can we tell how 'good' our fit is?
We use a statistical measure called $R^2$ (pronounced R-squared)<jupyter_code>from sklearn.metrics import r2_score
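# For reference, R^2 (the "coefficient of determination") is 1 - SS_res/SS_tot;
# a minimal hand-rolled version of the quantity r2_score computes:
def r2_by_hand(y_true, y_pred):
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    return 1 - ((y_true - y_pred) ** 2).sum() / ((y_true - y_true.mean()) ** 2).sum()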
print("R2 for two features : {:.5}".format(r2_score(lr_room_age_test_predictions, y_test)))
print("R2 for three features: {:.5}".format(r2_score(lr_room_age_dist_test_predictions, y_test)))<jupyter_output>R2 for two features : 0.28461
R2 for three features: 0.28235
<jupyter_text>As it turns out, the 3-feature model is actually not much better than a 2-feature model from a statistical point of view. In practice, it delivers a lower MSE. Which is better depends on the business problem.
Small note: adding more parameters to a model will always decrease its MSE. The reason is that the model is more complex and hence can accommodate the data better. However, we must beware of *overfitting*, where we treat the noise as part of the model.### Validating ResultsSince we don't have a validation dataset, we're using our test set to check which features increase $R^2$ and/or lower our MSE. But in doing so we might be overfitting to the specific test set. On the other hand, we don't have a lot of data, so we need everything we can get for reasonably-sized train and test sets. What to do?
The solution is called *k-fold cross validation*. We split our data into $k$ parts (or *folds*). We train on $k-1$ folds and test on the last remaining fold. We repeat this procedure $k$ times - in other words, each fold gets the chance to be a 'test set' of sorts. By doing this we ensure that we use all of the available data for both train and test.<jupyter_code>from sklearn.model_selection import cross_val_score
def score_fold(trained_lr,X,y):
predictions = trained_lr.predict(X)
return mean_squared_error(y, predictions)
# Use default 5-fold CV
cross_val_score(LinearRegression(), df[['RM']], df['MEDV'], scoring=score_fold)<jupyter_output><empty_output><jupyter_text>So a model with the single RM feature does not return similar MSE values for all folds. So back to the drawing board...
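For reference, this is roughly what `cross_val_score` just did under the hood, written out by hand with `KFold` (a sketch reusing the `df`, `LinearRegression` and `mean_squared_error` objects from above):
```python
from sklearn.model_selection import KFold

kf = KFold(n_splits=5)
for train_idx, test_idx in kf.split(df):
    lr = LinearRegression()
    lr.fit(df[['RM']].iloc[train_idx], df['MEDV'].iloc[train_idx])
    fold_preds = lr.predict(df[['RM']].iloc[test_idx])
    print(mean_squared_error(df['MEDV'].iloc[test_idx], fold_preds))
```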
Once we find a good model we train it with *all* the data and deploy.## Polynomial Regression
As we saw above, the form of our relationship is linear: $ \alpha + \beta_1 \cdot x_1 + \ldots + \beta_n \cdot x_n $. Although each $\beta$ is a constant, there is no limitation on the $x$'s themselves. So it's perfectly legal to have a *linear* relationship such as this: $$ \alpha + \beta_1 \cdot x + \beta_2 \cdot x^2 $$
This is known as *polynomial regression*.
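The cells below build the squared feature by hand; an equivalent route (shown only as an aside) is scikit-learn's `PolynomialFeatures`:
```python
from sklearn.preprocessing import PolynomialFeatures

poly = PolynomialFeatures(degree=2, include_bias=False)
X_poly = poly.fit_transform(X_train[['RM']])        # columns: RM, RM^2
lr_poly = LinearRegression().fit(X_poly, y_train)
```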
<jupyter_code>data = X_train[['RM']]
data['RM2'] = data['RM'] ** 2
lr_room_room2 = LinearRegression()
lr_room_room2.fit(data, y_train)
predictions = lr_room_room2.predict(data)
scatter(X_train['RM'], y_train)
scatter(X_train['RM'], predictions)
xlabel("Avg. # of Rooms")
ylabel("Median house price")<jupyter_output>/Users/yuvalm/anaconda3/lib/python3.7/site-packages/ipykernel_launcher.py:2: SettingWithCopyWarning:
A value is trying to be set on a copy of a slice from a DataFrame.
Try using .loc[row_indexer,col_indexer] = value instead
See the caveats in the documentation: http://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy
<jupyter_text>It would appear that using the squared number of rooms leads to a model that better captures the shape of the data. Let's quantify this intuition.<jupyter_code>test_data = X_test[['RM']]
test_data['RM2'] = test_data['RM'] ** 2
test_predictions = lr_room_room2.predict(test_data)
test_mse = mean_squared_error(y_test, test_predictions)
test_r2 = r2_score(y_test, test_predictions)
print("MSE: {:.5}".format(test_mse))
print("R2: {:.5}".format(test_r2))<jupyter_output>MSE: 34.363
R2: 0.57667
<jupyter_text>Interesting - it looks as if the *squared* average number of rooms in the house can significantly increase $R^2$ and lower MSE. This is good news for the business - our predictive model makes smaller mistakes. But now the question arises - can we understand the influence of both the regular and squared values on the target?## Interpreting Linear Regression Coefficients
As it turns out, this is definitely possible. In fact, the *coefficients* of the $x$'s (what we called $\beta_1, \beta_2$, etc.) are exactly what we need. Each coefficient in a linear regression tells us by how much the target changes when we increase the relevant variable by 1 unit and *keep all other variables constant*. Let's see this in practice.<jupyter_code># Pull out the coefficients for x and x^2
lr_room_room2.coef_<jupyter_output><empty_output><jupyter_text>These numbers tell us that when we increase $x^2$ by 1, the target increases by a positive amount. It also tells us that when we increase $x$ by 1, the target increases by a negative amount (i.e., decreases). Say what??? How can increasing the number of average rooms in the house *reduce* the house price???
The reason is that $x$ and $x^2$ are *correlated*. In other words, increasing or decreasing one will increase or decrease the other one. In linear regression, this means that they explain the same thing. Normally, this is a problem. Here, because our two variables are powers of each other, we can continue safely. Because $x^2$ is bigger than $x$, we see that it does most of the *work* and is positive as we expect. The regular $x$ makes sure we don't increase the target too much.
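This correlation can be checked directly (a quick sketch using the `data` frame with the `RM` and `RM2` columns built above):
```python
np.corrcoef(data['RM'], data['RM2'])[0, 1]   # close to 1: the two columns move together
```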
We'll now try to give our model some values and see these increases/decreases in practice. Since we'll change $x$ without changing $x^2$ (or vice-versa) this is called a *counter-factual*, as we don't have real data for this case (nor will we ever).<jupyter_code>one_one = lr_room_room2.predict([[1,1]])[0]
two_one = lr_room_room2.predict([[2,1]])[0]
one_two = lr_room_room2.predict([[1,2]])[0]
print("model(2,1) - model(1,1) = {:.5}, coef(x) = {:.5}".format(
(two_one - one_one), lr_room_room2.coef_[0]))
print("model(1,2) - model(1,1) = {:.5}, coef(x^2) = {:.5}".format(
(one_two - one_one), lr_room_room2.coef_[1]))<jupyter_output>model(2,1) - model(1,1) = -22.812, coef(x) = -22.812
model(1,2) - model(1,1) = 2.4623, coef(x^2) = 2.4623
|
no_license
|
/01 Linear Regression.ipynb
|
yuvmaz/ds_day_in_life
| 15 |
<jupyter_start><jupyter_text># **Question 1**<jupyter_code>A = "Python language"
A.index("h")
A = "Python language"
A.split()
A = "Python language"
A.find('P')
A = "Python language"
A.count('P')
A = "Python language"
A.upper()<jupyter_output><empty_output><jupyter_text># **Question 2**<jupyter_code>Example = ['Python', 'Github','Lets upgrade','Day 2']
# using append as function.
Example.append('Assigment')
print(Example)
# Using pop as function.
Example.pop(1)
print(Example)
# Using extend as function.
Example.extend('Hello')
print(Example)
# Using reverse as function.
Example.reverse()
print(Example)
# Using clear as function.
Example.clear()
print(Example)<jupyter_output>['Python', 'Github', 'Lets upgrade', 'Day 2', 'Assigment']
['Python', 'Lets upgrade', 'Day 2', 'Assigment']
['Python', 'Lets upgrade', 'Day 2', 'Assigment', 'H', 'e', 'l', 'l', 'o']
['o', 'l', 'l', 'e', 'H', 'Assigment', 'Day 2', 'Lets upgrade', 'Python']
[]
<jupyter_text># **Question 3**<jupyter_code>dictnoary = {'1' : 'Python', '2' : 'Java Script', '3' : 'Java', '4' : 'C#'}
dictnoary.keys()
dictnoary = {'1' : 'Python', '2' : 'Java Script', '3' : 'Java', '4' : 'C#'}
dictnoary.copy()
dictnoary = {'1' : 'Python', '2' : 'Java Script', '3' : 'Java', '4' : 'C#'}
dictnoary.setdefault('3')
dictnoary = {'1' : 'Python', '2' : 'Java Script', '3' : 'Java', '4' : 'C#'}
dictnoary.get('4 ')
dictnoary = {'1' : 'Python', '2' : 'Java Script', '3' : 'Java', '4' : 'C#'}
dictnoary.popitem()<jupyter_output><empty_output>
|
no_license
|
/Day 2 Assigment.ipynb
|
phani-githu/Day-2-Assigment-python-LU
| 3 |
<jupyter_start><jupyter_text># Practical example. Audiobooks## Problem
You are given data from an Audiobook app. Logically, it relates only to the audio versions of books. Each customer in the database has made a purchase at least once, that's why he/she is in the database. We want to create a machine learning algorithm based on our available data that can predict if a customer will buy again from the Audiobook company.
The main idea is that if a customer has a low probability of coming back, there is no reason to spend any money on advertising to him/her. If we can focus our efforts ONLY on customers that are likely to convert again, we can make great savings. Moreover, this model can identify the most important metrics for a customer to come back again. Identifying new customers creates value and growth opportunities.
You have a .csv summarizing the data. There are several variables: Customer ID, Book length in mins_avg (average of all purchases), Book length in minutes_sum (sum of all purchases), Price Paid_avg (average of all purchases), Price paid_sum (sum of all purchases), Review (a Boolean variable), Review (out of 10), Total minutes listened, Completion (from 0 to 1), Support requests (number), and Last visited minus purchase date (in days).
So these are the inputs (excluding customer ID, as it is completely arbitrary. It's more like a name, than a number).
The targets are a Boolean variable (so 0, or 1). We are taking a period of 2 years in our inputs, and the next 6 months as targets. So, in fact, we are predicting if: based on the last 2 years of activity and engagement, a customer will convert in the next 6 months. 6 months sounds like a reasonable time. If they don't convert after 6 months, chances are they've gone to a competitor or didn't like the Audiobook way of digesting information.
The task is simple: create a machine learning algorithm, which is able to predict if a customer will buy again.
This is a classification problem with two classes: won't buy and will buy, represented by 0s and 1s.
Good luck!## Create the machine learning algorithm
### Import the relevant libraries<jupyter_code># we must import the libraries once again since we haven't imported them in this file
import numpy as np
import tensorflow as tf<jupyter_output><empty_output><jupyter_text>### Data<jupyter_code># let's create a temporary variable npz, where we will store each of the three Audiobooks datasets
npz = np.load('Audiobooks_data_train.npz')
# we extract the inputs using the keyword under which we saved them
# to ensure that they are all floats, let's also take care of that
train_inputs = npz['inputs'].astype(np.float)
# targets must be int because of sparse_categorical_crossentropy (we want to be able to smoothly one-hot encode them)
train_targets = npz['targets'].astype(np.int)
# we load the validation data in the temporary variable
npz = np.load('Audiobooks_data_validation.npz')
# we can load the inputs and the targets in the same line
validation_inputs, validation_targets = npz['inputs'].astype(np.float), npz['targets'].astype(np.int)
# we load the test data in the temporary variable
npz = np.load('Audiobooks_data_test.npz')
# we create 2 variables that will contain the test inputs and the test targets
test_inputs, test_targets = npz['inputs'].astype(np.float), npz['targets'].astype(np.int)<jupyter_output><empty_output>
|
no_license
|
/ML/Deep Learning_A business case/TensorFlow_Audiobooks_Machine_Learning_Part1_with_comments.ipynb
|
Faruque169/Python-for-Data-Science
| 2 |
<jupyter_start><jupyter_text>## Template - do not modify, duplicate<jupyter_code>import numpy as np
import matplotlib.pyplot as plt
plt.rcParams['figure.figsize'] = (10.0, 8.0)
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'Spectral'
def sigmoid(z) :
"""
Returns the sigmoid function of the given input
"""
return 1 /(1 + np.exp(-z))
def sigmoid_prime(z) :
"""
Returns the derivative of the sigmoid function of the given input
"""
return sigmoid(z)*(1-sigmoid(z))
def fp(x, eps):
"""
Returns the value the SReLU takes in the interval -eps to eps
The polynomial satifies the following conditions:
fp(eps) = eps
fp'(eps) = 1
fp(-eps) = 0
fp'(-eps) = 0
"""
a_0 = eps/4
a_1 = 0.5
a_2 = 1/(4*eps)
a_3 = 0
return a_0 + a_1*x + a_2*x**2 + a_3*x**3
def SReLU(x, eps):
"""
Returns a smoothed ReLU function with parameter epsilon
"""
return fp(x, eps)*(np.abs(x) < eps) + x*(x >= eps)
def fp_prime(x, eps):
"""
Returns derivative of the fp(x) with respect to x
"""
a_0 = eps/4
a_1 = 0.5
a_2 = 1/(4*eps)
a_3 = 0
return a_1 + 2*a_2*x + 3*a_3*x**2
def SReLU_prime(x,eps):
"""
Returns the derivative of the smoothed ReLU function
"""
return fp_prime(x, eps)*(np.abs(x) < eps) + 1*(x >= eps)
def relu(z):
"""
Returns the rectified linear unit applied to the given input
"""
return np.maximum(0,z)
def relu_prime(z) :
"""
Returns the derivative of rectified linear unit applied to the given input
"""
return 1*(z>=0)
def SReLU_prime_wrt_eps(x,eps):
"""
Returns the derivative of the SReLU function
with respect to epsilon given the input and parameter eps
"""
return (1/4-(x**2)/(4*eps**2))*(np.abs(x) < eps)
N = 200 # no. points per class
X = np.zeros((N*2,2))
y = np.zeros((N*2,1), dtype='uint8')
def twospirals(n_points, noise=0.75):
"""
Returns the two spirals dataset.
"""
np.random.seed(1) # Random seed for data
n = np.sqrt(np.random.rand(n_points,1)) * 540 * (2*np.pi)/360
d1x = (- np.cos(n)*n + np.random.rand(n_points,1) * noise)
d1y = (np.sin(n)*n + np.random.rand(n_points,1) * noise)
np.random.seed(None) # Random seed for data
return (np.vstack((np.hstack((d1x,d1y)),np.hstack((-d1x,-d1y)))),
np.where(np.hstack((np.zeros(n_points),np.ones(n_points))) > 0.5, 1, 0))
X, y[:,0] = twospirals(N) # or choose number
plt.title('540-spiral dataset')
plt.plot(X[y[:,0]==0,0], X[y[:,0]==0,1], color='tab:red', marker='.', linestyle='None',label='class 1')
plt.plot(X[y[:,0]==1,0], X[y[:,0]==1,1], color='tab:blue', marker='.', linestyle='None', label='class 2')
plt.legend()
plt.show()
def plotboundary_accuracy(X, y, W1, b1, W2, b2,eps):
"""
Plots the decision boundary of the data
"""
x_min, x_max = X[:, 0].min() - 1, X[:, 0].max() + 1
y_min, y_max = X[:, 1].min() - 1, X[:, 1].max() + 1
xx, yy = np.meshgrid(np.arange(x_min, x_max, 0.02),np.arange(y_min, y_max, 0.02))
Z_axis = np.dot(SReLU(np.dot(np.c_[xx.ravel(), yy.ravel()], W1) + b1,eps), W2) + b2
Z_axis = (Z_axis > 0.5)
Z_axis = Z_axis.reshape(xx.shape)
plt.contourf(xx, yy, Z_axis, cmap=plt.cm.Spectral, alpha=0.8)
plt.scatter(X[:, 0], X[:, 1], c=y[:,0], s=40, cmap=plt.cm.Spectral)
plt.xlim(xx.min(), xx.max())
plt.ylim(yy.min(), yy.max())
def initialize_parameters(n_x,n_y,n_h):
"""
Initialization of the weights and biases for the neural network
n_x: number of inputs nodes
n_y: number of output nodes
n_h: number of hidden nodes
"""
np.random.seed(3) # Random seed for weights
W1 = 1.0 * np.random.randn(n_x,n_h) # [n_x, n_h ]
b1 = np.zeros((1,n_h)) # [1, n_h ]
W2 = 1.0 * np.random.randn(n_h,n_y) # [n_h, n_y ]
b2 = np.zeros((1,n_y)) # [1, n_y ]
np.random.seed(None) # Random seed for weighs
return W1, b1, W2, b2 # Return initalise weights and biases
def forward_prop(X, W1, b1, W2, b2,eps):
"""
Return the parameters after one foward pass
N: Number of points per class
X: dimension [2N, n_x]
n_x: number of inputs nodes
n_y: number of output nodes
n_h: number of hidden nodes
"""
Z1 = np.dot(X, W1) + b1 # [2N, n_h]
A1 = SReLU(Z1,eps) # [2N, n_h]
Z2 = np.dot(A1, W2) + b2 # [2N, n_y]
A2 = sigmoid(Z2) # [2N, n_y]
return Z1, A1, Z2, A2
def cost(Y, A2):
"""
Returns the L2 error between the predicted values
by the network A2 and the actual values of the data Y
"""
m = Y.shape[0]
return np.sum(np.square(A2-Y))/m
def back_prop(X,y, Z1, A1, Z2, A2, reg, eps):
"""
Return the gradient of parameters with respect to the loss after one foward pass
reg: regularisation parameter
eps: smoothed ReLU parameter epsilon
N: Number of points per class
X: dimension [2N, n_x]
n_x: number of inputs nodes
n_y: number of output nodes
n_h: number of hidden nodes
"""
m = X.shape[0] # Number of data points in total, 2N
dZ2 = 2*(A2 - y)/m # Derivative of cross-entropy loss is (A2-y), [m,1]
dZ2 = dZ2*sigmoid_prime(Z2) # Derivative with respect to Z2, [m,1]
dW2 = np.dot(A1.T, dZ2) # Derivative with respect to W2, [n_h, n_y ]
db2 = np.sum(dZ2, axis=0, keepdims=True) # Derivative with respect to b2, [1, n_y ]
dA1 = np.dot(dZ2, W2.T) # Derivative with respect to A1, [m, n_h]
dZ1 = np.multiply(dA1, SReLU_prime(Z1,eps)) # Derivative with respect to Z1, [m, n_h]
depsilon = np.sum(np.multiply(dA1,SReLU_prime_wrt_eps(Z1, eps)))
dW1 = np.dot(X.T, dZ1) # Derivative with respect to W1, [n_x, n_h]
db1 = np.sum(dZ1, axis=0, keepdims=True) # Derivative with respect to b1, [1, n_h]
dW2 = dW2 + reg * W2 # regularisation of weight W2
dW1 = dW1 + reg * W1 # regularisation of weight W1
return dW1, db1, dW2, db2, depsilon
def update_parameters(W1, b1, W2, b2, epsilon_vector, dW1, db1, dW2, db2, depsilon_vector, learning_rate):
"""
Returns the updated values of the parameters
after the given interation
"""
W1 = W1 - dW1 * learning_rate
b1 = b1 - db1 * learning_rate
W2 = W2 - dW2 * learning_rate
b2 = b2 - db2 * learning_rate
epsilon_vector = epsilon_vector - depsilon_vector * 0.001
return W1, b1, W2, b2, epsilon_vector<jupyter_output><empty_output><jupyter_text>## Gradient Descent<jupyter_code>epochs = 20100 # Number of iterations
n_h = 11 # Number of hidden neurons
n_x = 2 # Number of input neurons
n_y = 1 # Number of ouput neurons
reg = 1e-3 # Regularization parameter
learning_rate = 0.2 # Learning rate
# Initialization of parameters and epsilon
W1, b1, W2, b2 = initialize_parameters(n_x,n_y,n_h)
#epsilon_vector = 0.7
epsilon_vector = np.random.rand(1,n_h)
# Arrays to keep track of losses and accuracy at each epoch
losses = np.zeros(epochs)
accuracy = np.zeros(epochs+1)
for i in range(epochs):
Z1, A1, Z2, A2 = forward_prop(X, W1, b1, W2, b2,epsilon_vector) # Forward propagation
reg_loss = 0.5*reg*np.sum(W1*W1) + 0.5*reg*np.sum(W2*W2) # Regularization loss
losses[i] = cost(y, A2) + reg_loss # Total loss
accuracy[i]= 100*np.mean((A2 >0.5) == y) # Accuracy for particular epochs
dW1, db1, dW2, db2, depsilon_vector = back_prop(X,y, Z1, A1, Z2, A2, reg, epsilon_vector) # Backpropation
# Update parameters
W1, b1, W2, b2, epsilon_vector= update_parameters(W1, b1, W2, b2, epsilon_vector,
dW1, db1, dW2, db2, depsilon_vector, learning_rate)
# Calculate final accuracy and loss
predicted_class = sigmoid(np.dot(SReLU(np.dot(X, W1) + b1,epsilon_vector), W2) + b2)
final_accuracy = 100*np.mean((predicted_class >0.5) == y)
final_loss = cost(y, predicted_class) + reg_loss
# Plotting boundaries, accuracy and loss
print('Gradient Descent')
print('Epochs: %.0f' % epochs)
print('Number of neurons: %.0f' % n_h)
print('Learning rate: %.2f' % learning_rate)
print('Final Accuracy: %.2f %%' % final_accuracy)
print('Final loss: %.2f' % final_loss)
fig, (ax1,ax2) = plt.subplots(1, 2,figsize=(15, 6))
plt.sca(ax1)
plotboundary_accuracy(X, y, W1, b1, W2, b2,epsilon_vector)
ax2.set_xlabel('Epochs')
ax2.set_ylabel('Training Loss', color='tab:orange')
ax2.plot(losses, color='tab:orange')
ax2.tick_params(axis='y', labelcolor='tab:orange')
ax3 = ax2.twinx() # instantiate a second axes that shares the same x-axis
ax3.set_ylabel('Accuracy', color='tab:green') # we already handled the x-label with ax1
ax3.plot(accuracy, color='tab:green')
ax3.tick_params(axis='y', labelcolor='tab:green')
fig.tight_layout() # otherwise the right y-label is slightly clipped
plt.show()
epsilon_vector<jupyter_output><empty_output><jupyter_text>
## SGD<jupyter_code>epochs = 10000 # Number of iterations
n_h = 11 # Number of hidden neurons
n_x = 2 # Number of input neurons
n_y = 1 # Number of ouput neurons
reg = 1e-3 # Regularization parameter
learning_rate = 0.1 # Learning rate
sample_size = 0.30 # Sample size taken for each iteration
batch_size = int(sample_size*X.shape[0]) # Size of the subsample data taken
# Initialization of parameters and epsilon
W1, b1, W2, b2 = initialize_parameters(n_x,n_y,n_h)
epsilon_vector = 0.1*np.random.rand(1,n_h)
# Arrays to keep track of losses and accuracy at each epoch
losses = np.zeros(epochs)
accuracy = np.zeros(epochs+1)
for i in range(epochs):
# Take a batch_size number of (random) samples
index = np.random.randint(0,X.shape[0], batch_size)
X_i = X[index,:]
y_i = y[index]
Z1, A1, Z2, A2 = forward_prop(X_i, W1, b1, W2, b2, epsilon_vector) # Forward propagation
reg_loss = 0.5*reg*np.sum(W1*W1) + 0.5*reg*np.sum(W2*W2) # Regularization loss
losses[i] = cost(y_i, A2) + reg_loss # Total loss
accuracy[i]= 100*np.mean((A2 >0.5) == y_i) # Accuracy for particular epochs
dW1, db1, dW2, db2, depsilon_vector = back_prop(X_i,y_i, Z1, A1, Z2, A2, reg,epsilon_vector) # Backpropagation
# Update parameters
W1, b1, W2, b2, epsilon_vector = update_parameters(W1, b1, W2, b2, epsilon_vector,
dW1, db1, dW2, db2, depsilon_vector, learning_rate)
#Calculate final accuracy and loss
predicted_class = sigmoid(np.dot(SReLU(np.dot(X, W1) + b1,epsilon_vector), W2) + b2)
final_accuracy = 100*np.mean((predicted_class >0.5) == y)
final_loss = cost(y, predicted_class) + reg_loss
# Plotting boundaries, accuracy and loss
print('Stochastic Gradient Descent')
print('Epochs: %.0f' % epochs)
print('Number of neurons: %.0f' % n_h)
print('Learning rate: %.2f' % learning_rate)
print('Sample size: %.0f %%' % (100*sample_size))
print('Final Accuracy: %.2f %%' % final_accuracy)
print('Final loss: %.2f' % final_loss)
fig, (ax1,ax2) = plt.subplots(1, 2,figsize=(15, 6))
plt.sca(ax1)
plotboundary_accuracy(X, y, W1, b1, W2, b2,epsilon_vector)
ax2.set_xlabel('Epochs')
ax2.set_ylabel('Training Loss', color='tab:orange')
ax2.plot(losses, color='tab:orange')
ax2.tick_params(axis='y', labelcolor='tab:orange')
ax3 = ax2.twinx() # instantiate a second axes that shares the same x-axis
ax3.set_ylabel('Accuracy', color='tab:green') # we already handled the x-label with ax1
ax3.plot(accuracy, color='tab:green')
ax3.tick_params(axis='y', labelcolor='tab:green')
fig.tight_layout() # otherwise the right y-label is slightly clipped
plt.show()<jupyter_output>Stochastic Gradient Descent
Epochs: 10000
Number of neurons: 10
Learning rate: 0.10
Sample size: 30 %
Final Accuracy: 83.25 %
Final loss: 0.14
|
no_license
|
/Code for boundaries/SReLU - Adaptive/540-spiral.ipynb
|
UoE-AdSHLPs/Adaptive-SHLP
| 3 |
<jupyter_start><jupyter_text>setup<jupyter_code>if not os.path.isdir(os.path.join(os.getcwd(), 'background')):
os.mkdir("background")
else:
print('background already exists')
if not os.path.isdir(os.path.join(os.getcwd(), 'composite')):
os.mkdir("composite")
else:
print('composite already exists')
cap = cv2.VideoCapture('monkey_TaskB.mov')
if not cap.isOpened():
print('monkey.mov not opened')
sys.exit(1)
length = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
frame_height = cap.get(cv2.CAP_PROP_FRAME_HEIGHT)
frame_width = cap.get(cv2.CAP_PROP_FRAME_WIDTH)
bgctr = length # The total number of background frames
count = 0<jupyter_output>background already exists
<jupyter_text>extract frames from both videos<jupyter_code># extract frames from the video and store them in the background folder
cap = cv2.VideoCapture(path_to_background)
#create_dir_if_not_exists('./background/') # Or you can create it manully
if not cap.isOpened():
print('{} not opened'.format(path_to_background))
sys.exit(1)
time_length = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
frame_height = cap.get(cv2.CAP_PROP_FRAME_HEIGHT)
frame_width = cap.get(cv2.CAP_PROP_FRAME_WIDTH)
frame_counter = 0 # FRAME_COUNTER
while(1):
return_flag, frame = cap.read()
if not return_flag:
print('Video Reach End')
break
# Main Content - Start
cv2.imshow('VideoWindowTitle-water', frame)
cv2.imwrite('./background/' + 'frame%d.tif' % frame_counter, frame)
frame_counter += 1
# Main Content - End
if cv2.waitKey(30) & 0xff == ord('q'):
break
cap.release()
cv2.destroyAllWindows()
# extract frames from the monkey video, store in the frames folder
if not os.path.isdir(os.path.join(os.getcwd(), 'frames')):
os.mkdir("frames")
else:
print('frames already exists')
if not os.path.isdir(os.path.join(os.getcwd(), 'composite')):
os.mkdir("composite")
else:
print('composite already exists')
framenumber = 0
framectr = 0
omovie = cv2.VideoCapture('monkey_taskB.mov')
frame_height = omovie.get(cv2.CAP_PROP_FRAME_HEIGHT)
frame_width = omovie.get(cv2.CAP_PROP_FRAME_WIDTH)
# Extract the frames from original video
while(1):
ret, frame = omovie.read()
if not ret:
break
print('Extracting: %d' % framenumber)
clear_output(wait=True)
cv2.imwrite('frames/%d.tif' % framenumber, frame)
framenumber += 1
omovie.release()<jupyter_output>Extracting: 948
<jupyter_text>composite the monkey video onto the new background<jupyter_code>BLUE = 80
bgctr = length
count = 0
cap = cv2.VideoCapture('monkey_taskB.mov')
if not cap.isOpened():
print('monkey_taskB.movi not opened')
sys.exit(1)
length = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
frame_height = cap.get(cv2.CAP_PROP_FRAME_HEIGHT)
frame_width = cap.get(cv2.CAP_PROP_FRAME_WIDTH)
while(1):
ret, monkeyframe = cap.read()
#resize the monkey image to background size
monkeyframe = cv2.resize(monkeyframe, (320, 240))
if not ret:
break
bg = cv2.imread('background/frame%d.tif' % (count%bgctr))
if bg is None:
print('ooops! no bg found BG/frame%d.tif' % (count%bgctr))
break
# overwrite the background
for x in range(monkeyframe.shape[0]):
for y in range(monkeyframe.shape[1]):
################ TODO #################
# replace the corresponding pixels in 'water.mov' with
# non-blue pixels in 'monkey.mov'
#check if the pixel's blue channel is 'blue'
#openCV use B,G,R
#print(monkeyframe[x,y])
if monkeyframe[x,y][0] <= BLUE :
#if monkeyframe[x,y][1] <= 200 :
# if monkeyframe[x,y][2] <= 200 :
#if monkeyframe[x,y][0] <= BLUE:
#the pixel is not blue, continue the overwriting
bg[x, y] = monkeyframe[x, y]
elif monkeyframe[x,y][0] >= 0 :
if monkeyframe[x,y][1] >= 120 :
if monkeyframe[x,y][2] >= 120 :
bg[x, y] = monkeyframe[x, y]
#########################################
cv2.imwrite('composite/composite%d.tif' % count, bg)
cv2.putText(img=bg, text='Compositing: %d%%' % int(100*count/length), org=(int(0), int(bg.shape[1] / 2)),
fontFace=cv2.FONT_HERSHEY_DUPLEX, fontScale=0.7,
color=(0, 255, 0))
cv2.imshow('Monkey in water', bg)
count += 1
if cv2.waitKey(30) & 0xff == ord('q'):
break
if count >= framenumber:
break
cap.release()
cv2.destroyAllWindows()
#convert the 'composite' folder into a new video
frame_load_path = './composite/'
path_to_output_video = './new_video.mov'
out = cv2.VideoWriter(path_to_output_video, cv2.VideoWriter_fourcc('M', 'J', 'P', 'G'), 10, (320, 240))
frame_counter = 0
while(1):
img = cv2.imread(frame_load_path + 'composite%d.tif' % frame_counter)
if img is None:
print('No more frames to be loaded')
break;
out.write(img)
frame_counter += 1
out.release()
cv2.destroyAllWindows()
<jupyter_output>No more frames to be loaded
|
no_license
|
/YR3_2/COMP3419/assignment/submission/task b/taskB.ipynb
|
Noire-Black-Heart/study
| 3 |
<jupyter_start><jupyter_text># Integromat - Trigger workflow
**Tags:** #integromat## Input### Import library<jupyter_code>from naas_drivers import integromat<jupyter_output><empty_output><jupyter_text>### Variables<jupyter_code>URL = "https://hook.integromat.com/7edtlwmn8foer0r9i9ainjvsz3vxmwos"
DATA = { "first_name":"Bryan", "last_name":"Helmig", "age": 27 }<jupyter_output><empty_output><jupyter_text>## Model### Connect to integromat<jupyter_code>result = integromat.connect(URL)<jupyter_output><empty_output><jupyter_text>## Output### Send data<jupyter_code>result = integromat.send(DATA)<jupyter_output><empty_output>
|
permissive
|
/Integromat/Integromat_Trigger_workflow.ipynb
|
JMolinaHN/awesome-notebooks
| 4 |
<jupyter_start><jupyter_text>- Next we define four methods to scrape different tags:
1. method_title: get the place name
2. method_brief: get the introduction
3. address_phone_block: get the address, district and phone number
4. opentime_classify: get the business / opening hours
The `element` parameter of each method is the page content of the given link.<jupyter_code># 1. method_title: get the place name
def method_title(element):
soup = bs(element)
title = soup.select('#FFTPTitleId')[0].text
return title
# 2. method_brief: get the introduction
def method_brief(element):
soup = bs(element)
brief = ''
for p in soup.select('#FFTPBriefId')[0].select('p'):
brief += p.text.encode('utf-8')
return brief
# 4. opentime_classify: get the business / opening hours
def opentime_classify(element):
return "".join(element.split())
# 3. address_phone_block: get the address, district and phone number
def address(element):
return ''.join(td.text.split()[0:3])
def block(element):
return ''.join(td.text.split()[1])
def phone(element):
return ''.join(td.text.split()[-1])
from bs4 import BeautifulSoup as bs
path = 'travel/'
file_name = '2011051800000240'
f = open(path + file_name + '.txt', 'r')
element = f.read()
soup = bs(element)
content = soup.select('.ttour_infocont')[0]
dic = {'address':'','block':'','phone':''}
for tr in content.select('tr'):
if content.select('th')[0].text.strip().encode('utf-8') == '地址或電話':
print content.select('td')
print address(content.select('td'))
dic[address] = address(content.select('td'))
#for key in dic:
#print key, dic[key]
from bs4 import BeautifulSoup as bs
path = 'travel/'
file_name = '2011051800000240'
f = open(path + file_name + '.txt', 'r')
element = f.read()
dic = {'address':address(element),'block':block(element),'phone':phone(element)}
print dic
soup = bs(element)
print type(soup)
brief = ''
for p in soup.select('#FFTPBriefId')[0].select('p'):
brief += p.text.encode('utf-8')
print type(brief)
print brief
print type(brief)
from bs4 import BeautifulSoup as bs
path = 'travel/'
file_name = '2011051800000240'
f = open(path + file_name + '.txt', 'r')
element = f.read()
#print type(response_text)
#print response_text
f.close()
#rint element
print method_brief(element)
b = method_brief(element)
print b
#print type(method_brief(element))
#brief = ''
#for p in soup.select('#FFTPBriefId')[0].select('p'):
# brief += p.text
#print brief
import requests
res = requests.get('http://www.travel.taipei/frontsite/tw/sceneryChineseListAction.do?method=doFindByPk&menuId=20104&serNo=2011051800000693')
from bs4 import BeautifulSoup as bs
soup = bs(res.text)
dic = {'分類':'','開放時間':'','地址':'','地區':'','電話':''}
print soup.select('#FFTPTitleId')[0].text
brief = ''
for p in soup.select('#FFTPBriefId')[0].select('p'):
brief += p.text.encode('utf-8')
print brief
info = soup.select('.ttour_infocont')[0]
for tr in info.select("tr"):
th = tr.select('th')[0]
td = tr.select('td')[0]
if th.text.strip().encode('utf-8') in dic:
dic[th.text.strip().encode('utf-8')] = td.text.strip().encode('utf-8')
if th.text.strip().encode('utf-8') == '地址或電話':
print "地址", ''.join(td.text.split()[0:3])
print '地區', ''.join(td.text.split()[1])
print '電話', ''.join(td.text.split()[-1])
for key in dic:
print key, dic[key]
from bs4 import BeautifulSoup
def get_response_element (file_name):
path = 'travel/'
f = open(path + file_name + '.txt', 'r')
response_text = f.read()
f.close()
soup = BeautifulSoup(response_text)
return soup<jupyter_output><empty_output>
|
no_license
|
/.ipynb_checkpoints/Taipei_place-checkpoint.ipynb
|
boansu/project
| 1 |
<jupyter_start><jupyter_text>
**If on Google Colab, first do this:**
Click on `Runtime` > `Change runtime type` and select `GPU` as `Hardware accelerator`.# CSBDeep
Technical infrastructure that powers CSBDeep (and StarDist) under the hood.#### Basic setup for this notebook<jupyter_code>try:
import google.colab
COLAB = True
except ModuleNotFoundError:
COLAB = False
if COLAB:
import sys
!{sys.executable} -m pip install csbdeep stardist
import numpy as np
import matplotlib
matplotlib.rcParams["image.interpolation"] = None
import matplotlib.pyplot as plt
%matplotlib inline
%config InlineBackend.figure_format = 'retina'<jupyter_output><empty_output><jupyter_text>## Keras with TensorFlow 1 & 2<jupyter_code>from csbdeep.utils.tf import keras_import
keras = keras_import()
# equivalent to either:
# - "import keras" # if using TensorFlow 1.x and separate Keras package
# - "from tensorflow import keras" # if using TensorFlow 2.x with integrated Keras
# can also do specific imports, e.g.:
Input, Dense = keras_import('layers', 'Input','Dense')
assert Input == keras.layers.Input and Dense == keras.layers.Dense<jupyter_output><empty_output><jupyter_text>## Blocks and Nets<jupyter_code>from csbdeep.internals.nets import common_unet, custom_unet
from csbdeep.internals.blocks import unet_block, resnet_block
model = common_unet(residual=False,n_channel_out=2)((128,128,3))
model.summary()
x = inputs = Input((128,128,3))
x = resnet_block(64)(x)
x = resnet_block(128, pool=(2,2))(x)
x = keras.layers.GlobalAveragePooling2D()(x)
x = Dense(32, activation='relu')(x)
x = outputs = Dense(1, activation='sigmoid')(x)
model = keras.Model(inputs, outputs)
model.summary()<jupyter_output><empty_output><jupyter_text>## BaseConfig & BaseModel<jupyter_code>from csbdeep.models import BaseModel, BaseConfig
BaseConfig()
class MyConfig(BaseConfig):
def __init__(self, my_parameter, **kwargs):
super().__init__(**kwargs)
self.my_parameter = my_parameter
config = MyConfig(my_parameter=42)
config
class MyModel(BaseModel):
@property
def _config_class(self):
return MyConfig
def _build(self):
pass
# demo: delete model folder if it already exists
%rm -rf models/my_model
# create model folder and persist config
model = MyModel(config, 'my_model', basedir='models')
model
%ls models/my_model
%cat models/my_model/config.json
# load model from folder (config and possibly trained weights)
model = MyModel(None, 'my_model', basedir='models')
model<jupyter_output><empty_output><jupyter_text>BaseModel has more to offer, some is shown below...<jupyter_code>[a for a in dir(model) if not a.startswith('__')]<jupyter_output><empty_output><jupyter_text>## Registry for pretrained models<jupyter_code>try:
from stardist.models import StarDist2D
StarDist2D.from_pretrained()
except ModuleNotFoundError:
pass
try:
StarDist2D.from_pretrained('Versatile (fluorescent nuclei)')
except ModuleNotFoundError:
pass
MyModel.from_pretrained()
from csbdeep.models import register_model, register_aliases
register_model(MyModel, 'my_model', 'http://example.com/my_model.zip', '<hash>')
register_aliases(MyModel, 'my_model', 'My minimal model', 'Another name for my model')
MyModel.from_pretrained()<jupyter_output><empty_output><jupyter_text>## Example: U-Net model for multi-class semantic segmentationNote that the focus is on demonstrating certain concepts rather than being a good/complete segmentation approach.### Helper<jupyter_code>try:
import skimage
except ModuleNotFoundError:
raise RuntimeError("This demo needs scikit-image to run.")
from glob import glob
from tqdm import tqdm
from tifffile import imread
from pathlib import Path
from skimage.segmentation import find_boundaries
def crop(u,shape=(256,256)):
"""Crop central region of given shape"""
return u[tuple(slice((s-m)//2,(s-m)//2+m) for s,m in zip(u.shape,shape))]
def to_3class_label(lbl, onehot=True):
"""Convert instance labeling to background/inner/outer mask"""
b = find_boundaries(lbl,mode='outer')
res = (lbl>0).astype(np.uint8)
res[b] = 2
if onehot:
res = keras.utils.to_categorical(res,num_classes=3).reshape(lbl.shape+(3,))
return res
def dice_bce_loss(n_labels):
"""Combined crossentropy and dice loss"""
K = keras.backend
def _sum(a):
return K.sum(a, axis=(1,2), keepdims=True)
def dice_coef(y_true, y_pred):
return (2 * _sum(y_true * y_pred) + K.epsilon()) / (_sum(y_true) + _sum(y_pred) + K.epsilon())
def _loss(y_true, y_pred):
dice_loss = 0
for i in range(n_labels):
dice_loss += 1-dice_coef(y_true[...,i], y_pred[...,i])
return dice_loss/n_labels + K.categorical_crossentropy(y_true, y_pred)
return _loss
def datagen(X,Y,batch_size,seed=0):
"""Simple data augmentation"""
g = keras.preprocessing.image.ImageDataGenerator(horizontal_flip=True, vertical_flip=True,
rotation_range=10, shear_range=10, fill_mode='reflect')
assert seed is not None
gX = g.flow(X, batch_size=batch_size, seed=seed)
gY = g.flow(Y, batch_size=batch_size, seed=seed)
while True:
yield gX.next(), gY.next()<jupyter_output><empty_output><jupyter_text>### Data<jupyter_code>from csbdeep.utils import download_and_extract_zip_file, normalize
download_and_extract_zip_file(
url = 'https://github.com/mpicbg-csbd/stardist/releases/download/0.1.0/dsb2018.zip',
targetdir = 'data',
verbose = 1,
)
# load and crop out central patch (for simplicity)
X = [crop(imread(x)) for x in sorted(glob('data/dsb2018/train/images/*.tif'))]
Y_label = [crop(imread(y)) for y in sorted(glob('data/dsb2018/train/masks/*.tif'))]
# normalize input image and convert label image to 3-class segmentation mask
X = [normalize(x,1,99.8) for x in tqdm(X)]
Y = [to_3class_label(y) for y in tqdm(Y_label)]
# convert to numpy arrays
X, Y, Y_label = np.expand_dims(np.stack(X),-1), np.stack(Y), np.stack(Y_label)
i = 15
fig, (a0,a1,a2) = plt.subplots(1,3,figsize=(15,5))
a0.imshow(X[i,...,0],cmap='gray'); a0.set_title('input image')
a1.imshow(Y_label[i],cmap='tab20'); a1.set_title('label image')
a2.imshow(Y[i]); a2.set_title('segmentation mask')
fig.suptitle("Example")
None;<jupyter_output><empty_output><jupyter_text>### Model<jupyter_code>from csbdeep.data import PadAndCropResizer
from csbdeep.utils import axes_check_and_normalize
from csbdeep.utils.tf import IS_TF_1, CARETensorBoardImage
if IS_TF_1:
raise NotImplementedError("For sake of simplicity, this example only works with TensorFlow 2.x")
class SegConfig(BaseConfig):
def __init__(self, unet_depth, **kwargs):
super().__init__(**kwargs)
self.unet_depth = unet_depth
class SegModel(BaseModel):
@property
def _config_class(self):
return SegConfig
def _build(self):
return common_unet(n_depth=self.config.unet_depth,
n_first=32, residual=False,
n_channel_out=self.config.n_channel_out,
last_activation='softmax')((None,None,self.config.n_channel_in))
def _prepare_for_training(self, validation_data, lr):
assert self.config.n_channel_out > 1
self.keras_model.compile(optimizer=keras.optimizers.Adam(lr),
loss=dice_bce_loss(self.config.n_channel_out),
metrics=['categorical_crossentropy','accuracy'])
self.callbacks = self._checkpoint_callbacks()
self.callbacks.append(keras.callbacks.TensorBoard(log_dir=str(self.logdir/'logs'),
write_graph=False, profile_batch=0))
self.callbacks.append(CARETensorBoardImage(model=self.keras_model, data=validation_data,
log_dir=str(self.logdir/'logs'/'images'),
n_images=3, prob_out=False))
self._model_prepared = True
def train(self, X,Y, validation_data, lr, batch_size, epochs, steps_per_epoch):
if not self._model_prepared:
self._prepare_for_training(validation_data, lr)
training_data = datagen(X,Y,batch_size)
history = self.keras_model.fit(training_data, validation_data=validation_data,
epochs=epochs, steps_per_epoch=steps_per_epoch,
callbacks=self.callbacks, verbose=1)
self._training_finished()
return history
def predict(self, img, axes=None, normalizer=None, resizer=PadAndCropResizer()):
normalizer, resizer = self._check_normalizer_resizer(normalizer, resizer)
axes_net = self.config.axes
if axes is None:
axes = axes_net
axes = axes_check_and_normalize(axes, img.ndim)
axes_net_div_by = tuple((2**self.config.unet_depth if a in 'XYZ' else 1) for a in axes_net)
x = self._make_permute_axes(axes, axes_net)(img)
x = normalizer(x, axes_net)
x = resizer.before(x, axes_net, axes_net_div_by)
pred = self.keras_model.predict(x[np.newaxis])[0]
pred = resizer.after(pred, axes_net)
return pred
# demo: delete model folder if it already exists
%rm -rf models/seg_model
config = SegConfig(n_channel_in=1, n_channel_out=3, unet_depth=2)
model = SegModel(config, 'seg_model', basedir='models')
model
model.keras_model.summary(line_length=110)<jupyter_output><empty_output><jupyter_text>### Train<jupyter_code>from csbdeep.data import shuffle_inplace
# shuffle data
shuffle_inplace(X, Y, Y_label, seed=0)
# split into 80% training and 20% validation images
n_val = len(X) // 5
def split_train_val(a):
return a[:-n_val], a[-n_val:]
X_train, X_val = split_train_val(X)
Y_train, Y_val = split_train_val(Y)
Y_label_train, Y_label_val = split_train_val(Y_label)
if COLAB:
%reload_ext tensorboard
%tensorboard --logdir=models
# for demonstration purposes: training only for a very short time here
history = model.train(X_train,Y_train, validation_data=(X_val,Y_val),
lr=3e-4, batch_size=4, epochs=10, steps_per_epoch=10)<jupyter_output><empty_output><jupyter_text>Model folder after training:<jupyter_code>%ls models/seg_model
# only works if "tree" is installed
!tree models/seg_model<jupyter_output><empty_output><jupyter_text>Model weights at best validation loss are automatically loaded after training. Or when reloading the model from disk:<jupyter_code>model = SegModel(None, 'seg_model', basedir='models')<jupyter_output><empty_output><jupyter_text>### Predict<jupyter_code># can predict via keras model, but only works for properly-shaped and normalized images
Yhat_val = model.keras_model.predict(X_val, batch_size=8)
Yhat_val.shape
i = 1
img, lbl, mask = X_val[i,:223,:223,0], Y_label_val[i,:223,:223], Y_val[i,:223,:223]
img.shape, lbl.shape, mask.shape
# U-Net models expects input to be divisible by certain sizes, hence fails here.
try:
model.keras_model.predict(img[np.newaxis])
except ValueError as e:
print(e)
mask_pred = model.predict(img, axes='YX')
mask_pred.shape
from skimage.measure import label
# threshold inner (green) and find connected components
lbl_pred = label(mask_pred[...,1] > 0.7)
fig, ((a0,a1,a2),(b0,b1,b2)) = plt.subplots(2,3,figsize=(15,10))
a0.imshow(img,cmap='gray'); a0.set_title('input image')
a1.imshow(lbl,cmap='tab20'); a1.set_title('label image')
a2.imshow(mask); a2.set_title('segmentation mask')
b0.axis('off')
b1.imshow(lbl_pred,cmap='tab20'); b1.set_title('label image (prediction)')
b2.imshow(mask_pred); b2.set_title('segmentation mask (prediction)')
fig.suptitle("Example")
None;<jupyter_output><empty_output><jupyter_text>## Tile iterator to process large images<jupyter_code>from csbdeep.internals.predict import tile_iterator
help(tile_iterator)
img = imread('data/dsb2018/test/images/5f9d29d6388c700f35a3c29fa1b1ce0c1cba6667d05fdb70bd1e89004dcf71ed.tif')
img = normalize(img, 1,99.8)
plt.figure(figsize=(8,8))
plt.imshow(img, clim=(0,1), cmap='gray')
plt.title(f"example image with shape = {img.shape}");
import matplotlib.patches as patches
def process(x):
return model.predict(x, axes='YX')
img_processed = process(img)
img_processed_tiled = np.empty_like(img_processed)
###
block_sizes = (8,8)
n_block_overlaps = (3,5)
n_tiles = (3,5)
print(f"block_sizes = {block_sizes}")
print(f"n_block_overlaps = {n_block_overlaps}")
print(f"n_tiles = {n_tiles}")
fig, ax = plt.subplots(*n_tiles, figsize=(15,8))
ax = ax.ravel()
[a.axis('off') for a in ax]
i = 0
for tile,s_src,s_dst in tile_iterator(img, n_tiles, block_sizes, n_block_overlaps, guarantee='size'):
# tile is padded; will always start and end at a multiple of block size
# tile[s_src] removes the padding (shown in magenta)
# the slice s_dst denotes the region where tile[s_src] comes from
# process tile, crop the padded region from the result and put it at its original location
img_processed_tiled[s_dst] = process(tile)[s_src]
ax[i].imshow(tile, clim=(0,1), cmap='gray')
rect = patches.Rectangle( [s.start for s in reversed(s_src)],
*[s.stop-s.start for s in reversed(s_src)],
edgecolor='none',facecolor='m',alpha=0.6)
ax[i].add_patch(rect)
i+=1
plt.tight_layout()
assert np.allclose(img_processed, img_processed_tiled)
None;<jupyter_output><empty_output>
|
permissive
|
/examples/other/technical.ipynb
|
rhoadesScholar/CSBDeep
| 14 |
<jupyter_start><jupyter_text>### 11. Project: Building a Lyricist<jupyter_code>import re
import numpy as np
import tensorflow as tf
import glob
import os
from sklearn.model_selection import train_test_split<jupyter_output><empty_output><jupyter_text>#### 1. Load the data<jupyter_code>txt_file_path = os.getenv('HOME') + '/Aiffel/11_lyricist/data/lyrics/*'
txt_list = glob.glob(txt_file_path)
raw_corpus = []
for txt_file in txt_list:
with open(txt_file, 'r') as f:
raw = f.read().splitlines()
raw_corpus.extend(raw)
print('Data size:', len(raw_corpus))
print('Examples:\n', raw_corpus[:3])<jupyter_output>Data size: 187088
Examples:
['Hey, Vietnam, Vietnam, Vietnam, Vietnam', 'Vietnam, Vietnam, Vietnam Yesterday I got a letter from my friend', 'Fighting in Vietnam']
<jupyter_text>#### 2. Clean the data
- add spaces around punctuation marks
- convert all characters to lowercase
- remove all special characters<jupyter_code>def preprocess_sentence(sentence):
sentence = sentence.lower().strip()
sentence = re.sub(r"([?.!,¿])", r" \1 ", sentence)
sentence = re.sub(r'[" "]+', " ", sentence)
sentence = re.sub(r"[^a-zA-Z?.!,¿]+", " ", sentence)
sentence = sentence.strip()
sentence = '<start> ' + sentence + ' <end>'
return sentence
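# Illustrative check of the rules above: preprocess_sentence("Hello, WORLD!!")
# returns '<start> hello , world ! ! <end>'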
corpus = []
for sentence in raw_corpus:
corpus.append(preprocess_sentence(sentence))
corpus[:10]<jupyter_output><empty_output><jupyter_text>#### 3. Split off an evaluation dataset
- train data(80%), validation data(20%)<jupyter_code>def tokenize(corpus):
tokenizer = tf.keras.preprocessing.text.Tokenizer(
num_words=12000,
filters='',
oov_token='<unk>'
)
tokenizer.fit_on_texts(corpus)
tensor = tokenizer.texts_to_sequences(corpus)
tensor = tf.keras.preprocessing.sequence.pad_sequences(tensor, padding='post', maxlen=15)
print(tensor, tokenizer)
return tensor, tokenizer
tensor, tokenizer = tokenize(corpus)
for idx in tokenizer.index_word:
print(idx, ":", tokenizer.index_word[idx])
if idx >= 10: break
src_input = tensor[:, :-1]
tgt_input = tensor[:, 1:]
print(src_input[0])
print(tgt_input[0])
enc_train, enc_val, dec_train, dec_val = train_test_split(src_input, tgt_input, test_size=0.2, random_state=2020)
print("Source Train:", enc_train.shape)
print("Target Train:", dec_train.shape)
BUFFER_SIZE = len(src_input)
BATCH_SIZE = 256
steps_per_epoch = len(src_input) // BATCH_SIZE
VOCAB_SIZE = tokenizer.num_words + 1
dataset = tf.data.Dataset.from_tensor_slices((enc_train, dec_train)).shuffle(BUFFER_SIZE)
dataset = dataset.batch(BATCH_SIZE, drop_remainder=True)
dataset<jupyter_output><empty_output><jupyter_text>#### 4. Build the model<jupyter_code>class TextGenerator(tf.keras.Model):
def __init__(self, vocab_size, embedding_size, hidden_size):
super(TextGenerator, self).__init__()
self.embedding = tf.keras.layers.Embedding(vocab_size, embedding_size)
self.rnn_1 = tf.keras.layers.LSTM(hidden_size, return_sequences=True)
self.rnn_2 = tf.keras.layers.LSTM(hidden_size, return_sequences=True)
self.linear = tf.keras.layers.Dense(vocab_size)
def call(self, x):
out = self.embedding(x)
out = self.rnn_1(out)
out = self.rnn_2(out)
out = self.linear(out)
return out
embedding_size = 512
hidden_size = 2048
model = TextGenerator(tokenizer.num_words + 1, embedding_size, hidden_size)
optimizer = tf.keras.optimizers.Adam()
loss = tf.keras.losses.SparseCategoricalCrossentropy(
from_logits=True, reduction='none')
model.compile(loss=loss, optimizer=optimizer)
model.fit(dataset, epochs=7, validation_data=(enc_val, dec_val))<jupyter_output>Epoch 1/7
584/584 [==============================] - 261s 447ms/step - loss: 3.3038 - val_loss: 2.9007
Epoch 2/7
584/584 [==============================] - 268s 459ms/step - loss: 2.7133 - val_loss: 2.6210
Epoch 3/7
584/584 [==============================] - 265s 454ms/step - loss: 2.3755 - val_loss: 2.4387
Epoch 4/7
584/584 [==============================] - 251s 429ms/step - loss: 2.0571 - val_loss: 2.3124
Epoch 5/7
584/584 [==============================] - 251s 429ms/step - loss: 1.7623 - val_loss: 2.2239
Epoch 6/7
584/584 [==============================] - 254s 435ms/step - loss: 1.5028 - val_loss: 2.1748
Epoch 7/7
584/584 [==============================] - 261s 447ms/step - loss: 1.2869 - val_loss: 2.1507
<jupyter_text>#### 5. Evaluate the model<jupyter_code>def generate_text(model, tokenizer, init_sentence="<start>", max_len=20):
    # First convert the given init_sentence into a tensor for testing.
test_input = tokenizer.texts_to_sequences([init_sentence])
test_tensor = tf.convert_to_tensor(test_input, dtype=tf.int64)
end_token = tokenizer.word_index["<end>"]
    # To actually generate text, loop and produce one word at a time.
while True:
        predict = model(test_tensor) # feed the tensor of the input sentence into the model
        predict_word = tf.argmax(tf.nn.softmax(predict, axis=-1), axis=-1)[:, -1] # the model's predicted last word becomes the newly generated word
        # Append the newly predicted word to the end of the input sentence.
test_tensor = tf.concat([test_tensor,
tf.expand_dims(predict_word, axis=0)], axis=-1)
        # Stop if the model predicted <END> or max_len is reached; otherwise loop again and predict the next word.
if predict_word.numpy()[0] == end_token: break
if test_tensor.shape[1] >= max_len: break
generated = ""
    # Convert each word index in the generated tensor back into an actual word via the tokenizer.index_word dictionary.
for word_index in test_tensor[0].numpy():
generated += tokenizer.index_word[word_index] + " "
    return generated # this is the natural-language sentence the model ultimately generated
generate_text(model, tokenizer, init_sentence="<start> i love", max_len=20)<jupyter_output><empty_output>
|
no_license
|
/Exploration/11_lyricist/E11_lyricist.ipynb
|
benestump/Aiffel
| 6 |
<jupyter_start><jupyter_text>**Ahmet Zafer SAGLIK - Example QA Model**** Creating and training an ML Question-Answering (QA) model which, given a query question, predicts the best answer from the given context. The dataset consists of N source texts and M questions per source. For each question an answer from the text is given, as well as a list of alternative answers within the text.**
**Steps:**
1. INSTALL THE NECESSARY LIBRARIES
2. IMPORT THE NECESSARY LIBRARIES
3. CREATE DICTIONARY FOR NER DETECTION
4. TRAIN MODELS (BERT, NER-D)
5. EVALUATION AND TESTING
6. RESULTS
Just pass the test data to the read function together with a sample count.
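The intended end-to-end flow (a sketch of how the functions defined further down are chained; the file name is just a placeholder):
```python
df = read_json("dev-v1.1.json", sample_num=5)   # placeholder file name
df = data_process(df)        # split contexts into sentences, locate the target sentence
df = bert_sentence_sim(df)   # pick the sentence most similar to the question (BERT)
df = ner_detect(df)          # extract the final answer span via NER tags
```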
Careful: BERT is slow, so don't use large sample counts.# **INSTALL THE NECESSARY LIBRARIES**<jupyter_code>!pip install ner-d
!pip install sentence-transformers<jupyter_output>Requirement already satisfied: sentence-transformers in /usr/local/lib/python3.7/dist-packages (1.1.0)
Requirement already satisfied: sentencepiece in /usr/local/lib/python3.7/dist-packages (from sentence-transformers) (0.1.95)
Requirement already satisfied: torch>=1.6.0 in /usr/local/lib/python3.7/dist-packages (from sentence-transformers) (1.8.1+cu101)
Requirement already satisfied: scipy in /usr/local/lib/python3.7/dist-packages (from sentence-transformers) (1.4.1)
Requirement already satisfied: transformers<5.0.0,>=3.1.0 in /usr/local/lib/python3.7/dist-packages (from sentence-transformers) (4.5.1)
Requirement already satisfied: nltk in /usr/local/lib/python3.7/dist-packages (from sentence-transformers) (3.2.5)
Requirement already satisfied: tqdm in /usr/local/lib/python3.7/dist-packages (from sentence-transformers) (4.41.1)
Requirement already satisfied: scikit-learn in /usr/local/lib/python3.7/dist-packages (from sentence-transformers) (0.22.2.post1)
Requirement already satisfied[...]<jupyter_text># **IMPORT THE NECESSARY LIBRARIES**<jupyter_code>import nltk
nltk.download('punkt')
nltk.download('wordnet')
import numpy as np, pandas as pd
import json
import ast
from textblob import TextBlob
import spacy
from nerd import ner
from scipy.spatial.distance import cosine
from nerd import ner
from sentence_transformers import SentenceTransformer
sbert_model = SentenceTransformer('bert-base-nli-mean-tokens')<jupyter_output><empty_output><jupyter_text># **MAKE DICT FOR NER DETECTION**<jupyter_code>questionsArray=["when","where","who","how much","how many","organization","event","language","art","law","what year","place"]
quest_dict = {}
quest_dict["when"] = ["DATE","TIME"]
quest_dict["where"] = ["FAC","GPE","LOC"]
quest_dict["who"]= ["PERSON","NORP","FAC"]
quest_dict["how much"] = ["MONEY","QUANTITY", "ORDINAL","CARDINAL","PERCENT"]
quest_dict["how many"] = ["QUANTITY", "ORDINAL","CARDINAL","PERCENT"]
quest_dict["organization"]=["ORG"]
quest_dict["event"] = ["EVENT"]
quest_dict["language"]=["LANGUAGE"]
quest_dict["art"]=["WORK_OF_ART"]
quest_dict["law"]=["LAW"]
quest_dict["what year"]=["DATE"]
quest_dict["place"]=["FAC","GPE","LOC"]
<jupyter_output><empty_output><jupyter_text># **READ FILE**<jupyter_code>def read_json(file_name,sample_num):
with open(f"{file_name}") as data_file:
data = json.load(data_file)
column_names=["answer_start","context","question","text","id"]
df=pd.DataFrame(columns=column_names)
count=0
for key, value in data.items():
for x in range(len(value["qas"])):
df=df.append({
'answer_start':value["qas"][x]["answers"][0]["answer_start"],
'question': value["qas"][x]["question"],
'context' : value["context"],
'text':value["qas"][x]["answers"][0]["text"],
'id': value["qas"][x]["id"],},
ignore_index=True)
count+=1
if count==sample_num:
break
return df<jupyter_output><empty_output><jupyter_text># **DATA PROCESS**<jupyter_code>def find_target(x):
index = -1
for i in range(len(x["sentences"])):
if x["text"] in x["sentences"][i]: index = i
return index
def data_process(df):
df['sentences'] = df['context'].apply(lambda x: [item.raw for item in TextBlob(x).sentences])
df["target"] = df.apply(find_target, axis = 1)
return df<jupyter_output><empty_output><jupyter_text># **MODEL TRAINING**<jupyter_code>def bert_sentence_sim(df):
#tknzr = TweetTokenizer()
#lemmatizer = WordNetLemmatizer()
count=0
sim_list=[None]*len(df["sentences"])
for i in (df["sentences"]):
#We dont need to Lemmatize or tokenize the data since Bert dont need it. When we do accuracy is decreasing.
#i_str=lemmatizer.lemmatize(''.join(i))
#question_lemma=lemmatizer.lemmatize(df['question'][count])
sentence_embeddings = sbert_model.encode(i)
query_vec = sbert_model.encode(df['question'][count])
sent_cont=0
d = {}
for sent in i:
sim=0
sim = cosine(query_vec, sbert_model.encode([sent]))
d[sent_cont] = sim
sent_cont+=1
sent_index=(min(d, key=d.get))
sim_list[count]=sent_index
count=count+1
df['sim_sent'] = sim_list
return df
def ner_detect(df):
arr_answer_tuple=[None]*len(df["sentences"])
arr_single_answer=[]
for index in range(len(df["sentences"])):
flag=False
answer_sentence=(df['sentences'][index][int(df['sim_sent'][index])])
question=df['question'][index]
doc = ner.name(answer_sentence)
answer_label = [(X.text, X.label_) for X in doc]
for quest_index in range(len(questionsArray)):
if questionsArray[quest_index] in df['question'].values[index].lower():
question_equal_tags=quest_dict[questionsArray[quest_index]]
for simple_tag in (question_equal_tags):
for answer_tuple in (answer_label):
if answer_tuple[1]==simple_tag:
flag=True
if (arr_answer_tuple[index]==None) :
arr_answer_tuple[index]=answer_tuple[0]
else:
arr_answer_tuple[index]=arr_answer_tuple[index] +" "+answer_tuple[0]
if flag==False:
arr_answer_tuple[index]=answer_sentence
else:
continue
df['last_ans'] = arr_answer_tuple
return df
<jupyter_output><empty_output><jupyter_text>## **EVALUATION AND TESTING**<jupyter_code>def true_in_sentence(df):
true=0
limit=0
for i,a in zip(df['target'],df['sim_sent']):
if i==a:
true+=1
limit+=1
if limit==len(df["sentences"]):
break
return true/len(df["sentences"])
def exactly_true(df):
true_ner=0
limit=0
for i,a in zip(df['text'],df['last_ans']):
if i in a:
true_ner+=1
limit+=1
if limit==len(df["sentences"]):
break
return true_ner/len(df["sentences"])<jupyter_output><empty_output><jupyter_text>**RESULTS**<jupyter_code>data = read_json("qa_dataset.json",sample_num=200)
data_ready=data_process(data)
data_include_sim=bert_sentence_sim(data_ready)
data_include_ner=ner_detect(data_include_sim)
result_true=true_in_sentence(data_include_ner)
print('Finds the sentence index of answer with accuracy',result_true)
exactly_true_var=exactly_true(data_include_ner)
print('Finds exact result or nearly exact result accuracy',exactly_true_var)
(data_include_ner.head(20))
<jupyter_output>Finds the sentence index of answer with accuracy 0.535
Finds exact result or nearly exact result accuracy 0.52
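A minimal inspection sketch, using the columns built above (`question`, `sentences`, `sim_sent`, `last_ans`, `text`), to print one example end to end:

```python
# Inspect one end-to-end prediction from the processed dataframe.
row = data_include_ner.iloc[0]
print("Question:         ", row["question"])
print("Selected sentence:", row["sentences"][int(row["sim_sent"])])
print("Predicted answer: ", row["last_ans"])
print("Gold answer:      ", row["text"])
```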
|
no_license
|
/nlp_project_zafer_200_sample (1).ipynb
|
AHMETZAFERSAGLIK/Question-Answer-Model-NLP
| 8 |
<jupyter_start><jupyter_text>Reference value transition<jupyter_code>sfs.PlotReferenceValueTransition(getTmpDf(year=2019), target_fund_id, funds, divergence, similar)<jupyter_output><empty_output><jupyter_text>bollinger bands width
- volatility<jupyter_code>sfs.PlotBollingerBandsWidth(getTmpDf(year=2019), target_fund_id, funds, period=45)
sfs.PlotBollingerBand(getTmpDf(year=2019), target_fund_id, funds, period=30)<jupyter_output><empty_output><jupyter_text>MACD<jupyter_code>sfs.PlotMACD(getTmpDf(year=2019), target_fund_id, funds, short_period=9, long_period=26)<jupyter_output><empty_output><jupyter_text>AROON<jupyter_code>sfs.PlotAroon(getTmpDf(year=2019), target_fund_id, funds, period=25)<jupyter_output><empty_output><jupyter_text>chande momentum oscillator<jupyter_code>sfs.PlotCMO(getTmpDf(year=2019), target_fund_id, funds, period=14)<jupyter_output><empty_output><jupyter_text>detrended price oscillator<jupyter_code>sfs.PlotDPO(getTmpDf(year=2019), target_fund_id, funds)<jupyter_output><empty_output><jupyter_text>double exponential moving average
<jupyter_code>sfs.PlotDEMA(getTmpDf(year=2019), target_fund_id, funds, period=14)<jupyter_output><empty_output><jupyter_text>double_smoothed_stochastic<jupyter_code>sfs.PlotDSS(getTmpDf(year=2019), target_fund_id, funds, period=7)<jupyter_output><empty_output><jupyter_text>hull moving average
<jupyter_code>sfs.PlotHMA(getTmpDf(year=2019), target_fund_id, funds, periods=[20, 25, 50])<jupyter_output><empty_output><jupyter_text>price oscillator<jupyter_code>sfs.PlotPO(getTmpDf(year=2019), target_fund_id, funds, periods=[7, 14])<jupyter_output><empty_output><jupyter_text>relative strength index
- RSI > 69 And Reverse() -> Sell
- RSI Buy<jupyter_code>sfs.PlotRSI(getTmpDf(year=2019), target_fund_id, funds, period=20)<jupyter_output><empty_output><jupyter_text>rate of change
- ROC Sell
- ROC > 0 -> Buy<jupyter_code>sfs.PlotROC(getTmpDf(year=2019), target_fund_id, funds, period=45)<jupyter_output><empty_output><jupyter_text>stochrsi
- stochrsi Sell
- stochrsi > 20 -> Buy<jupyter_code>sfs.PlotStochrsi(getTmpDf(year=2019), target_fund_id, funds, period=30)<jupyter_output><empty_output><jupyter_text>vertical horizontal filter<jupyter_code>sfs.PlotVHF(getTmpDf(year=2019), target_fund_id, funds, period=28)<jupyter_output><empty_output><jupyter_text>volatility
<jupyter_code>sfs.PlotVolatility(getTmpDf(year=2019), target_fund_id, funds, period=20)<jupyter_output><empty_output>
|
no_license
|
/singleFundsSummary/JP90C000A9Q7.ipynb
|
chanmaji10/NisaPicker
| 15 |
<jupyter_start><jupyter_text>### 1: Use pandas, or the BeautifulSoup package, or any other way you are comfortable with to transform the data in the table on the Wikipedia page into the above pandas dataframe.<jupyter_code>import requests
import lxml.html as lh
import bs4 as bs
import urllib.request
import numpy as np
import pandas as pd
page_response = requests.get("https://en.wikipedia.org/wiki/List_of_postal_codes_of_Canada:_M") # renamed from "BeautifulSoup" for clarity; this response is not actually used below
url= "https://en.wikipedia.org/wiki/List_of_postal_codes_of_Canada:_M"
def scrape_table_bs4(cname,cols):
page = urllib.request.urlopen(url).read()
soup = bs.BeautifulSoup(page,'lxml')
table = soup.find("table",class_=cname)
header = [head.findAll(text=True)[0].strip() for head in table.find_all("th")]
data = [[td.findAll(text=True)[0].strip() for td in tr.find_all("td")]
for tr in table.find_all("tr")]
data = [row for row in data if len(row) == cols]
raw_df = pd.DataFrame(data,columns=header)
return raw_df
raw_TorontoPostalCodes = scrape_table_bs4("wikitable",3)
print("# Toronto Postal codes stored in data")
print(raw_TorontoPostalCodes.info(verbose=True))
TorontoPostalCodes=raw_TorontoPostalCodes[~raw_TorontoPostalCodes['Borough'].isin(['Not assigned'])]
TorontoPostalCodes=TorontoPostalCodes.sort_values(by=['Postcode','Borough','Neighbourhood'], ascending=[1,1,1]).reset_index(drop=True)
TorontoPostalCodes.loc[TorontoPostalCodes['Neighbourhood'] == 'Not assigned', ['Neighbourhood']] = TorontoPostalCodes['Borough']
check_unassigned_post_state_sample = TorontoPostalCodes.loc[TorontoPostalCodes['Borough'] == 'Queen\'s Park']
TorontoPostalCodes = TorontoPostalCodes.groupby(['Postcode','Borough'])['Neighbourhood'].apply(', '.join).reset_index()
TorontoPostalCodes
TorontoPostalCodes.shape
TorontoPostalCodes.to_csv('Toronto.TASK_1_df.csv',index=False)<jupyter_output><empty_output><jupyter_text>### 2. Once you are able to create the above dataframe, submit a link to the new Notebook on your Github repository. (2 marks)<jupyter_code>Data_csv = "Toronto.TASK_1_df.csv"
TorontoPostalCodes = pd.read_csv(Data_csv).set_index("Postcode")
TorontoPostalCodes.rename_axis("Postal Code", axis='index', inplace=True)
TorontoPostalCodes.head()
toronto_geocsv = 'https://cocl.us/Geospatial_data'
!wget -q -O 'toronto_m.geospatial_data.csv' toronto_geocsv
geocsv_data = pd.read_csv(toronto_geocsv).set_index("Postal Code")
geocsv_data.head()
toronto_neighborhoods = TorontoPostalCodes.join(geocsv_data)
toronto_neighborhoods.head()
toronto_neighborhoods.to_csv('Toronto.TASK_2_df.csv',index=False)
toronto_neighborhoods.shape<jupyter_output><empty_output><jupyter_text>### 3. Once you are happy with your analysis, submit a link to the new Notebook on your Github repository (3 marks)<jupyter_code>!conda install -c conda-forge geopy --yes
import numpy as np
import pandas as pd
pd.set_option('display.max_columns', None)
pd.set_option('display.max_rows', None)
import json
from geopy.geocoders import Nominatim
GeoLocator = Nominatim
import requests
from pandas.io.json import json_normalize
import matplotlib.cm as cm
import matplotlib.colors as colors
from sklearn.cluster import KMeans
!conda install -c conda-forge folium=0.5.0 --yes
import folium
print('Libraries imported.')
toronto_neighborhoods.shape
toronto_neighborhoods.head()
address = 'Toronto, Ontario Canada'
geolocator = Nominatim(user_agent="toronto_explorer") # newer geopy versions require a user_agent string; the value here is an arbitrary label
location = geolocator.geocode(address)
latitude = location.latitude
longitude = location.longitude
print('The geographical coordinates of Toronto, Canada are {}, {}.'.format(latitude, longitude))
Toronto_map = folium.Map(location=[latitude, longitude], zoom_start=11)
for lat, lng, borough, neighborhood in zip(toronto_neighborhoods['Latitude'], toronto_neighborhoods['Longitude'], toronto_neighborhoods['Borough'], toronto_neighborhoods['Neighbourhood']):
label = '{}, {}'.format(neighborhood, borough)
label = folium.Popup(label, parse_html=True)
folium.CircleMarker(
[lat, lng],
radius=4,
popup=label,
color='blue',
fill=True,
fill_color='#87cefa',
fill_opacity=0.5,
parse_html=False).add_to(Toronto_map)
Toronto_map
toronto_data = toronto_neighborhoods[toronto_neighborhoods['Borough'].str.contains("Toronto")].reset_index(drop=True)
print(toronto_data.shape)
toronto_data.head()
Toronto_map = folium.Map(location=[latitude, longitude], zoom_start=11)
for lat, lng, label in zip(toronto_data['Latitude'], toronto_data['Longitude'], toronto_data['Neighbourhood']):
label = folium.Popup(label, parse_html=True)
folium.CircleMarker(
[lat, lng],
radius=5,
popup=label,
color='blue',
fill=True,
fill_color='#3186cc',
fill_opacity=0.7,
parse_html=False).add_to(Toronto_map)
Toronto_map
CLIENT_ID = 'FXEYNHRJOFDACV4FOAQHYBDCGPX20KYHTPJFCCGHJI0JW10H'
CLIENT_SECRET = 'W330DBT11XATMTZJZS0441EHZ4TL24OXGZGQ0ZQUASAM4SZE'
LIMIT = 30
VERSION = '20180605' # Foursquare API version date; required by getNearbyVenues below but never defined in the original (any recent YYYYMMDD works)
print('Your credentials:')
print('CLIENT_ID: ' + CLIENT_ID)
print('CLIENT_SECRET:' + CLIENT_SECRET)
def getNearbyVenues(names, latitudes, longitudes, radius=500):
venues_list=[]
for name, lat, lng in zip(names, latitudes, longitudes):
print(name)
url = 'https://api.foursquare.com/v2/venues/explore?&client_id={}&client_secret={}&v={}&ll={},{}&radius={}&limit={}'.format(
CLIENT_ID,
CLIENT_SECRET,
VERSION,
lat,
lng,
radius,
LIMIT)
results = requests.get(url).json()["response"]['groups'][0]['items']
venues_list.append([(
name,
lat,
lng,
v['venue']['name'],
v['venue']['location']['lat'],
v['venue']['location']['lng'],
v['venue']['categories'][0]['name']) for v in results])
nearby_venues = pd.DataFrame([item for venue_list in venues_list for item in venue_list])
nearby_venues.columns = ['Neighbourhood',
'Neighborhood Latitude',
'Neighborhood Longitude',
'Venue',
'Venue Latitude',
'Venue Longitude',
'Venue Category']
return(nearby_venues)
toronto_neighborhoods = toronto_data
toronto_venues = getNearbyVenues(names=toronto_neighborhoods['Neighbourhood'],
latitudes=toronto_neighborhoods['Latitude'],
longitudes=toronto_neighborhoods['Longitude']
)
print(toronto_venues.shape)
toronto_venues.head()
toronto_venues.groupby('Neighbourhood').count()
print('There are {} uniques categories.'.format(len(toronto_venues['Venue Category'].unique())))
toronto_onehot = pd.get_dummies(toronto_venues[['Venue Category']], prefix="", prefix_sep="")
toronto_onehot['Neighbourhood'] = toronto_venues['Neighbourhood']
fixed_columns = [toronto_onehot.columns[-1]] + list(toronto_onehot.columns[:-1])
toronto_onehot = toronto_onehot[fixed_columns]
toronto_onehot.head()
toronto_onehot.shape
toronto_grouped = toronto_onehot.groupby('Neighbourhood').mean().reset_index()
toronto_grouped
toronto_grouped.shape
num_top_venues = 5
for neigh in toronto_grouped['Neighbourhood']:
print("----"+neigh+"----")
temp = toronto_grouped[toronto_grouped['Neighbourhood'] == neigh].T.reset_index()
temp.columns = ['venue','freq']
temp = temp.iloc[1:]
temp['freq'] = temp['freq'].astype(float)
temp = temp.round({'freq': 2})
print(temp.sort_values('freq', ascending=False).reset_index(drop=True).head(num_top_venues))
print()
def return_most_common_venues(row, num_top_venues):
row_categories = row.iloc[1:]
row_categories_sorted = row_categories.sort_values(ascending=False)
return row_categories_sorted.index.values[0:num_top_venues]
num_top_venues = 10
indicators = ['st', 'nd', 'rd']
# create columns according to number of top venues
columns = ['Neighbourhood']
for ind in np.arange(num_top_venues):
try:
columns.append('{}{} Most Common Venue'.format(ind+1, indicators[ind]))
except:
columns.append('{}th Most Common Venue'.format(ind+1))
# create a new dataframe
neighborhoods_venues_sorted = pd.DataFrame(columns=columns)
neighborhoods_venues_sorted['Neighbourhood'] = toronto_grouped['Neighbourhood']
for ind in np.arange(toronto_grouped.shape[0]):
neighborhoods_venues_sorted.iloc[ind, 1:] = return_most_common_venues(toronto_grouped.iloc[ind, :], num_top_venues)
neighborhoods_venues_sorted.shape<jupyter_output><empty_output>
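`KMeans` is imported above but not yet applied; a minimal sketch of the clustering step, assuming 5 clusters (a tunable choice):

```python
# Cluster neighbourhoods by their venue-category profile (sketch; k=5 is an assumption).
kclusters = 5
toronto_grouped_clustering = toronto_grouped.drop('Neighbourhood', axis=1)
kmeans = KMeans(n_clusters=kclusters, random_state=0).fit(toronto_grouped_clustering)
# Attach cluster labels to the venue summary and merge back with the coordinates.
neighborhoods_venues_sorted.insert(0, 'Cluster Labels', kmeans.labels_)
toronto_merged = toronto_data.join(neighborhoods_venues_sorted.set_index('Neighbourhood'), on='Neighbourhood')
toronto_merged.head()
```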
|
no_license
|
/Neighborhoods in Toronto.ipynb
|
munyee63/Coursera_Capstone
| 3 |
<jupyter_start><jupyter_text># K Means Iterations
For each iteration, display the classifier label for each point, the region (convex hull) for each label, and where the next centroid will be.<jupyter_code>import numpy as np
import matplotlib.pyplot as plt
from scipy import linalg
from scipy.spatial import ConvexHull
import seaborn as sns
%matplotlib inline
sns.set_style("ticks")
def kmeans_iter_graph (point_list=[], *arg, k, out_file):
# combine the lists
points = np.concatenate([p for p in point_list])
colors = ['b', 'g', 'r', 'y', 'm', 'k', 'c']
# select k random points
centroids = np.random.uniform(np.min(points[:,0]),np.max(points[:,1]),(k,2))
plt.figure (figsize=(8,8))
plt.plot(points[:,0], points[:,1], 'bo')
plt.plot(centroids[:,0],centroids[:,1],'r*',markersize=20)
plt.title('Initial Random Centroids')
plt.savefig(out_file+'000.png',dpi=96)
old_centroids = []
counter = 0
while not np.array_equal(centroids,old_centroids):
# for each point in points, get the distance from each centroid and find
# the nearest centroid.
counter+=1
old_centroids = centroids
distances = [[linalg.norm(point-centroids[j]) for j in range(k)] for point in points]
labels = np.asarray([np.argmin(distances[i]) for i in range(len(points))])
# now get the new centroids from each label group
centroids = []
plt.figure(figsize=(8,8))
plt.title('Iteration #'+str(counter))
for i in range(k):
current_points = points[np.where(labels==i)]
hull=ConvexHull(current_points)
centroids.append(np.mean(current_points,axis=0))
plt.plot(current_points[:,0],current_points[:,1], '%so'%colors[i],alpha=0.2)
plt.plot(centroids[i][0],centroids[i][1], '%s*'%colors[i], markersize=20)
plt.plot(old_centroids[i][0],old_centroids[i][1], '%s*'%colors[i], markersize=20, alpha=0.3)
plt.plot([centroids[i][0],old_centroids[i][0]],[centroids[i][1],old_centroids[i][1]], '%s-'%colors[i], alpha=1.0)
for simplex in hull.simplices:
plt.plot(current_points[simplex,0],current_points[simplex,1], '%s--'%colors[i],alpha=0.3)
plt.title('Iteration ' + str(counter))
plt.savefig(out_file+'%03d'%counter+'.png',dpi=96)
print('Final Centroid Coordinates:')
print(centroids)<jupyter_output><empty_output><jupyter_text>Try it with 3 randomly generated regions centered around (0,0), (5,5) and (10,10)<jupyter_code>points1 = np.random.standard_normal((100,2))
points2 = np.random.normal(10,2,(100,2))
points3 = np.random.normal(5,2,(100,2))
point_list = [points1,points2,points3]
kmeans_iter_graph(point_list,k=3,out_file='example1_')<jupyter_output>Final Centroid Coordinates:
[array([ 0.24796396, 0.11090377]), array([ 5.03920599, 5.86426901]), array([ 10.12017354, 10.14535921])]
<jupyter_text>Four regions around (5,5), (10,10), (15,15), and (20,20) with a bunch of extra noise centered on (12.5,12.5). Let's see if KMeans converges towards the four regions.<jupyter_code>p1 = np.random.normal(5,2,(100,2))
p2 = np.random.normal(10,2,(100,2))
p3 = np.random.normal(15,2,(100,2))
p4 = np.random.normal(20,2,(100,2))
p5 = np.random.normal(12.5,5,(50,2))
pl2 = [p1,p2,p3,p4,p5]
kmeans_iter_graph(pl2,k=4,out_file='example2_')<jupyter_output>Final Centroid Coordinates:
[array([ 5.20803973, 5.00411462]), array([ 20.04790979, 19.79497467]), array([ 14.75365572, 14.6691234 ]), array([ 9.51401797, 9.70890713])]
<jupyter_text>Now for centroids not on the 45 degree line...<jupyter_code>pp1 = np.concatenate((np.random.normal(5,2,(100,1)),np.random.normal(8,2,(100,1))),axis=1)
pp2 = np.concatenate((np.random.normal(7,2,(100,1)),np.random.normal(15,2,(100,1))),axis=1)
pp3 = np.concatenate((np.random.normal(15,2,(100,1)),np.random.normal(5,2,(100,1))),axis=1)
pp4 = np.concatenate((np.random.normal(14,2,(100,1)),np.random.normal(13,2,(100,1))),axis=1)
pp5 = np.random.normal(10,5,(250,2))
pl3 = [pp1,pp2,pp3,pp4,pp5]
kmeans_iter_graph(pl3,k=4,out_file='example3_')<jupyter_output>Final Centroid Coordinates:
[array([ 7.25428578, 14.85833453]), array([ 5.32840506, 7.64823214]), array([ 14.67325263, 4.67971284]), array([ 14.35333394, 13.23570452])]
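A minimal cross-check sketch, assuming scikit-learn is available: fitting `sklearn.cluster.KMeans` on the same points should give centroids close to the ones above (up to label ordering).

```python
# Cross-check the hand-rolled loop with scikit-learn's KMeans on the same point set.
from sklearn.cluster import KMeans

points = np.concatenate(pl3)
km = KMeans(n_clusters=4, n_init=10, random_state=0).fit(points)
print('scikit-learn centroids:')
print(km.cluster_centers_)
```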
|
no_license
|
/K Means Iteration.ipynb
|
keithqu/illustrative
| 4 |
<jupyter_start><jupyter_text># Geely Automobile Pricing Model
- Debasis Garabadu## Business Objective
Chinese automobile company Geely Auto aspires to set up a car manufacturing unit in the US market and compete with its US and European counterparts. The company wants to know:
> - Which variables are significant in predicting the price of a car
> - How well those variables describe the price of a car## Analysis Process
The analysis is divided into ten main parts:
> 1. Data Sourcing or Data Understanding
2. Data cleaning and Derived Metrics
3. Data Analysis (Univariate, Bivariate Analysis)
4. Model Preparation
5. Training and Testing set Data Split
6. Model Building
7. Residual Analysis of the Train Data
8. Making Predictions
9. Model Evaluation
10. Final Inference### Prerequisites
> - numpy version: 1.16.1
- pandas version: 0.24.1
- seaborn version: 0.9.0### Import Libraries<jupyter_code># Import libraries
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
import warnings
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import MinMaxScaler
from sklearn.feature_selection import RFE
from sklearn.linear_model import LinearRegression
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor
# To display all the columns
pd.options.display.max_columns = None
# To display all the rows
pd.options.display.max_rows = None
# To map Empty Strings or numpy.inf as Na Values
pd.options.mode.use_inf_as_na = True
# Set Precision to 8 for better readability
pd.set_option('precision', 8)
pd.options.display.float_format = '{:.4f}'.format
pd.options.display.expand_frame_repr = False
# Set Style
sns.set(style = "whitegrid")
# Ignore Warnings
warnings.filterwarnings('ignore')<jupyter_output><empty_output><jupyter_text>## 1. Data Sourcing<jupyter_code># 1.1 Import File
car = pd.read_csv('CarPrice_Assignment.csv', sep = ',', encoding = 'ISO-8859-1', skipinitialspace = True)
# Data Glimpse
car.head()
# 1.2 Set Car ID as the index and subtract number one to start the index from zero.
car.set_index('car_ID', inplace = True)
car.index = car.index - 1
# Data Glimpse
car.head()<jupyter_output><empty_output><jupyter_text>## 2. Data Cleaning and Derived Metrics<jupyter_code># 2.1 Initial DataFrame Shape (Observations, Variables)
car.shape<jupyter_output><empty_output><jupyter_text> Delete Unnecessary Rows and Columns<jupyter_code># 2.2 Get the Percentage Rate of NaN or NULL values in each column
round(car.isnull().sum(axis = 0)/len(car), 2)*100<jupyter_output><empty_output><jupyter_text>> None of the Columns has NULL or NaN values. So, nothing needs to be dropped.<jupyter_code># 2.3 Get the Percentage Rate of NaN or NULL values in each rows
rows = pd.DataFrame(data = round(car.isnull().sum(axis = 1)/len(car), 2)*100, columns = ['null_percent'])
rows[rows.null_percent > 0].index<jupyter_output><empty_output><jupyter_text>> None of the Rows has NULL or NaN values. So, nothing needs to be dropped.<jupyter_code># 2.4 Dropping any Duplicate Rows, if any (Keeping the First Value and dropping the rest)
car.drop_duplicates(keep = 'first', inplace = True)
# 2.5 Final DataFrame Shape (Observations, Variables)
car.shape<jupyter_output><empty_output><jupyter_text>> The initial shape (before data cleaning) and the final shape (after data cleaning) remain the same.<jupyter_code># 2.6 Concise Summary
car.info()
# 2.7 Getting Car Company from CarName Variable
car['carcompany'] = car.CarName.str.split(' ').str[0]
# 2.8 Dropping Variable CarName
car.drop('CarName', axis = 1, inplace = True)
# Data Glimpse
car.head()
# 2.9 Arranging the Columns
col_list = ['symboling', 'carcompany']
car = car.reindex(columns = (col_list + [x for x in car.columns if x not in col_list]))
# Data Glimpse
car.head()<jupyter_output><empty_output><jupyter_text>The derived variable `carcompany` has some misspellings. These can be corrected as follows:
- **maxda** can be corrected to **mazda**
- **Nissan** can be corrected to **nissan**
- **porcshce** can be corrected to **porsche**
- **toyouta** can be corrected to **toyota**
- **vokswagen** and **vw** can be corrected to **volkswagen**<jupyter_code># 2.10 Initial Value of derived variable carcompany
car.carcompany.value_counts(sort = False, dropna = False)
# 2.11 Misspelling Correction
car.loc[car.carcompany == 'maxda', 'carcompany'] = 'mazda'
car.loc[car.carcompany == 'Nissan', 'carcompany'] = 'nissan'
car.loc[car.carcompany == 'porcshce', 'carcompany'] = 'porsche'
car.loc[car.carcompany == 'toyouta', 'carcompany'] = 'toyota'
car.loc[((car.carcompany == 'vokswagen') | (car.carcompany == 'vw')), 'carcompany'] = 'volkswagen'
# 2.12 Final Value of derived variable carcompany
car.carcompany.value_counts(sort = False, dropna = False)
# 2.13 Concise summary
car.info()
# 2.14 Mapping all object Data types to category
category_list = car.select_dtypes(include = [np.object]).columns.to_list()
car[category_list] = car[category_list].astype('category')
# Concise Summary
car.info()<jupyter_output><class 'pandas.core.frame.DataFrame'>
Int64Index: 205 entries, 0 to 204
Data columns (total 25 columns):
symboling 205 non-null int64
carcompany 205 non-null category
fueltype 205 non-null category
aspiration 205 non-null category
doornumber 205 non-null category
carbody 205 non-null category
drivewheel 205 non-null category
enginelocation 205 non-null category
wheelbase 205 non-null float64
carlength 205 non-null float64
carwidth 205 non-null float64
carheight 205 non-null float64
curbweight 205 non-null int64
enginetype 205 non-null category
cylindernumber 205 non-null category
enginesize 205 non-null int64
fuelsystem 205 non-null category
boreratio 205 non-null float64
stroke 205 non-null float64
compressionratio 205 non-null float64
horsepower 205 non-null int64
peakrpm 205 non-null[...]<jupyter_text>## 3. Data Analysis<jupyter_code>#DataFrame Shape (Observations, Variables)
car.shape
# Generate descriptive statistics
car.describe()<jupyter_output><empty_output><jupyter_text>### Univariate Analysis<jupyter_code># Custom Function to add data labels
def add_data_labels(ax, spacing = 5):
# For each bar: Place a label
for rect in ax.patches:
# Get X and Y placement of label from rect.
y_value = rect.get_height()
x_value = rect.get_x() + rect.get_width() / 2
# Number of points between bar and label. Change to your liking.
space = spacing
# Vertical alignment for positive values
va = 'bottom'
# If value of bar is negative: Place label below bar
if y_value < 0:
# Invert space to place label below
space *= -1
# Vertically align label at top
va = 'top'
# Use Y value as label and format number with one decimal place
label = "{:.2f}%".format(y_value)
# Create annotation
plt.annotate(
label, # Use `label` as label
(x_value, y_value), # Place label at end of the bar
xytext = (0, space), # Vertically shift label by `space`
textcoords = "offset points", # Interpret `xytext` as offset in points
ha = 'center', # Horizontally center label
va = va) # Vertically align label differently for positive and negative values.
# 3.1 carcompany vs Percentage Rate
series = car['carcompany'].value_counts(dropna = False, normalize = True) * 100
print(series)
print('\n')
plt.figure(figsize = (17, 12))
ax = sns.barplot(x = series.index, y = series.values, order = series.sort_index().index)
plt.xticks(rotation = 90)
plt.title('Car Company vs Percentage Rate')
plt.xlabel('carcompany', labelpad = 15)
plt.ylabel('Percentage Rate', labelpad = 10)
# Call Custom Function
add_data_labels(ax)
plt.show()
# 3.2 Univariate Plot Analysis of other categorical variables vs Percentage Rate
category_list = list(car.columns[car.dtypes == 'category'])
counter = 1
plt.figure(figsize = (20, 18))
for col_list in category_list:
if col_list != 'carcompany':
series = car[col_list].value_counts(normalize = True) * 100
plt.subplot(3, 3, counter)
ax = sns.barplot(x = series.index, y = series.values, order = series.sort_index().index)
plt.xlabel(col_list, labelpad = 15)
plt.ylabel('Percentage Rate', labelpad = 10)
# Call Custom Function
add_data_labels(ax)
counter += 1
plt.subplots_adjust(hspace = 0.3)
plt.subplots_adjust(wspace = 0.5)
plt.show()<jupyter_output><empty_output><jupyter_text>> #### Univariate Analysis Findings:
>> 1. Car Company having highest number of vehicles:
- toyota (15.61%)
- nissan (8.78%)
- mazda (8.29%)
- honda and mitsubishi (6.34%)
>> 2. Most preferred fuel type:
- gas (90.24%)
>> 3. Most preferred aspiration:
- std (81.95%)
>> 4. Most preferred door number:
- four (56.10%)
>> 5. Top 3 preferred car body:
- sedan (46.83%)
- hatchback (34.15%)
- wagon (12.20%)
>> 6. Top 2 preferred drive wheel:
- fwd (58.54%)
- rwd (37.07%)
>> 7. Most preferred engine location:
- front (98.54%)
>> 8. Most preferred engine type:
- ohc (77.20%)
>> 9. Most preferred cylinder number:
- four (77.56%)
>> 10. Most preferred fuel system:
- mpfi (77.20%)### Bivariate Analysis - Categorical<jupyter_code># 3.3 Bivariate Plot Analysis of carcompany vs price
print(car.groupby(by = 'carcompany').price.median().sort_values(ascending = False))
print('\n')
plt.figure(figsize=(15, 12))
plt.subplot(2, 1, 1)
sns.lineplot(x = 'carcompany', y = 'price', data = car, estimator = np.median)
plt.xticks(rotation = 90)
plt.title('Car Company vs Price')
plt.xlabel('carcompany', labelpad = 15)
plt.ylabel('Price', labelpad = 10)
plt.subplot(2, 1, 2)
sns.boxplot(x = 'carcompany', y = 'price', data = car)
plt.xticks(rotation = 90)
plt.subplots_adjust(hspace = 0.5)
plt.show()
# 3.4 Bivariate Plot Analysis of other categorical variables vs price
category_list = list(car.columns[car.dtypes == 'category'])
counter = 1
plt.figure(figsize = (20, 18))
for col_list in category_list:
if col_list != 'carcompany':
plt.subplot(3, 3, counter)
sns.boxplot(x = col_list, y = 'price', data = car)
counter += 1
plt.subplots_adjust(hspace = 0.3)
plt.subplots_adjust(wspace = 0.5)
plt.show()<jupyter_output><empty_output><jupyter_text>> #### Bivariate Analysis - Categorical Findings:
>> 1. Top 5 Car Price (highest to lowest):
- jaguar
- buick
- porsche
- bmw
- volvo
>> 2. There are some outliers, reflecting that a few companies also produce cars in a much higher price range than the normal market price.
>> 3. Fuel type has no major effect on price, except that the most expensive cars run on gas.
>> 4. Turbo aspiration is priced a bit higher than standard aspiration, with the exception of expensive cars.
>> 5. rwd (rear wheel drive) is more expensive than 4wd (four wheel drive) and fwd (front wheel drive).
>> 6. Rear engine location is more expensive than front.
>> 7. Price increases as the cylinder number increases.### Bivariate Analysis - Continuous<jupyter_code># 3.5 Generate descriptive statistics of continuous variables
car.describe()
# 3.6 Generate Pairplot
sns.pairplot(car)
plt.show()
# 3.7 Generate Correlation
car.corr()
# 3.8 Generate Cluster or Heat map
kwargs = {'annot': True}
sns.clustermap(car.corr(), center = 0, linewidths = 0.75, figsize = (15, 12), **kwargs)
plt.show()<jupyter_output><empty_output><jupyter_text>> #### Bivariate Analysis - Continuous Findings:
>> 1. Price is positively correlated with:
- enginesize (0.87)
- curbweight (0.84)
- horsepower (0.81)
- carwidth (0.76)
- carlength (0.68)
>> 2. Price is negatively correlated with:
- highwaympg (-0.7)
- citympg (-0.69)
>> 3. There is a high correlation between `highwaympg` and `citympg` (Corr value = 0.97).
>> 4. There is a high correlation between `carlength`, `carwidth`, `curbweight`, `enginesize` and `horsepower` (Corr value > 0.8)### Multicolinearity RemovalFrom the heatmap, we got:
> - `carlength` is highly correlated with `carwidth` (corr = 0.84)
- `carlength` is highly correlated with `curbweight` (corr = 0.88)
- `carwidth` is highly correlated with `curbweight` (corr = 0.87)
- `curbweight` is highly correlated with `enginesize` (corr = 0.85)
- `enginesize` is highly correlated with `horsepower` (corr = 0.81)
- `highwaympg` is highly correlated with `citympg` (corr = 0.97)
From the above, we can drop `carlength` and `carwidth` as both are highly correlated with `curbweight`. Similarly, both `highwaympg` and `citympg` can be dropped.
There are some additional highly correlated variables, like `curbweight`, `enginesize` and `horsepower`, that are also correlated with the outcome variable `price`. We are not dropping these variables at the initial stage. During model preparation, if a variable turns out to be insignificant (p > 0.05) or to have a high VIF, we can drop it at that point, one by one.<jupyter_code># 3.9 Dropping the original variables
car.drop(columns = ['carlength', 'carwidth', 'highwaympg', 'citympg'], axis = 1, inplace = True)
# Data Glimpse
car.head()
# 3.10 Generate new correlation and heatmap plotting
car.corr()
# 3.11 Generate Cluster or Heat map
kwargs = {'annot': True}
sns.clustermap(car.corr(), center = 0, linewidths = 0.75, figsize = (15, 12), **kwargs)
plt.show()<jupyter_output><empty_output><jupyter_text>### Outlier: Detection<jupyter_code># 3.11 Distribution Plot of numerical variables
number_list = list(car.columns[car.dtypes != 'category'])
counter = 1
plt.figure(figsize = (20, 18))
for col_list in number_list:
if col_list != 'price':
plt.subplot(5, 3, counter)
sns.distplot(car[col_list], hist = True, kde = True, color = 'g')
counter += 1
plt.subplots_adjust(hspace = 0.4)
plt.subplots_adjust(wspace = 0.5)
plt.show()<jupyter_output><empty_output><jupyter_text>There are some variables (like `compressionratio`, `wheelbase`, `horsepower`, etc.) that are not normally distributed, that is, outliers are present. Because there are relatively few observations, we are not deleting rows to correct the outliers. We will instead scale both the train and test data to handle this.## 4. Model Preparation**Symboling:**
As per the data dictionary, the variable `symboling` is an insurance risk rating, with +3 being risky and -3 being pretty safe. Since it is a categorical field, the values can be divided into categories like:
- **risky** for range [2, 3]
- **moderate** for range [0, 1]
- **safe** for range [-1, -3]<jupyter_code># Initial Value of variable symboling
car.symboling.value_counts(sort = False, dropna = False)
# Setting values in the variable symboling
car['symboling'] = car['symboling'].map({ 3: 'risky',
2: 'risky',
1: 'moderate',
0: 'moderate',
-1: 'safe',
-2: 'safe',
-3: 'safe'})
# Mapping the column as Category data type
car['symboling'] = car['symboling'].astype('category')
# Final Value of variable symboling
car.symboling.value_counts(sort = False, dropna = False)<jupyter_output><empty_output><jupyter_text>**Car Company:**
The variable `carcompany` has a wide range of labels (22 unique values). Encoding it with **pandas get_dummies** will create 22 additional columns, or 21 after dropping one column (drop_first = True). This will ultimately create:
> 1. Complexity
> 2. Hard to get the list of car companies that form a linear regression with price.
As it is a categorical field related to price (based on domain knowledge, the brand value of a car company affects its price), the car company values can be divided, based on the central tendency (median) of price, into 3 main categories:
> - **premium_tier**, when the car company median price is greater than 20,000
> - **mid_tier**, when the car company median price is between 10,000 and 20,000
> - **low_tier**, when the car company median price is less than 10,000<jupyter_code># Summary Statistics of carcompany with price
car[['carcompany', 'price']].describe()
# Summary Statistics price for each carcompany
car.groupby('carcompany').price.median().sort_values(ascending = False)
# Custom Function to get carcompany_category
def get_carcompany_category():
for company_list in car.carcompany.unique().to_list():
df = car[car['carcompany'] == company_list]
if df.price.median() > 20000:
            lv_category = 'premium_tier'
elif df.price.median() > 10000:
lv_category = 'mid_tier'
else:
lv_category = 'low_tier'
car.loc[car['carcompany'] == company_list, 'carcompany_category'] = lv_category
# Mapping the column as Category data type
car['carcompany_category'] = car['carcompany_category'].astype('category')
# Call Function
get_carcompany_category()
# Data Glimpse
pd.crosstab(car.carcompany, car.carcompany_category)<jupyter_output><empty_output><jupyter_text>**Binary Encoding:**
Mapping the variables `fueltype`, `aspiration`, `enginelocation` and `doornumber` to binary values (0, 1).<jupyter_code>car['fueltype'] = car.fueltype.map({'diesel': 0, 'gas': 1})
car['aspiration'] = car.aspiration.map({'turbo': 0, 'std': 1})
car['enginelocation'] = car.enginelocation.map({'rear': 0, 'front': 1})
car['doornumber'] = car.doornumber.map({'two': 0, 'four': 1})<jupyter_output><empty_output><jupyter_text>**Dummy Variable:**
Setting dummy values to variable - `symboling`, `carbody`, `drivewheel`, `fuelsystem`, `cylindernumber`, `enginetype` and `carcompany_category`<jupyter_code>col = ['symboling', 'carbody', 'drivewheel', 'fuelsystem', 'cylindernumber', 'enginetype', 'carcompany_category']
status = pd.get_dummies(car[col])
car = pd.concat([car, status], axis = 1)
# Based on the Bivariate Analysis (Categorical) with price (Box Plot done in the top section), we can go for
# dropping one dummy variable from each section that is least affected with price.
drop_list = ['symboling_risky', 'carbody_hatchback', 'drivewheel_4wd', 'fuelsystem_1bbl', 'cylindernumber_three',
'enginetype_rotor', 'carcompany_category_low_tier']
car.drop(columns = drop_list, axis = 1, inplace = True)
# Data Glimpse
car.head()
car.info()<jupyter_output><class 'pandas.core.frame.DataFrame'>
Int64Index: 205 entries, 0 to 204
Data columns (total 51 columns):
symboling 205 non-null category
carcompany 205 non-null category
fueltype 205 non-null int64
aspiration 205 non-null int64
doornumber 205 non-null int64
carbody 205 non-null category
drivewheel 205 non-null category
enginelocation 205 non-null int64
wheelbase 205 non-null float64
carheight 205 non-null float64
curbweight 205 non-null int64
enginetype 205 non-null category
cylindernumber 205 non-null category
enginesize 205 non-null int64
fuelsystem 205 non-null category
boreratio 205 non-n[...]<jupyter_text>**Non-Numerical Variables Drop:**
Dropping all left non-numerical or categorical variables<jupyter_code>category_list = list(car.columns[car.dtypes == 'category'])
car.drop(columns = category_list, inplace = True)
# Data Glimpse
car.head()
# Dataframe Shape
car.shape
# Concise Summary
car.info()<jupyter_output><class 'pandas.core.frame.DataFrame'>
Int64Index: 205 entries, 0 to 204
Data columns (total 43 columns):
fueltype 205 non-null int64
aspiration 205 non-null int64
doornumber 205 non-null int64
enginelocation 205 non-null int64
wheelbase 205 non-null float64
carheight 205 non-null float64
curbweight 205 non-null int64
enginesize 205 non-null int64
boreratio 205 non-null float64
stroke 205 non-null float64
compressionratio 205 non-null float64
horsepower 205 non-null int64
peakrpm 205 non-null int64
price 205 non-null float64
symboling_moderate 205 non-null uint8
symboling_safe 205 non-null uint8
car[...]<jupyter_text>## 5. Training and Testing Set Data Splitting<jupyter_code># from sklearn.model_selection import train_test_split
# We specify this so that the train and test data set always have the same rows, respectively
np.random.seed(0)
df_train, df_test = train_test_split(car, train_size = 0.7, test_size = 0.3, random_state = 100)<jupyter_output><empty_output><jupyter_text>### Feature Scaling<jupyter_code># from sklearn.preprocessing import MinMaxScaler
# Using MinMaxScaler to scale all the numeric variables in the same scale between 0 and 1.
scaler = MinMaxScaler()
# Apply scaler() to all the columns except the 'yes-no' and 'dummy' variables
num_vars = ['wheelbase', 'carheight', 'curbweight', 'enginesize','boreratio', 'stroke', 'compressionratio',
'horsepower', 'peakrpm', 'price']
df_train[num_vars] = scaler.fit_transform(df_train[num_vars])
df_train.head()
# Descriptive Statistics
df_train.describe()<jupyter_output><empty_output><jupyter_text>### Dividing into X and Y sets for the model building<jupyter_code>y_train = df_train.pop('price')
X_train = df_train
# Get Dimension
X_train.shape<jupyter_output><empty_output><jupyter_text>## 6. Model Building
A mixed approach will be used: first the LinearRegression function from scikit-learn, for its compatibility with RFE (which is a utility from sklearn), and then statsmodels for the statistical analysis of the model.### RFE (Recursive Feature Elimination)<jupyter_code># Importing RFE and LinearRegression
# from sklearn.feature_selection import RFE
# from sklearn.linear_model import LinearRegression
# Running RFE with the output number of variables equal to 15
lm = LinearRegression()
lm.fit(X_train, y_train)
rfe = RFE(lm, 15) # running RFE
rfe = rfe.fit(X_train, y_train)
list(zip(X_train.columns, rfe.support_, rfe.ranking_))
col = X_train.columns[rfe.support_]
col
X_train.columns[~rfe.support_]<jupyter_output><empty_output><jupyter_text>### Building model using statsmodel, for the detailed statistics<jupyter_code># Define a Custom Function for printing the statsmodel summary
def get_stats(X_train_rfe):
# Adding a constant variable
# import statsmodels.api as sm
X_train_rfe = sm.add_constant(X_train_rfe)
# Running the linear model
lm = sm.OLS(y_train,X_train_rfe).fit()
# Print Summary of linear model
print(lm.summary())
return lm
# Custom Function to get VIF
def get_VIF(X_train):
#from statsmodels.stats.outliers_influence import variance_inflation_factor
vif = pd.DataFrame()
X = X_train
vif['Features'] = X.columns
vif['VIF'] = [variance_inflation_factor(X.values, i) for i in range(X.shape[1])]
vif['VIF'] = round(vif['VIF'], 2)
vif = vif.sort_values(by = "VIF", ascending = False)
return(vif)
# Creating X_test dataframe with RFE selected variables
X_train_rfe = X_train[col]
# Call Custom Function to get OLS Regression Results
lm = get_stats(X_train_rfe)
# Call Custom Function to get VIF
vif = get_VIF(X_train_rfe)
vif<jupyter_output><empty_output><jupyter_text>> #### Findings:
>> - `boreratio` has p-value > 0.05 and high VIF (> 5). So, it is insignificant and can be dropped.<jupyter_code># Drop Column
X_train_rfe.drop(columns = 'boreratio', axis = 1, inplace = True)
# Call Custom Function to get stats
lm = get_stats(X_train_rfe)
# Call Custom Function to get VIF
vif = get_VIF(X_train_rfe)
vif<jupyter_output><empty_output><jupyter_text>> #### Findings:
>> - `cylindernumber_eight` has p-value > 0.05. So, it is insignificant and can be dropped.<jupyter_code># Drop Column
X_train_rfe.drop(columns = 'cylindernumber_eight', axis = 1, inplace = True)
# Call Custom Function to get stats
lm = get_stats(X_train_rfe)
# Call Custom Function to get VIF
vif = get_VIF(X_train_rfe)
vif<jupyter_output><empty_output><jupyter_text>> #### Findings:
>> - `cylindernumber_twelve` has p-value > 0.05. So, it is insignificant and can be dropped.<jupyter_code># Drop Column
X_train_rfe.drop(columns = 'cylindernumber_twelve', axis = 1, inplace = True)
# Call Custom Function to get stats
lm = get_stats(X_train_rfe)
# Call Custom Function to get VIF
vif = get_VIF(X_train_rfe)
vif<jupyter_output><empty_output><jupyter_text>> #### Findings:
>> - `enginesize` has p-value > 0.05 and high VIF (> 10). So, it is insignificant and can be dropped.<jupyter_code># Drop Column
X_train_rfe.drop(columns = 'enginesize', axis = 1, inplace = True)
# Call Custom Function to get stats
lm = get_stats(X_train_rfe)
# Call Custom Function to get VIF
vif = get_VIF(X_train_rfe)
vif<jupyter_output><empty_output><jupyter_text>> #### Findings:
>> - `stroke` has p-value > 0.05 and high VIF (> 10). So, it is insignificant and can be dropped.<jupyter_code># Drop Column
X_train_rfe.drop(columns = 'stroke', axis = 1, inplace = True)
# Call Custom Function to get stats
lm = get_stats(X_train_rfe)
# Call Custom Function to get VIF
vif = get_VIF(X_train_rfe)
vif<jupyter_output><empty_output><jupyter_text>> #### Findings:
>> - `cylindernumber_five` has p-value > 0.05. So, it is insignificant and can be dropped.<jupyter_code># Drop Column
X_train_rfe.drop(columns = 'cylindernumber_five', axis = 1, inplace = True)
# Call Custom Function to get stats
lm = get_stats(X_train_rfe)
# Call Custom Function to get VIF
vif = get_VIF(X_train_rfe)
vif<jupyter_output><empty_output><jupyter_text>> #### Findings:
>> - `cylindernumber_six` has p-value > 0.05. So, it is insignificant and can be dropped.<jupyter_code># Drop Column
X_train_rfe.drop(columns = 'cylindernumber_six', axis = 1, inplace = True)
# Call Custom Function to get stats
lm = get_stats(X_train_rfe)
# Call Custom Function to get VIF
vif = get_VIF(X_train_rfe)
vif<jupyter_output><empty_output><jupyter_text>> #### Findings:
>> - `curbweight` has high VIF (> 10). So, it is a multicollinear problem and can be dropped.<jupyter_code># Drop Column
X_train_rfe.drop(columns = 'curbweight', axis = 1, inplace = True)
# Call Custom Function to get stats
lm = get_stats(X_train_rfe)
# Call Custom Function to get VIF
vif = get_VIF(X_train_rfe)
vif<jupyter_output><empty_output><jupyter_text>> #### Findings:
>> - `enginelocation` has high VIF (> 10). So, it is a multicollinear problem and can be dropped.<jupyter_code># Drop Column
X_train_rfe.drop(columns = 'enginelocation', axis = 1, inplace = True)
# Call Custom Function to get stats
lm = get_stats(X_train_rfe)
# Call Custom Function to get VIF
vif = get_VIF(X_train_rfe)
vif<jupyter_output><empty_output><jupyter_text>> #### Findings:
>> - `enginetype_ohc` has p-value > 0.05. So, it is insignificant and can be dropped.<jupyter_code># Drop Column
X_train_rfe.drop(columns = 'enginetype_ohc', axis = 1, inplace = True)
# Adding a constant variable
X_train_lm = sm.add_constant(X_train_rfe)
# Running the linear model
lm = sm.OLS(y_train, X_train_lm).fit()
# Print Summary of linear model
print(lm.summary())
# Call Custom Function to get VIF
vif = get_VIF(X_train_rfe)
vif<jupyter_output><empty_output><jupyter_text>> #### Findings:
Now the VIF and p-value is within the range. Adjusted R squared is 0.91 which is quite significant. This can be concluded as the final model feature selection.<jupyter_code># Generate Cluster or Heat map
kwargs = {'annot': True}
sns.clustermap(X_train_rfe.corr(), center = 0, linewidths = 0.75, figsize = (10, 8), **kwargs)
plt.show()<jupyter_output><empty_output><jupyter_text>## 7. Residual Analysis of the Train Data
So, to check whether the error terms are also normally distributed (which is, in fact, one of the major assumptions of linear regression), let us plot the histogram of the error terms and see what it looks like.<jupyter_code>y_train_price = lm.predict(X_train_lm)
# Distribution Plot
plt.figure(figsize = (8, 6))
sns.distplot((y_train - y_train_price), bins = 20)
plt.xlabel('Errors', labelpad = 10)
plt.title('Error Terms')
plt.show()<jupyter_output><empty_output><jupyter_text>From the distribution plot, we can see that the error distribution is close to a normal distribution.## 8. Making Predictions
Applying the scaling on the test sets<jupyter_code># Apply scaler() to all the columns except the 'yes-no' and 'dummy' variables
num_vars = ['wheelbase', 'carheight', 'curbweight', 'enginesize','boreratio', 'stroke', 'compressionratio',
'horsepower', 'peakrpm', 'price']
df_test[num_vars] = scaler.transform(df_test[num_vars])
df_test.head()
# Dividing into X_test and y_test
y_test = df_test.pop('price')
X_test = df_test
# Now let's use our model to make predictions.
# Creating X_test_new dataframe by dropping variables from X_test
X_test_new = X_test[X_train_rfe.columns]
# Adding a constant variable
X_test_new = sm.add_constant(X_test_new)
# Making predictions
y_pred = lm.predict(X_test_new)<jupyter_output><empty_output><jupyter_text>## 9. Model Evaluation<jupyter_code># Plotting y_test and y_pred to understand the spread.
plt.figure(figsize = (8, 6))
sns.scatterplot(x = y_test, y = y_pred)
plt.xlabel('y_test', labelpad = 10)
plt.ylabel('y_pred', labelpad = 10)
plt.title('y_test vs y_pred')
plt.show()<jupyter_output><empty_output>
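A minimal sketch for numeric test-set metrics to complement the scatter plot, assuming scikit-learn's metrics module is available:

```python
# Report R-squared and RMSE on the (scaled) test-set predictions.
from sklearn.metrics import r2_score, mean_squared_error

print('Test R-squared:', round(r2_score(y_test, y_pred), 4))
print('Test RMSE     :', round(np.sqrt(mean_squared_error(y_test, y_pred)), 4))
```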
|
no_license
|
/Geely_Auto_Linear_Regression_Analysis/Linear_Regression_Assignment_Debasis_Garabadu.ipynb
|
debasisg88/iiitb-mlai-projects
| 38 |
<jupyter_start><jupyter_text># Convolutional Neural Networks
## Project: Write an Algorithm for a Dog Identification App
---
In this notebook, some template code has already been provided for you, and you will need to implement additional functionality to successfully complete this project. You will not need to modify the included code beyond what is requested. Sections that begin with **'(IMPLEMENTATION)'** in the header indicate that the following block of code will require additional functionality which you must provide. Instructions will be provided for each section, and the specifics of the implementation are marked in the code block with a 'TODO' statement. Please be sure to read the instructions carefully!
> **Note**: Once you have completed all of the code implementations, you need to finalize your work by exporting the Jupyter Notebook as an HTML document. Before exporting the notebook to html, all of the code cells need to have been run so that reviewers can see the final implementation and output. You can then export the notebook by using the menu above and navigating to **File -> Download as -> HTML (.html)**. Include the finished document along with this notebook as your submission.
In addition to implementing code, there will be questions that you must answer which relate to the project and your implementation. Each section where you will answer a question is preceded by a **'Question X'** header. Carefully read each question and provide thorough answers in the following text boxes that begin with **'Answer:'**. Your project submission will be evaluated based on your answers to each of the questions and the implementation you provide.
>**Note:** Code and Markdown cells can be executed using the **Shift + Enter** keyboard shortcut. Markdown cells can be edited by double-clicking the cell to enter edit mode.
The rubric contains _optional_ "Stand Out Suggestions" for enhancing the project beyond the minimum requirements. If you decide to pursue the "Stand Out Suggestions", you should include the code in this Jupyter notebook.
---
### Why We're Here
In this notebook, you will make the first steps towards developing an algorithm that could be used as part of a mobile or web app. At the end of this project, your code will accept any user-supplied image as input. If a dog is detected in the image, it will provide an estimate of the dog's breed. If a human is detected, it will provide an estimate of the dog breed that is most resembling. The image below displays potential sample output of your finished project (... but we expect that each student's algorithm will behave differently!).

In this real-world setting, you will need to piece together a series of models to perform different tasks; for instance, the algorithm that detects humans in an image will be different from the CNN that infers dog breed. There are many points of possible failure, and no perfect algorithm exists. Your imperfect solution will nonetheless create a fun user experience!
### The Road Ahead
We break the notebook into separate steps. Feel free to use the links below to navigate the notebook.
* [Step 0](#step0): Import Datasets
* [Step 1](#step1): Detect Humans
* [Step 2](#step2): Detect Dogs
* [Step 3](#step3): Create a CNN to Classify Dog Breeds (from Scratch)
* [Step 4](#step4): Create a CNN to Classify Dog Breeds (using Transfer Learning)
* [Step 5](#step5): Write your Algorithm
* [Step 6](#step6): Test Your Algorithm
---
## Step 0: Import Datasets
Make sure that you've downloaded the required human and dog datasets:
**Note: if you are using the Udacity workspace, you *DO NOT* need to re-download these - they can be found in the `/data` folder as noted in the cell below.**
* Download the [dog dataset](https://s3-us-west-1.amazonaws.com/udacity-aind/dog-project/dogImages.zip). Unzip the folder and place it in this project's home directory, at the location `/dog_images`.
* Download the [human dataset](https://s3-us-west-1.amazonaws.com/udacity-aind/dog-project/lfw.zip). Unzip the folder and place it in the home directory, at location `/lfw`.
*Note: If you are using a Windows machine, you are encouraged to use [7zip](http://www.7-zip.org/) to extract the folder.*
In the code cell below, we save the file paths for both the human (LFW) dataset and dog dataset in the numpy arrays `human_files` and `dog_files`.<jupyter_code>import numpy as np
from glob import glob
# load filenames for human and dog images
human_files = np.array(glob("/data/lfw/*/*"))
dog_files = np.array(glob("/data/dog_images/*/*/*"))
# print number of images in each dataset
print('There are %d total human images.' % len(human_files))
print('There are %d total dog images.' % len(dog_files))<jupyter_output>There are 13233 total human images.
There are 8351 total dog images.
<jupyter_text>
## Step 1: Detect Humans
In this section, we use OpenCV's implementation of [Haar feature-based cascade classifiers](http://docs.opencv.org/trunk/d7/d8b/tutorial_py_face_detection.html) to detect human faces in images.
OpenCV provides many pre-trained face detectors, stored as XML files on [github](https://github.com/opencv/opencv/tree/master/data/haarcascades). We have downloaded one of these detectors and stored it in the `haarcascades` directory. In the next code cell, we demonstrate how to use this detector to find human faces in a sample image.<jupyter_code>import cv2
import matplotlib.pyplot as plt
%matplotlib inline
# extract pre-trained face detector
face_cascade = cv2.CascadeClassifier('haarcascades/haarcascade_frontalface_alt.xml')
# load color (BGR) image
img = cv2.imread(human_files[0])
# convert BGR image to grayscale
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
# find faces in image
faces = face_cascade.detectMultiScale(gray)
# print number of faces detected in the image
print('Number of faces detected:', len(faces))
# get bounding box for each detected face
for (x,y,w,h) in faces:
# add bounding box to color image
cv2.rectangle(img,(x,y),(x+w,y+h),(255,0,0),2)
# convert BGR image to RGB for plotting
cv_rgb = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
# display the image, along with bounding box
plt.imshow(cv_rgb)
plt.show()<jupyter_output>Number of faces detected: 1
<jupyter_text>Before using any of the face detectors, it is standard procedure to convert the images to grayscale. The `detectMultiScale` function executes the classifier stored in `face_cascade` and takes the grayscale image as a parameter.
In the above code, `faces` is a numpy array of detected faces, where each row corresponds to a detected face. Each detected face is a 1D array with four entries that specifies the bounding box of the detected face. The first two entries in the array (extracted in the above code as `x` and `y`) specify the horizontal and vertical positions of the top left corner of the bounding box. The last two entries in the array (extracted here as `w` and `h`) specify the width and height of the box.
### Write a Human Face Detector
We can use this procedure to write a function that returns `True` if a human face is detected in an image and `False` otherwise. This function, aptly named `face_detector`, takes a string-valued file path to an image as input and appears in the code block below.<jupyter_code># returns "True" if face is detected in image stored at img_path
def face_detector(img_path):
img = cv2.imread(img_path)
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
faces = face_cascade.detectMultiScale(gray)
return len(faces) > 0<jupyter_output><empty_output><jupyter_text>### (IMPLEMENTATION) Assess the Human Face Detector
__Question 1:__ Use the code cell below to test the performance of the `face_detector` function.
- What percentage of the first 100 images in `human_files` have a detected human face?
- What percentage of the first 100 images in `dog_files` have a detected human face?
Ideally, we would like 100% of human images with a detected face and 0% of dog images with a detected face. You will see that our algorithm falls short of this goal, but still gives acceptable performance. We extract the file paths for the first 100 images from each of the datasets and store them in the numpy arrays `human_files_short` and `dog_files_short`.__Answer:__ 98% of the first 100 images in human files have a detected human faces.
17% of the first 100 images in dog files have a detected human face.<jupyter_code>from tqdm import tqdm
human_files_short = human_files[:100]
dog_files_short = dog_files[:100]
#-#-# Do NOT modify the code above this line. #-#-#
## TODO: Test the performance of the face_detector algorithm
## on the images in human_files_short and dog_files_short.
required_images_human = filter(face_detector, human_files_short)
required_images_dog = filter(face_detector, dog_files_short)
print('total images having human faces in human file = {}'.format(len(list(required_images_human))))
print('total images having human faces in dog file = {}'.format(len(list(required_images_dog))))
<jupyter_output>total images having human faces in human file = 98
total images having human faces in dog file = 17
<jupyter_text>We suggest the face detector from OpenCV as a potential way to detect human images in your algorithm, but you are free to explore other approaches, especially approaches that make use of deep learning :). Please use the code cell below to design and test your own face detection algorithm. If you decide to pursue this _optional_ task, report performance on `human_files_short` and `dog_files_short`.<jupyter_code>### (Optional)
### TODO: Test performance of another face detection algorithm.
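### Added sketch (hedged): one optional approach is OpenCV's default frontal-face cascade.
### Assumes haarcascades/haarcascade_frontalface_default.xml is available locally
### (it ships with standard OpenCV distributions); skipped gracefully if it is not.
alt_cascade = cv2.CascadeClassifier('haarcascades/haarcascade_frontalface_default.xml')
if not alt_cascade.empty():
    def alt_face_detector(img_path):
        gray = cv2.cvtColor(cv2.imread(img_path), cv2.COLOR_BGR2GRAY)
        return len(alt_cascade.detectMultiScale(gray)) > 0
    print(sum(alt_face_detector(f) for f in human_files_short), 'faces detected in human_files_short')
    print(sum(alt_face_detector(f) for f in dog_files_short), 'faces detected in dog_files_short')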
### Feel free to use as many code cells as needed.<jupyter_output><empty_output><jupyter_text>---
## Step 2: Detect Dogs
In this section, we use a [pre-trained model](http://pytorch.org/docs/master/torchvision/models.html) to detect dogs in images.
### Obtain Pre-trained VGG-16 Model
The code cell below downloads the VGG-16 model, along with weights that have been trained on [ImageNet](http://www.image-net.org/), a very large, very popular dataset used for image classification and other vision tasks. ImageNet contains over 10 million URLs, each linking to an image containing an object from one of [1000 categories](https://gist.github.com/yrevar/942d3a0ac09ec9e5eb3a). <jupyter_code>import torch
import torchvision.models as models
# define VGG16 model
VGG16 = models.vgg16(pretrained=True)
# check if CUDA is available
use_cuda = torch.cuda.is_available()
# move model to GPU if CUDA is available
if use_cuda:
VGG16 = VGG16.cuda()<jupyter_output><empty_output><jupyter_text>Given an image, this pre-trained VGG-16 model returns a prediction (derived from the 1000 possible categories in ImageNet) for the object that is contained in the image.### (IMPLEMENTATION) Making Predictions with a Pre-trained Model
In the next code cell, you will write a function that accepts a path to an image (such as `'dogImages/train/001.Affenpinscher/Affenpinscher_00001.jpg'`) as input and returns the index corresponding to the ImageNet class that is predicted by the pre-trained VGG-16 model. The output should always be an integer between 0 and 999, inclusive.
Before writing the function, make sure that you take the time to learn how to appropriately pre-process tensors for pre-trained models in the [PyTorch documentation](http://pytorch.org/docs/stable/torchvision/models.html).<jupyter_code>from PIL import Image
import torchvision.transforms as transforms
def VGG16_predict(img_path):
'''
Use pre-trained VGG-16 model to obtain index corresponding to
predicted ImageNet class for image at specified path
Args:
img_path: path to an image
Returns:
Index corresponding to VGG-16 model's prediction
'''
## TODO: Complete the function.
## Load and pre-process an image from the given img_path
## Return the *index* of the predicted class for that image
transform = transforms.Compose([transforms.Resize(256), transforms.CenterCrop(224), transforms.ToTensor()])
img = Image.open(img_path)
    pp_img = transform(img).to('cuda' if use_cuda else 'cpu')  # move the input to the same device as the model
#adding extra dim
final_data = pp_img.unsqueeze(0)
output = VGG16(final_data)
maxx, index = output[0].max(0)
return index # predicted class index<jupyter_output><empty_output><jupyter_text>### (IMPLEMENTATION) Write a Dog Detector
While looking at the [dictionary](https://gist.github.com/yrevar/942d3a0ac09ec9e5eb3a), you will notice that the categories corresponding to dogs appear in an uninterrupted sequence and correspond to dictionary keys 151-268, inclusive, to include all categories from `'Chihuahua'` to `'Mexican hairless'`. Thus, in order to check to see if an image is predicted to contain a dog by the pre-trained VGG-16 model, we need only check if the pre-trained model predicts an index between 151 and 268 (inclusive).
Use these ideas to complete the `dog_detector` function below, which returns `True` if a dog is detected in an image (and `False` if not).<jupyter_code>### returns "True" if a dog is detected in the image stored at img_path
def dog_detector(img_path):
## TODO: Complete the function.
index = VGG16_predict(img_path)
if index in range(151, 269):
return True
else:
return False<jupyter_output><empty_output><jupyter_text>### (IMPLEMENTATION) Assess the Dog Detector
__Question 2:__ Use the code cell below to test the performance of your `dog_detector` function.
- What percentage of the images in `human_files_short` have a detected dog?
- What percentage of the images in `dog_files_short` have a detected dog?__Answer:__ 0% of the images in human_files_short have a detected dog.
92% of the images in dog_files_short have a detected dog (92 of the first 100 dog images, per the output below).
<jupyter_code>### TODO: Test the performance of the dog_detector function
### on the images in human_files_short and dog_files_short.
required_images_human = filter(dog_detector, human_files_short)
required_images_dog = filter(dog_detector, dog_files_short)
print('total images having dog faces in human file = {}'.format(len(list(required_images_human))))
print('total images having dog faces in dog file = {}'.format(len(list(required_images_dog))))
<jupyter_output>total images having dog faces in human file = 0
total images having dog faces in dog file = 92
<jupyter_text>We suggest VGG-16 as a potential network to detect dog images in your algorithm, but you are free to explore other pre-trained networks (such as [Inception-v3](http://pytorch.org/docs/master/torchvision/models.html#inception-v3), [ResNet-50](http://pytorch.org/docs/master/torchvision/models.html#id3), etc). Please use the code cell below to test other pre-trained PyTorch models. If you decide to pursue this _optional_ task, report performance on `human_files_short` and `dog_files_short`.<jupyter_code>### (Optional)
### TODO: Report the performance of another pre-trained network.
### Feel free to use as many code cells as needed.<jupyter_output><empty_output><jupyter_text>---
## Step 3: Create a CNN to Classify Dog Breeds (from Scratch)
Now that we have functions for detecting humans and dogs in images, we need a way to predict breed from images. In this step, you will create a CNN that classifies dog breeds. You must create your CNN _from scratch_ (so, you can't use transfer learning _yet_!), and you must attain a test accuracy of at least 10%. In Step 4 of this notebook, you will have the opportunity to use transfer learning to create a CNN that attains greatly improved accuracy.
We mention that the task of assigning breed to dogs from images is considered exceptionally challenging. To see why, consider that *even a human* would have trouble distinguishing between a Brittany and a Welsh Springer Spaniel.
Brittany | Welsh Springer Spaniel
- | -
|
It is not difficult to find other dog breed pairs with minimal inter-class variation (for instance, Curly-Coated Retrievers and American Water Spaniels).
Curly-Coated Retriever | American Water Spaniel
- | -
|
Likewise, recall that labradors come in yellow, chocolate, and black. Your vision-based algorithm will have to conquer this high intra-class variation to determine how to classify all of these different shades as the same breed.
Yellow Labrador | Chocolate Labrador | Black Labrador
- | -
| |
We also mention that random chance presents an exceptionally low bar: setting aside the fact that the classes are slightly imbalanced, a random guess will provide a correct answer roughly 1 in 133 times, which corresponds to an accuracy of less than 1%.
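As a quick check of that baseline:

```python
# accuracy of guessing one of 133 breeds uniformly at random
print(1 / 133)  # ~0.0075, i.e. roughly 0.75%
```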
Remember that the practice is far ahead of the theory in deep learning. Experiment with many different architectures, and trust your intuition. And, of course, have fun!
### (IMPLEMENTATION) Specify Data Loaders for the Dog Dataset
Use the code cell below to write three separate [data loaders](http://pytorch.org/docs/stable/data.html#torch.utils.data.DataLoader) for the training, validation, and test datasets of dog images (located at `dog_images/train`, `dog_images/valid`, and `dog_images/test`, respectively). You may find [this documentation on custom datasets](http://pytorch.org/docs/stable/torchvision/datasets.html) to be a useful resource. If you are interested in augmenting your training and/or validation data, check out the wide variety of [transforms](http://pytorch.org/docs/stable/torchvision/transforms.html?highlight=transform)!<jupyter_code>import os
from torchvision import datasets
from torch.utils.data import DataLoader
### TODO: Write data loaders for training, validation, and test sets
## Specify appropriate transforms, and batch_sizes
data_transform_train = transforms.Compose([transforms.Resize(256),
transforms.CenterCrop(224),
transforms.RandomHorizontalFlip(),
transforms.RandomRotation(20),
transforms.ToTensor(),
transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])])
# going to use this in both valid and test
data_transform_valid = transforms.Compose([transforms.Resize(256),
transforms.CenterCrop(224),
transforms.ToTensor(),
transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])])
train_data_image = datasets.ImageFolder('/data/dog_images/train', transform=data_transform_train)
valid_data_image = datasets.ImageFolder('/data/dog_images/valid', transform=data_transform_valid)
test_data_image = datasets.ImageFolder('/data/dog_images/test', transform=data_transform_valid)
batch_size = 16
loaders = {'train':DataLoader(train_data_image, batch_size = batch_size,shuffle = True) ,
'valid':DataLoader(valid_data_image, batch_size = batch_size,shuffle = True) ,
'test':DataLoader(test_data_image,batch_size = batch_size,shuffle = True)}
<jupyter_output><empty_output><jupyter_text>**Question 3:** Describe your chosen procedure for preprocessing the data.
- How does your code resize the images (by cropping, stretching, etc)? What size did you pick for the input tensor, and why?
- Did you decide to augment the dataset? If so, how (through translations, flips, rotations, etc)? If not, why not?
**Answer**: * I first resize each image to 256x256 and then center-crop it to 224x224. I picked 224x224 because it is a good fit for the image size and most pretrained models use this input size as well.
* Yes, I applied data augmentation: random horizontal flips and random rotations in the range [-20, 20] degrees.
### (IMPLEMENTATION) Model Architecture
Create a CNN to classify dog breed. Use the template in the code cell below.<jupyter_code>import torch.nn as nn
import torch.nn.functional as F
# define the CNN architecture
class Net(nn.Module):
### TODO: choose an architecture, and complete the class
def __init__(self):
super(Net, self).__init__()
self.conv1 = nn.Conv2d(3, 16, 3, padding = 1)
self.conv2 = nn.Conv2d(16, 32, 3, padding = 1)
self.conv3 = nn.Conv2d(32, 64, 3, padding = 1)
self.conv4 = nn.Conv2d(64, 128, 3, padding = 1)
self.conv5 = nn.Conv2d(128, 64, 3, padding = 1)
self.conv6 = nn.Conv2d(64, 32, 3, padding = 1)
self.pool = nn.MaxPool2d(2,2)
self.linear1 = nn.Linear(7*7*32,512)
self.linear2 = nn.Linear(512,256)
self.linear3 = nn.Linear(256,133)
self.dropout = nn.Dropout(0.25)
## Define layers of a CNN
def forward(self, x):
## Define forward behavior
x = self.pool(F.relu(self.conv1(x)))
x = self.pool(F.relu(self.conv2(x)))
x = self.pool(F.relu(self.conv3(x)))
x = self.pool(F.relu(self.conv4(x)))
x = self.pool(F.relu(self.conv5(x)))
x = F.relu(self.conv6(x))
x = x.view(-1,7*7*32)
x = F.relu(self.linear1(x))
x = self.dropout(x)
x = self.linear2(x)
x = self.dropout(x)
x = self.linear3(x)
return x
#-#-# You do NOT have to modify the code below this line. #-#-#
# instantiate the CNN
model_scratch = Net()
# move tensors to GPU if CUDA is available
if use_cuda:
model_scratch.cuda()<jupyter_output><empty_output><jupyter_text>__Question 4:__ Outline the steps you took to get to your final CNN architecture and your reasoning at each step. __Answer:__
- First I tried a simpler architecture with fewer convolutional layers, but it did not give the desired result.
- I then added more convolutional layers and used max pooling to halve the spatial dimensions after most of them (the size bookkeeping is sketched below).
- Next I used fully connected layers with a modest number of neurons.
- Finally I added dropout layers to reduce overfitting.
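A quick check of the spatial-size arithmetic behind the `x.view(-1, 7*7*32)` flattening in the model above (bookkeeping only, not part of the required solution):

```python
# five 2x2 max-pool layers each halve the 224x224 input
size = 224
for _ in range(5):
    size //= 2
print(size)  # 7 -> with conv6's 32 output channels, the flattened size is 7*7*32
```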
### (IMPLEMENTATION) Specify Loss Function and Optimizer
Use the next code cell to specify a [loss function](http://pytorch.org/docs/stable/nn.html#loss-functions) and [optimizer](http://pytorch.org/docs/stable/optim.html). Save the chosen loss function as `criterion_scratch`, and the optimizer as `optimizer_scratch` below.<jupyter_code>import torch.optim as optim
### TODO: select loss function
criterion_scratch = nn.CrossEntropyLoss()
### TODO: select optimizer
optimizer_scratch = optim.SGD(model_scratch.parameters(), lr=0.005)
Train and validate your model in the code cell below. [Save the final model parameters](http://pytorch.org/docs/master/notes/serialization.html) at filepath `'model_scratch.pt'`.<jupyter_code>from PIL import ImageFile
ImageFile.LOAD_TRUNCATED_IMAGES = True
def train(n_epochs, loaders, model, optimizer, criterion, use_cuda, save_path):
"""returns trained model"""
# initialize tracker for minimum validation loss
valid_loss_min = np.Inf
for epoch in range(1, n_epochs+1):
# initialize variables to monitor training and validation loss
train_loss = 0.0
valid_loss = 0.0
###################
# train the model #
###################
model.train()
for batch_idx, (data, target) in enumerate(loaders['train']):
# move to GPU
if use_cuda:
data, target = data.cuda(), target.cuda()
## find the loss and update the model parameters accordingly
optimizer.zero_grad()
output = model(data)
loss = criterion_scratch(output, target)
loss.backward()
optimizer.step()
## record the average training loss, using something like
## train_loss = train_loss + ((1 / (batch_idx + 1)) * (loss.data - train_loss))
train_loss = train_loss + ((1 / (batch_idx + 1)) * (loss.data - train_loss))
######################
# validate the model #
######################
model.eval()
for batch_idx, (data, target) in enumerate(loaders['valid']):
# move to GPU
if use_cuda:
data, target = data.cuda(), target.cuda()
## update the average validation loss
output = model(data)
loss = criterion(output, target)
valid_loss = valid_loss + ((1 / (batch_idx + 1)) * (loss.data - valid_loss))
#avg loss
train_loss = train_loss/len(loaders['train'].dataset)
valid_loss = valid_loss/len(loaders['valid'].dataset)
# print training/validation statistics
print('Epoch: {} \tTraining Loss: {:.6f} \tValidation Loss: {:.6f}'.format(
epoch,
train_loss,
valid_loss
))
## TODO: save the model if validation loss has decreased
if valid_loss < valid_loss_min:
print('Validation loss decreased ({:.6f} --> {:.6f}). Saving model ...'.format(
valid_loss_min,
valid_loss))
torch.save(model.state_dict(), save_path)
valid_loss_min = valid_loss
# return trained model
return model
# train the model
model_scratch = train(10, loaders, model_scratch, optimizer_scratch,
criterion_scratch, use_cuda, 'model_scratch.pt')
# load the model that got the best validation accuracy
model_scratch.load_state_dict(torch.load('model_scratch.pt'))<jupyter_output>Epoch: 1 Training Loss: 0.000451 Validation Loss: 0.005044
Validation loss decreased (inf --> 0.005044). Saving model ...
Epoch: 2 Training Loss: 0.000443 Validation Loss: 0.004891
Validation loss decreased (0.005044 --> 0.004891). Saving model ...
Epoch: 3 Training Loss: 0.000434 Validation Loss: 0.005070
Epoch: 4 Training Loss: 0.000425 Validation Loss: 0.005076
Epoch: 5 Training Loss: 0.000417 Validation Loss: 0.004987
Epoch: 6 Training Loss: 0.000410 Validation Loss: 0.005082
Epoch: 7 Training Loss: 0.000398 Validation Loss: 0.005135
Epoch: 8 Training Loss: 0.000396 Validation Loss: 0.005146
Epoch: 9 Training Loss: 0.000385 Validation Loss: 0.005313
Epoch: 10 Training Loss: 0.000379 Validation Loss: 0.005236
<jupyter_text>Epoch: 1 Training Loss: 0.000732 Validation Loss: 0.005856
Validation loss decreased (inf --> 0.005856). Saving model ...
Epoch: 2 Training Loss: 0.000732 Validation Loss: 0.005855
Validation loss decreased (0.005856 --> 0.005855). Saving model ...
Epoch: 3 Training Loss: 0.000732 Validation Loss: 0.005853
Validation loss decreased (0.005855 --> 0.005853). Saving model ...
Epoch: 4 Training Loss: 0.000732 Validation Loss: 0.005852
Validation loss decreased (0.005853 --> 0.005852). Saving model ...
Epoch: 5 Training Loss: 0.000731 Validation Loss: 0.005851
Validation loss decreased (0.005852 --> 0.005851). Saving model ...
Epoch: 6 Training Loss: 0.000731 Validation Loss: 0.005850
Validation loss decreased (0.005851 --> 0.005850). Saving model ...
Epoch: 7 Training Loss: 0.000731 Validation Loss: 0.005848
Validation loss decreased (0.005850 --> 0.005848). Saving model ...
Epoch: 8 Training Loss: 0.000731 Validation Loss: 0.005847
Validation loss decreased (0.005848 --> 0.005847). Saving model ...
Epoch: 9 Training Loss: 0.000731 Validation Loss: 0.005846
Validation loss decreased (0.005847 --> 0.005846). Saving model ...
Epoch: 10 Training Loss: 0.000730 Validation Loss: 0.005844
Validation loss decreased (0.005846 --> 0.005844). Saving model ...
Epoch: 11 Training Loss: 0.000730 Validation Loss: 0.005843
Validation loss decreased (0.005844 --> 0.005843). Saving model ...
Epoch: 12 Training Loss: 0.000730 Validation Loss: 0.005842
Validation loss decreased (0.005843 --> 0.005842). Saving model ...
Epoch: 13 Training Loss: 0.000730 Validation Loss: 0.005839
Validation loss decreased (0.005842 --> 0.005839). Saving model ...
Epoch: 14 Training Loss: 0.000730 Validation Loss: 0.005836
Validation loss decreased (0.005839 --> 0.005836). Saving model ...
Epoch: 15 Training Loss: 0.000729 Validation Loss: 0.005831
Validation loss decreased (0.005836 --> 0.005831). Saving model ...
Epoch: 16 Training Loss: 0.000729 Validation Loss: 0.005829
Validation loss decreased (0.005831 --> 0.005829). Saving model ...
Epoch: 17 Training Loss: 0.000728 Validation Loss: 0.005826
Validation loss decreased (0.005829 --> 0.005826). Saving model ...
Epoch: 18 Training Loss: 0.000728 Validation Loss: 0.005823
Validation loss decreased (0.005826 --> 0.005823). Saving model ...
Epoch: 19 Training Loss: 0.000727 Validation Loss: 0.005813
Validation loss decreased (0.005823 --> 0.005813). Saving model ...
Epoch: 20 Training Loss: 0.000726 Validation Loss: 0.005798
Validation loss decreased (0.005813 --> 0.005798). Saving model ...
Epoch: 21 Training Loss: 0.000723 Validation Loss: 0.005768
Validation loss decreased (0.005798 --> 0.005768). Saving model ...
Epoch: 22 Training Loss: 0.000717 Validation Loss: 0.005718
Validation loss decreased (0.005768 --> 0.005718). Saving model ...
Epoch: 23 Training Loss: 0.000711 Validation Loss: 0.005665
Validation loss decreased (0.005718 --> 0.005665). Saving model ...
Epoch: 24 Training Loss: 0.000704 Validation Loss: 0.005653
Validation loss decreased (0.005665 --> 0.005653). Saving model ...
Epoch: 25 Training Loss: 0.000697 Validation Loss: 0.005560
Validation loss decreased (0.005653 --> 0.005560). Saving model ...
Epoch: 26 Training Loss: 0.000692 Validation Loss: 0.005544
Validation loss decreased (0.005560 --> 0.005544). Saving model ...
Epoch: 27 Training Loss: 0.000688 Validation Loss: 0.005490
Validation loss decreased (0.005544 --> 0.005490). Saving model ...
Epoch: 28 Training Loss: 0.000684 Validation Loss: 0.005513
Epoch: 29 Training Loss: 0.000679 Validation Loss: 0.005459
Validation loss decreased (0.005490 --> 0.005459). Saving model ...
Epoch: 30 Training Loss: 0.000677 Validation Loss: 0.005410
Validation loss decreased (0.005459 --> 0.005410). Saving model ...
Epoch: 1 Training Loss: 0.000673 Validation Loss: 0.005418
Validation loss decreased (inf --> 0.005418). Saving model ...
Epoch: 2 Training Loss: 0.000669 Validation Loss: 0.005416
Validation loss decreased (0.005418 --> 0.005416). Saving model ...
Epoch: 3 Training Loss: 0.000664 Validation Loss: 0.005406
Validation loss decreased (0.005416 --> 0.005406). Saving model ...
Epoch: 4 Training Loss: 0.000662 Validation Loss: 0.005386
Validation loss decreased (0.005406 --> 0.005386). Saving model ...
Epoch: 5 Training Loss: 0.000657 Validation Loss: 0.005367
Validation loss decreased (0.005386 --> 0.005367). Saving model ...
Epoch: 6 Training Loss: 0.000652 Validation Loss: 0.005353
Validation loss decreased (0.005367 --> 0.005353). Saving model ...
Epoch: 7 Training Loss: 0.000648 Validation Loss: 0.005343
Validation loss decreased (0.005353 --> 0.005343). Saving model ...
Epoch: 8 Training Loss: 0.000642 Validation Loss: 0.005359
Epoch: 9 Training Loss: 0.000637 Validation Loss: 0.005351
Epoch: 10 Training Loss: 0.000633 Validation Loss: 0.005223
Validation loss decreased (0.005343 --> 0.005223). Saving model ...
Epoch: 11 Training Loss: 0.000628 Validation Loss: 0.005237
Epoch: 12 Training Loss: 0.000624 Validation Loss: 0.005201
Validation loss decreased (0.005223 --> 0.005201). Saving model ...
Epoch: 13 Training Loss: 0.000617 Validation Loss: 0.005173
Validation loss decreased (0.005201 --> 0.005173). Saving model ...
Epoch: 14 Training Loss: 0.000613 Validation Loss: 0.005268
Epoch: 15 Training Loss: 0.000609 Validation Loss: 0.005124
Validation loss decreased (0.005173 --> 0.005124). Saving model ...
Epoch: 16 Training Loss: 0.000605 Validation Loss: 0.005087
Validation loss decreased (0.005124 --> 0.005087). Saving model ...
Epoch: 17 Training Loss: 0.000600 Validation Loss: 0.005262
Epoch: 18 Training Loss: 0.000597 Validation Loss: 0.005040
Validation loss decreased (0.005087 --> 0.005040). Saving model ...
Epoch: 19 Training Loss: 0.000590 Validation Loss: 0.005036
Validation loss decreased (0.005040 --> 0.005036). Saving model ...
Epoch: 20 Training Loss: 0.000585 Validation Loss: 0.005098
Epoch: 21 Training Loss: 0.000582 Validation Loss: 0.005549
Epoch: 22 Training Loss: 0.000577 Validation Loss: 0.004978
Validation loss decreased (0.005036 --> 0.004978). Saving model ...
Epoch: 23 Training Loss: 0.000569 Validation Loss: 0.005138
Epoch: 24 Training Loss: 0.000565 Validation Loss: 0.005028
Epoch: 25 Training Loss: 0.000561 Validation Loss: 0.004942
Validation loss decreased (0.004978 --> 0.004942). Saving model ...
Epoch: 26 Training Loss: 0.000552 Validation Loss: 0.005147
Epoch: 27 Training Loss: 0.000549 Validation Loss: 0.004943
Epoch: 28 Training Loss: 0.000540 Validation Loss: 0.004970
Epoch: 29 Training Loss: 0.000536 Validation Loss: 0.005098
Epoch: 30 Training Loss: 0.000530 Validation Loss: 0.004933
Validation loss decreased (0.004942 --> 0.004933). Saving model ...
Epoch: 1 Training Loss: 0.000523 Validation Loss: 0.004909
Validation loss decreased (inf --> 0.004909). Saving model ...
Epoch: 2 Training Loss: 0.000515 Validation Loss: 0.004905
Validation loss decreased (0.004909 --> 0.004905). Saving model ...
Epoch: 3 Training Loss: 0.000510 Validation Loss: 0.004913
Epoch: 4 Training Loss: 0.000500 Validation Loss: 0.004988
Epoch: 5 Training Loss: 0.000496 Validation Loss: 0.004916
Epoch: 6 Training Loss: 0.000488 Validation Loss: 0.004998
Epoch: 7 Training Loss: 0.000480 Validation Loss: 0.005123
Epoch: 8 Training Loss: 0.000474 Validation Loss: 0.005051
Epoch: 9 Training Loss: 0.000467 Validation Loss: 0.004937
Epoch: 10 Training Loss: 0.000460 Validation Loss: 0.004875
Validation loss decreased (0.004905 --> 0.004875). Saving model ...
Epoch: 11 Training Loss: 0.000450 Validation Loss: 0.005027
Epoch: 12 Training Loss: 0.000441 Validation Loss: 0.004912
Epoch: 13 Training Loss: 0.000432 Validation Loss: 0.004998
Epoch: 14 Training Loss: 0.000425 Validation Loss: 0.005038
Epoch: 15 Training Loss: 0.000418 Validation Loss: 0.005011
Epoch: 16 Training Loss: 0.000411 Validation Loss: 0.005065
Epoch: 17 Training Loss: 0.000401 Validation Loss: 0.005107
Epoch: 18 Training Loss: 0.000393 Validation Loss: 0.005146
Epoch: 19 Training Loss: 0.000387 Validation Loss: 0.005210
Epoch: 20 Training Loss: 0.000377 Validation Loss: 0.005179
### (IMPLEMENTATION) Test the Model
Try out your model on the test dataset of dog images. Use the code cell below to calculate and print the test loss and accuracy. Ensure that your test accuracy is greater than 10%.<jupyter_code>def test(loaders, model, criterion, use_cuda):
# monitor test loss and accuracy
test_loss = 0.
correct = 0.
total = 0.
model.eval()
for batch_idx, (data, target) in enumerate(loaders['test']):
# move to GPU
if use_cuda:
data, target = data.cuda(), target.cuda()
# forward pass: compute predicted outputs by passing inputs to the model
output = model(data)
# calculate the loss
loss = criterion(output, target)
# update average test loss
test_loss = test_loss + ((1 / (batch_idx + 1)) * (loss.data - test_loss))
# convert output probabilities to predicted class
pred = output.data.max(1, keepdim=True)[1]
# compare predictions to true label
correct += np.sum(np.squeeze(pred.eq(target.data.view_as(pred))).cpu().numpy())
total += data.size(0)
print('Test Loss: {:.6f}\n'.format(test_loss))
print('\nTest Accuracy: %2d%% (%2d/%2d)' % (
100. * correct / total, correct, total))
# call test function
test(loaders, model_scratch, criterion_scratch, use_cuda)<jupyter_output>Test Loss: 4.187425
Test Accuracy: 10% (88/836)
<jupyter_text>---
## Step 4: Create a CNN to Classify Dog Breeds (using Transfer Learning)
You will now use transfer learning to create a CNN that can identify dog breed from images. Your CNN must attain at least 60% accuracy on the test set.
### (IMPLEMENTATION) Specify Data Loaders for the Dog Dataset
Use the code cell below to write three separate [data loaders](http://pytorch.org/docs/master/data.html#torch.utils.data.DataLoader) for the training, validation, and test datasets of dog images (located at `dogImages/train`, `dogImages/valid`, and `dogImages/test`, respectively).
If you like, **you are welcome to use the same data loaders from the previous step**, when you created a CNN from scratch.<jupyter_code>import os
from torchvision import datasets
from torch.utils.data import DataLoader
from PIL import Image
import torchvision.transforms as transforms
## TODO: Specify data loaders
data_transform_train = transforms.Compose([transforms.Resize(256),
transforms.CenterCrop(224),
transforms.RandomHorizontalFlip(),
transforms.RandomRotation(20),
transforms.ToTensor(),
transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])])
# going to use this in both valid and test
data_transform_valid = transforms.Compose([transforms.Resize(256),
transforms.CenterCrop(224),
transforms.ToTensor(),
transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])])
train_data_image = datasets.ImageFolder('/data/dog_images/train', transform=data_transform_train)
valid_data_image = datasets.ImageFolder('/data/dog_images/valid', transform=data_transform_valid)
test_data_image = datasets.ImageFolder('/data/dog_images/test', transform=data_transform_valid)
batch_size = 16
loaders_transfer = {'train':DataLoader(train_data_image, batch_size = batch_size,shuffle = True) ,
'valid':DataLoader(valid_data_image, batch_size = batch_size,shuffle = True) ,
'test':DataLoader(test_data_image,batch_size = batch_size,shuffle = True)}
<jupyter_output><empty_output><jupyter_text>### (IMPLEMENTATION) Model Architecture
Use transfer learning to create a CNN to classify dog breed. Use the code cell below, and save your initialized model as the variable `model_transfer`.<jupyter_code>import torchvision.models as models
import torch.nn as nn
import torch
## TODO: Specify model architecture
# define VGG16 model
VGG16 = models.vgg16(pretrained=True)
# freeze the pretrained feature-extractor layers
for pram in VGG16.features.parameters():
pram.requires_grad = False
#replacing top layer
VGG16.classifier[6] = nn.Linear(VGG16.classifier[6].in_features, 133, bias = True)
# check if CUDA is available
use_cuda = torch.cuda.is_available()
# use the modified VGG-16 network as the transfer-learning model
model_transfer = VGG16

if use_cuda:
model_transfer = model_transfer.cuda()<jupyter_output><empty_output><jupyter_text>__Question 5:__ Outline the steps you took to get to your final CNN architecture and your reasoning at each step. Describe why you think the architecture is suitable for the current problem.__Answer:__
- First I load the VGG-16 model with its pretrained ImageNet weights.
- Then I set `requires_grad = False` on the feature-extractor layers, because we do not want to train them (a quick check of the trainable parameter count is sketched below).
- Then I replace the final classifier layer so that its output dimension matches the 133 dog breeds in our dataset.
- Our dataset is small but similar to ImageNet, so this transfer-learning approach is well suited to the problem.
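As a quick check that the frozen convolutional layers are excluded from training (this assumes `model_transfer` is the modified VGG-16 from the cell above):

```python
# count the parameters that will actually receive gradient updates
n_trainable = sum(p.numel() for p in model_transfer.parameters() if p.requires_grad)
n_total = sum(p.numel() for p in model_transfer.parameters())
print(n_trainable, n_total)  # only the classifier layers remain trainable
```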
### (IMPLEMENTATION) Specify Loss Function and Optimizer
Use the next code cell to specify a [loss function](http://pytorch.org/docs/master/nn.html#loss-functions) and [optimizer](http://pytorch.org/docs/master/optim.html). Save the chosen loss function as `criterion_transfer`, and the optimizer as `optimizer_transfer` below.<jupyter_code>import torch.optim as optim
criterion_transfer = nn.CrossEntropyLoss()
optimizer_transfer = optim.SGD(model_transfer.parameters(), lr=0.002)<jupyter_output><empty_output><jupyter_text>### (IMPLEMENTATION) Train and Validate the Model
Train and validate your model in the code cell below. [Save the final model parameters](http://pytorch.org/docs/master/notes/serialization.html) at filepath `'model_transfer.pt'`.<jupyter_code># train the model
model_transfer = train(10, loaders_transfer, model_transfer, optimizer_transfer, criterion_transfer, use_cuda, 'model_transfer.pt')
# load the model that got the best validation accuracy (uncomment the line below)
model_transfer.load_state_dict(torch.load('model_transfer.pt'))<jupyter_output>Epoch: 1 Training Loss: 0.000414 Validation Loss: 0.002222
Validation loss decreased (inf --> 0.002222). Saving model ...
Epoch: 2 Training Loss: 0.000281 Validation Loss: 0.001494
Validation loss decreased (0.002222 --> 0.001494). Saving model ...
Epoch: 3 Training Loss: 0.000204 Validation Loss: 0.001141
Validation loss decreased (0.001494 --> 0.001141). Saving model ...
Epoch: 4 Training Loss: 0.000160 Validation Loss: 0.001047
Validation loss decreased (0.001141 --> 0.001047). Saving model ...
Epoch: 5 Training Loss: 0.000126 Validation Loss: 0.000943
Validation loss decreased (0.001047 --> 0.000943). Saving model ...
Epoch: 6 Training Loss: 0.000112 Validation Loss: 0.000787
Validation loss decreased (0.000943 --> 0.000787). Saving model ...
Epoch: 7 Training Loss: 0.000092 Validation Loss: 0.000917
Epoch: 8 Training Loss: 0.000081 Validation Loss: 0.000732
Validation loss decreased (0.000787 --> 0.000732). Saving model ...
Epoch: 9 Training Loss: 0.000070 [...]<jupyter_text>### (IMPLEMENTATION) Test the Model
Try out your model on the test dataset of dog images. Use the code cell below to calculate and print the test loss and accuracy. Ensure that your test accuracy is greater than 60%.<jupyter_code>test(loaders_transfer, model_transfer, criterion_transfer, use_cuda)<jupyter_output>Test Loss: 0.683159
Test Accuracy: 81% (683/836)
<jupyter_text>### (IMPLEMENTATION) Predict Dog Breed with the Model
Write a function that takes an image path as input and returns the dog breed (`Affenpinscher`, `Afghan hound`, etc) that is predicted by your model. <jupyter_code>### TODO: Write a function that takes a path to an image as input
### and returns the dog breed that is predicted by the model.
# list of class names by index, i.e. a name can be accessed like class_names[0]
class_names = [item[4:].replace("_", " ") for item in loaders_transfer['train'].dataset.classes]
def predict_breed_transfer(img_path):
# load the image and return the predicted breed
transform = transforms.Compose([transforms.Resize(256), transforms.CenterCrop(224), transforms.ToTensor(),transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])])
img = Image.open(img_path)
    pp_img = transform(img).to('cuda' if use_cuda else 'cpu')  # move the input to the same device as the model
#adding extra dim
final_data = pp_img.unsqueeze(0)
output = model_transfer(final_data)
maxx, index = output[0].max(0)
return index<jupyter_output><empty_output><jupyter_text>---
## Step 5: Write your Algorithm
Write an algorithm that accepts a file path to an image and first determines whether the image contains a human, dog, or neither. Then,
- if a __dog__ is detected in the image, return the predicted breed.
- if a __human__ is detected in the image, return the resembling dog breed.
- if __neither__ is detected in the image, provide output that indicates an error.
You are welcome to write your own functions for detecting humans and dogs in images, but feel free to use the `face_detector` and `human_detector` functions developed above. You are __required__ to use your CNN from Step 4 to predict dog breed.
Some sample output for our algorithm is provided below, but feel free to design your own user experience!

### (IMPLEMENTATION) Write your Algorithm<jupyter_code>### TODO: Write your algorithm.
### Feel free to use as many code cells as needed.
def run_app(img_path):
## handle cases for a human face, dog, and neither
# for human
img = Image.open(img_path)
plt.imshow(img)
plt.show()
if(dog_detector(img_path)):
print("dog is detected and predicted breed is = ",class_names[predict_breed_transfer(img_path)])
elif(face_detector(img_path)):
print('human is detected and the resembling dog breed is = ',class_names[predict_breed_transfer(img_path)])
else:
print("error nothing detected")
<jupyter_output><empty_output><jupyter_text>---
## Step 6: Test Your Algorithm
In this section, you will take your new algorithm for a spin! What kind of dog does the algorithm think that _you_ look like? If you have a dog, does it predict your dog's breed accurately? If you have a cat, does it mistakenly think that your cat is a dog?
### (IMPLEMENTATION) Test Your Algorithm on Sample Images!
Test your algorithm on at least six images on your computer. Feel free to use any images you like. Use at least two human and two dog images.
__Question 6:__ Is the output better than you expected :) ? Or worse :( ? Provide at least three possible points of improvement for your algorithm.__Answer:__ The output is much better than I expected: it assigns a plausible resembling breed to human faces,
and it even predicts the correct breed for a dog in a picture that also contains a human.
Possible points of improvement:
- providing more training data;
- using stronger data augmentation;
- increasing the model's capacity with additional layers, while taking care to avoid overfitting.<jupyter_code>## TODO: Execute your algorithm from Step 6 on
## at least 6 images on your computer.
## Feel free to use as many code cells as needed.
## suggested code, below
for file in np.hstack((human_files[:2], dog_files[:2])):
run_app(file)
for file in np.hstack((human_files[100:102], dog_files[100:102])):
run_app(file)
for file in np.hstack((human_files[200:202], dog_files[200:202])):
run_app(file)
for file in np.hstack((human_files[400:402], dog_files[400:402])):
run_app(file)<jupyter_output><empty_output>
|
permissive
|
/dog_app .ipynb
|
AyushRaghuwanshi/dog-project
| 23 |
<jupyter_start><jupyter_text><jupyter_code>!sudo apt install tesseract-ocr
!pip install pytesseract
import pytesseract
from google.colab.patches import cv2_imshow
import shutil
import cv2
import os
import random
try:
from PIL import Image
except ImportError:
import Image
from google.colab import files
uploaded = files.upload()<jupyter_output><empty_output><jupyter_text>creating rectangles around letters<jupyter_code># read the uploaded image back from disk (assumed here to be saved as 'test.png', matching the later cells)
img = cv2.imread('test.png')
h, w, c = img.shape
boxes = pytesseract.image_to_boxes(img)
for b in boxes.splitlines():
b = b.split(' ')
img = cv2.rectangle(img, (int(b[1]), h - int(b[2])), (int(b[3]), h - int(b[4])), (0, 255, 0), 2)
cv2_imshow(img)
cv2.waitKey(0)<jupyter_output><empty_output><jupyter_text>creating boxes around words<jupyter_code>from pytesseract import Output
img1 = cv2.imread('test.png')
d = pytesseract.image_to_data(img1, output_type=Output.DICT)
print(d.keys())
n_boxes = len(d['text'])
for i in range(n_boxes):
if int(d['conf'][i]) > 60:
(x, y, w, h) = (d['left'][i], d['top'][i], d['width'][i], d['height'][i])
img1 = cv2.rectangle(img1, (x, y), (x + w, y + h), (0, 255, 0), 2)
cv2_imshow(img1)
cv2.waitKey(0)
image_path_in_colab="test.png"
extractedInformation = pytesseract.image_to_string(Image.open(image_path_in_colab))
print(extractedInformation)
<jupyter_output>This is a lot of 12 point text to test the
ocr code and see if it works on all types
of file format.
The quick brown dog jumped over the
lazy fox. The quick brown dog jumped
over the lazy fox. The quick brown dog
jumped over the lazy fox. The quick
brown dog jumped over the lazy fox.
<jupyter_text>displaying the images<jupyter_code>from IPython.display import Image
cv2_imshow(img1)
Image("test.png")<jupyter_output><empty_output>
|
permissive
|
/OCR(optical_character_recognition).ipynb
|
geekysharzeel/OCR-Optical-Character_Recognition
| 4 |
<jupyter_start><jupyter_text>##### Copyright 2019 The TensorFlow Authors.<jupyter_code>#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.<jupyter_output><empty_output><jupyter_text># Load NumPy data with tf.data
View on TensorFlow.org
Run in Google Colab
View source on GitHub
Download notebook
This tutorial provides an example of loading data from NumPy arrays into a `tf.data.Dataset`.
This example loads the MNIST dataset from a `.npz` file. However, the source of the NumPy arrays is not important.
## Setup<jupyter_code>
import numpy as np
import tensorflow as tf<jupyter_output><empty_output><jupyter_text>### 从 `.npz` 文件中加载<jupyter_code>DATA_URL = 'https://storage.googleapis.com/tensorflow/tf-keras-datasets/mnist.npz'
path = tf.keras.utils.get_file('mnist.npz', DATA_URL)
with np.load(path) as data:
train_examples = data['x_train']
train_labels = data['y_train']
test_examples = data['x_test']
test_labels = data['y_test']<jupyter_output><empty_output><jupyter_text>## 使用 `tf.data.Dataset` 加载 NumPy 数组假设您有一个示例数组和相应的标签数组,请将两个数组作为元组传递给 `tf.data.Dataset.from_tensor_slices` 以创建 `tf.data.Dataset` 。<jupyter_code>train_dataset = tf.data.Dataset.from_tensor_slices((train_examples, train_labels))
test_dataset = tf.data.Dataset.from_tensor_slices((test_examples, test_labels))<jupyter_output><empty_output><jupyter_text>## 使用该数据集### 打乱和批次化数据集<jupyter_code>BATCH_SIZE = 64
SHUFFLE_BUFFER_SIZE = 100
train_dataset = train_dataset.shuffle(SHUFFLE_BUFFER_SIZE).batch(BATCH_SIZE)
test_dataset = test_dataset.batch(BATCH_SIZE)<jupyter_output><empty_output><jupyter_text>### 建立和训练模型<jupyter_code>model = tf.keras.Sequential([
tf.keras.layers.Flatten(input_shape=(28, 28)),
tf.keras.layers.Dense(128, activation='relu'),
tf.keras.layers.Dense(10)
])
model.compile(optimizer=tf.keras.optimizers.RMSprop(),
loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
metrics=['sparse_categorical_accuracy'])
model.fit(train_dataset, epochs=10)
model.evaluate(test_dataset)<jupyter_output><empty_output>
|
permissive
|
/site/zh-cn/tutorials/load_data/numpy.ipynb
|
tensorflow/docs-l10n
| 6 |
<jupyter_start><jupyter_text># Exercise 1: Subsets
Let's practice pulling subsets out of a data frame. We subset a lot. Our goal is to build some muscle memory, so that every time we need to subset the data, we do not need to go look up how to do it.
To this end, first try the exercises below without consulting your notes or the internet. Sort out where you need to improve and keep practicing!
1. Create a DataFrame from the following dict. <jupyter_code>import pandas as pd
import matplotlib.pyplot as plt
data_dict = {'soda':['coke', 'diet coke', 'sprite', 'pepsi', 'mug', 'mt. dew'],
'cals':[140, 1, 90, 150, 130, 170],
'sodium':[45, 40, 65, 30, 65, 60],
'corp': ['coca cola', 'coca cola', 'coca cola', 'pepsico', 'pepsico', 'pepsico']}
soda = pd.DataFrame(data_dict)
soda<jupyter_output><empty_output><jupyter_text>2. Print a DataFrame containing only sodas with more than 10 calories.<jupyter_code>soda[soda['cals']>10]<jupyter_output><empty_output><jupyter_text>3. Print a DataFrame containing only sodas with more than 10 calories and less than 100 calories.<jupyter_code>soda[ (soda['cals']>10) & (soda['cals']< 100)]<jupyter_output><empty_output><jupyter_text>4. Print a DataFrame containing only data for coke, pepsi, and mug. Use the `isin()` method.<jupyter_code>to_get = ['coke', 'pepsi', 'mug']
soda[ soda['soda'].isin(to_get) ]<jupyter_output><empty_output><jupyter_text>5. Set the index of the DataFrame to 'soda'.<jupyter_code>soda.set_index('soda', inplace=True)
<jupyter_output><empty_output><jupyter_text>6. Use `.loc[]` to print a DataFrame containing only coke, pepsi, and mug.<jupyter_code>soda.loc[['coke','pepsi', 'mug']]<jupyter_output><empty_output><jupyter_text>7. Print out the average sodium for pepsico products that have more than 135 calories. <jupyter_code>soda[ (soda['corp']=='pepsico') & (soda['cals']>135)]['sodium'].mean()<jupyter_output><empty_output><jupyter_text>8. Print out the number of pepsico products with sodium above 60 mg.<jupyter_code>soda[ (soda['corp']=='pepsico') & (soda['sodium']>60) ].shape[0]<jupyter_output><empty_output><jupyter_text>9. Print out the calories in a coke. <jupyter_code>soda.loc['coke', 'cals']<jupyter_output><empty_output><jupyter_text>10. Reset the index on your DataFrame and create a MultiIndex of corp and soda. Sort your index!<jupyter_code>soda = soda.reset_index()
soda.set_index(['corp','soda'], inplace=True)
soda.sort_index(axis=0, inplace=True)<jupyter_output><empty_output><jupyter_text>11. Print the mean calories and sodium level for coca cola products.<jupyter_code>soda.loc['coca cola'].mean()<jupyter_output><empty_output><jupyter_text># Exercise 1: Wages, Education, and Gender
## (Continued from Exam 1, Exercise 3)
The goal of this exercise is to make a bar chart of hourly wages by gender and level of education. Some useful example code for a similar chart can be found at https://matplotlib.org/gallery/statistics/barchart_demo.html.
To create the bar chart, follow the instructions below:
1. As on the exam, import the file "CPS_March_2016.csv" into a Pandas dataframe. Be careful with missing values. Drop workers who didn't work full time last year or whose hourly wage was less than \$5 or more than \$200.<jupyter_code>import pandas as pd
import matplotlib.pyplot as plt
# Import data:
cps = pd.read_csv("CPS_March_2016.csv",na_values = ".")
# Keep individuals who worked full time last year and with wages between $5 and $200.
cps = cps[(cps['fulltimely'] == 1) & (cps['hrwage'] >= 5) & (cps['hrwage'] <= 200)]<jupyter_output><empty_output><jupyter_text>2. Obtain the average wage by level of education for females and males using the `.groupby()` and `.mean()` methods. You should obtain two Pandas series: one containing the average wage by education level for females, and the other for males. Call these `wages_females` and `wages_males`.<jupyter_code># Obtain conditional means by gender and education level:
wages_females = cps[cps['female'] == 1].groupby(['educ'])['hrwage'].mean()
wages_males = cps[cps['female'] == 0].groupby(['educ'])['hrwage'].mean()
wages_females<jupyter_output><empty_output><jupyter_text>3. Print out `wages_females` and `wages_males`. Note that the `.groupby()` method has set the level of education as the index of each series and that the series is sorted alphabetically. Reorder each series so that the data is sorted by level of attainment: 'Less than HS' first, 'HS diploma/GED' second, and so on.<jupyter_code># Reorder the series of wages so that they are sorted by level of education prior to making the bar chart:
# Let's look at two ways
# Using loc[]
wages_females = wages_females.loc[['Less than HS','HS diploma/GED','Some college','College degree','Graduate degree']]
# Using reindex()
wages_males = wages_males.reindex(['Less than HS','HS diploma/GED','Some college','College degree','Graduate degree'])
<jupyter_output><empty_output><jupyter_text>4. Now we have everything we need to make the bar chart. Format the plot as follows:
1. Set the figure size to (11,8.5).
2. Color the female bars magenta and the male bars blue. Set the alpha composite to 0.75. Create a legend describing the color of each bar. Make sure the spacing between bars is appropriate, with some whitespace between each level of education.
3. Label the horizontal axis tick marks with the appropriate level of education.
4. Label the vertical axis "Dollars per hour", and title the figure "Average hourly wage by gender and level of education".
5. Remove the borders on the top and right sides of the figure.
6. In some whitespace on the figure, insert text containing the overall average wage in dollars and cents for males and females (this should be the overall average by gender, not conditional on education).
4. **Not required, but fun and challenging:** Print the appropriate wage level in dollars and cents on the top of each bar.<jupyter_code>fig, ax = plt.subplots(figsize=(11,8.5))
# This will be the x axis points where each set of bars is plotted:
positions = list(range(len(wages_females)))
# Set width of each bar:
width = 0.3
# Set gap between bars
gap = 0.05
# First, plot the female data. The equal sign is assigning the bar objects to bars_females to use
# for plotting the data labels.
# I shift the female bars to the right (and the males to the left)
positions_f = [p + 0.5*width + 0.5*gap for p in positions]
bars_females = ax.bar(positions_f, wages_females, width, color='magenta', alpha=0.75)
# Now plot the males.
# I shift the female bars to the right (and the males to the left)
positions_m = [p - 0.5*width - 0.5*gap for p in positions]
bars_males = ax.bar(positions_m, wages_males, width, color='blue', alpha=0.75)
plt.xticks(positions, wages_females.index)
plt.legend(['Females', 'Males'], loc='upper left')
plt.ylabel("Dollars per hour")
plt.title("Average hourly wage by gender and level of education")
# Include the overall average wage by gender in whitespace of the plot:
avg_wage_females = cps[cps['female'] == 1]['hrwage'].mean()
avg_wage_males = cps[cps['female'] == 0]['hrwage'].mean()
plt.text(2,40,"Average wage for females: ${0:.2f}".format(avg_wage_females),ha='right')
plt.text(2,38,"Average wage for males: ${0:.2f}".format(avg_wage_males),ha='right')
# Remove borders:
ax.spines['right'].set_visible(False)
ax.spines['top'].set_visible(False)
# Challenge!
# Print the average wage on top of each bar:
label_gap = 0.5 # space between top of bar and label
for rectangle in bars_females + bars_males:
height = rectangle.get_height()
plt.text(rectangle.get_x() + rectangle.get_width()/2.0, height+label_gap,"${0:.2f}".format(round(height,2)), ha='center', va='bottom')
plt.show()<jupyter_output><empty_output><jupyter_text># Exercise 2: Cleaning and merging panel data from multiple sources
Economists nearly always use data from multiple sources to conduct their analysis. It is almost never the case that all the data required for a project is available from one source. This is often an artifact of how economic data are collected in the United States. For example, the Bureau of Labor Statistics (BLS) is responsible for measuring the unemployment rate, while the Bureau of Economic Analysis (BEA) is responsible for measuring gross domestic product. (These bureaus are themselves parts of different departments of the executive branch -- the Department of Labor in the case of the BLS and the Department of Commerce in the case of the BEA). As a result, data are often published in different formats that require some extra work to merge.
In this exercise, your task is to merge state-level unemployment rate data from the BLS with state-level GDP data from the BEA. You will then make a scatterplot showing Okun's Law for Wisconsin and Michigan.
**Note:** An easier way to do this is to use the FRED API to retreive each series directly. However, the purpose of this exercise is to practice cleaning and merging data, so follow the instructions below.
### Part (a): Importing and cleaning the GDP data
1. Read in the file "state_gdp.csv" as a Pandas data frame called `gdp`. This file (downloaded from https://apps.bea.gov/regional/Downloadzip.cfm) contains annualized real GDP (in millions of chained 2009 dollars) for each quarter from 2005Q1 to 2018Q1 for each state and industry. Note that the data is in "wide" format.
2. Note that the data contains a breakdown of each industry's contribution to GDP in each state. For this exercise, we are only interested in *total* real GDP for each state. Thus, drop all rows for which the column "Description" is not equal to "All industry total"
<jupyter_code>import pandas as pd
import datetime as dt
import matplotlib.pyplot as plt
gdp = pd.read_csv("state_gdp.csv")
# Keep state totals:
gdp = gdp[gdp['Description'] == "All industry total"]
<jupyter_output><empty_output><jupyter_text>3. Note that the data also contains real GDP for the United States as a whole as well as for several subregions. Since we are only interested in states, these need to be dropped as well. To do this, drop any row for which the column 'GeoName' contains "United States", "New England", "Mideast", "Great Lakes", "Plains", "Southeast", "Southwest", "Rocky Mountain", or "Far West".
<jupyter_code># List of regions we don't need:
place_list = ["United States", "New England", "Mideast", "Great Lakes", "Plains",
"Southeast", "Southwest", "Rocky Mountain", "Far West"]
# Drop regions we don't need:
gdp = gdp[~gdp['GeoName'].isin(place_list)]
<jupyter_output><empty_output><jupyter_text>4. Drop the columns 'GeoFIPS', 'Region', 'ComponentId', 'ComponentName', 'IndustryId', 'IndustryClassification', and 'Description'. Then rename the column 'GeoFIPS' to 'state'.
<jupyter_code># Drop unnecessary columns:
gdp.drop(['GeoFIPS', 'Region', 'ComponentId', 'ComponentName', 'IndustryId', 'IndustryClassification', 'Description'],
axis=1, inplace = True)
# Rename columns and set the index:
gdp.rename(columns={'GeoName':'state'}, inplace = True)
gdp.dtypes<jupyter_output><empty_output><jupyter_text>5. Use the `.melt()` method to convert the data from wide to long format. Then rename the columns 'variable' to 'date' and 'value' to 'real_gdp'. Also, convert 'real_gdp' to a floating point number (the columns containing the GDP data were originally imported as strings since some values of GDP for some industries/states are censored or missing).<jupyter_code># Convert data from wide to long using the .melt() method:
gdp = gdp.melt(id_vars=['state'])
# Rename columns, convert real_gdp to float
gdp.rename(columns={'variable':'date', 'value':'real_gdp'}, inplace = True)
gdp['real_gdp'] = gdp['real_gdp'].astype(float)
gdp<jupyter_output><empty_output><jupyter_text>6. Finally, we should convert the date to a datetime object. This will require a few steps:
1. Unfortunately, the `datetime` package doesn't like the format of our dates: "2005:Q1" etc. Use the `.str.replace()` method to replace the colon ":" with an empty string "".
2. Now convert 'date' to a datetime object. Then use `dt.to_period("Q")` to change the formating from year-month-day to year-quarter.<jupyter_code># Replace the colon with an empty string:
gdp['date'] = gdp['date'].str.replace(':','')
# Convert date to a datetime object, then convert from year-month-day to year-quarter:
gdp['date'] = pd.to_datetime(gdp['date'])
gdp['date'] = gdp['date'].dt.to_period("Q")
type(gdp['date'][0])
gdp.head()
<jupyter_output><empty_output><jupyter_text>### Part (b): Importing and cleaning the unemployment rate data
1. Import the file "state_unemp.xlsx" as a Pandas data frame called `unemp`. This file (downloaded from https://www.bls.gov/lau/rdscnp16.htm) contains measures of the population, labor force, employment, unemployment, and unemployment rate by state (plus New York City and Los Angeles County) in each month from January 1976 to September 2018. Notice how different the formmating is compared to the GDP data! When importing, do the following:
1. The column names are not friendly for importing, so keep only columns 1, 2, 3, and 10.
2. Set the remaining column names to 'state', 'year', 'month', and 'unemp_rate'.
3. Use `skiprows=8` to avoid importing the messy headers.
<jupyter_code>unemp = pd.read_excel("state_unemp.xlsx", usecols=[1,2,3,10],names=['state','year','month','unemp_rate'],skiprows=8)
<jupyter_output><empty_output><jupyter_text>2. Drop the observations for "Los Angeles County" and "New York City".<jupyter_code># List of cities we don't need:
city_list = ["Los Angeles County","New York City"]
# Drop the city observations
unemp = unemp[~unemp['state'].isin(city_list)]
<jupyter_output><empty_output><jupyter_text>3. Create a new column called 'day' that is equal to 1. Then create a datetime object 'date' from the columns year, month, and day. Then drop the year, month, and day columns and set 'date' as the index.<jupyter_code># Create a day variabe:
unemp['day'] = 1
# Create 'date' as a datetime object:
unemp['date'] = pd.to_datetime(unemp[['year','month','day']])
# Drop unnecessary variables:
unemp.drop(['year','month','day'], axis=1, inplace=True)
# Set date as the index:
unemp.set_index(['date'], inplace=True)<jupyter_output><empty_output><jupyter_text>4. Use the `.groupby()`, `.resample()`, and `.mean()` methods to convert the data to quarterly frequency in each state.
<jupyter_code># Resample to quarterly frequency by state:
unemp = unemp.groupby('state').resample('q').mean()<jupyter_output><empty_output><jupyter_text>5. Reset the index and then use `dt.to_period("Q")` to change the formating from year-month-day to year-quarter.<jupyter_code># Reset index and convert datetime from year-month-day to year-quarter:
unemp.reset_index(inplace=True)
unemp['date']= unemp['date'].dt.to_period("Q")<jupyter_output><empty_output><jupyter_text>### Part (c): Merging and plotting the data
1. Create a new data frame by merging the gdp and unemployment rate data on 'state' and 'date'. Set 'state' and 'date' as the index on the new data frame and sort the index.<jupyter_code>data = pd.merge(left=gdp, right=unemp,on=['state','date'],how='inner')
print(data.head(60))
data.set_index(['state','date'],inplace=True)
data.sort_index(inplace=True)
<jupyter_output> state date real_gdp unemp_rate
0 Alabama 2005Q1 169741.0 4.933333
1 Alaska 2005Q1 40299.0 7.000000
2 Arizona 2005Q1 243828.0 4.800000
3 Arkansas 2005Q1 97934.0 5.366667
4 California 2005Q1 1881001.0 5.700000
5 Colorado 2005Q1 238315.0 5.266667
6 Connecticut 2005Q1 230302.0 4.933333
7 Delaware 2005Q1 56408.0 4.000000
8 District of Columbia 2005Q1 92533.0 7.333333
9 Florida 2005Q1 759054.0 4.166667
10 Georgia 2005Q1 421722.0 5.366667
11 Hawaii 2005Q1 64189.0 3.033333
12 Idaho 2005Q1 50451.0 4.233333
13 Illinois 2005Q1 644125.0 5.966667
14 Indiana 2005Q1 266609.0 5.666667
15 Iowa 2005Q1 134575.0 4.433333
16 Kansas 2005Q1 114130.0 5.266667
17 [...]<jupyter_text>2. Calculate the growth rate of real GDP and the change in the unemployment rate for each state and quarter; create new columns for each.<jupyter_code># For some reason the .diff() method (correctly) generates a NaN for the first value
# for each state, but the .pct_change() method doesn't. Weird -- doesn't matter for this exercise but good to be aware of.
data['gdp_growth_rate'] = data.groupby('state')['real_gdp'].pct_change()*100
data['chg_unemp_rate'] = data.groupby('state')['unemp_rate'].diff()
data.head(60)<jupyter_output><empty_output><jupyter_text>3. Change the index to 'state'.<jupyter_code># Note that if you don't reset the index first, then the date will be deleted!!!
data.reset_index(inplace=True)
data.set_index('state', inplace=True)
<jupyter_output><empty_output><jupyter_text>4. Now we can make the desired scatterplot. Plot the GDP growth rate against the change in the unemplyoment rate for Wisconsin and Michigan on the same figure, with GDP growth on the horizontal axis and the change in the unemployment rate on the vertical axis. Make the markers for Wisconsin red x's and the markers for Michigan blue squares that aren't filled in (i.e., only the outline should be shown in blue -- the interior should be blank). Make other aspects of the plot look nice, including nice labels and a legend.<jupyter_code># Create the scatterplot:
fig, ax = plt.subplots(figsize = (11,8.5))
ax.scatter(data.loc['Wisconsin','gdp_growth_rate'],data.loc['Wisconsin','chg_unemp_rate'],
color='red',marker='x',label='Wisconsin')
ax.scatter(data.loc['Michigan','gdp_growth_rate'],data.loc['Michigan','chg_unemp_rate'],
facecolors='none',edgecolors='blue',marker='s',label='Michigan')
ax.spines['top'].set_visible(False)
ax.spines['right'].set_visible(False)
plt.legend(loc='upper right')
plt.title("Okun's Law in Wisconsin and Michigan")
plt.xlabel("GDP growth rate")
plt.ylabel("Change in the unemployment rate")
plt.show()<jupyter_output><empty_output>
|
no_license
|
/9_coding_practice/Coding_Practice_3/Coding_practice_3_finished.ipynb
|
apdang5/Spencer-s-Python-Curriculum
| 29 |
<jupyter_start><jupyter_text># Programming Exercise 6:
# Support Vector Machines
## Introduction
In this exercise, you will be using support vector machines (SVMs) to build a spam classifier. Before starting on the programming exercise, we strongly recommend watching the video lectures and completing the review questions for the associated topics.
All the information you need for solving this assignment is in this notebook, and all the code you will be implementing will take place within this notebook. The assignment can be promptly submitted to the coursera grader directly from this notebook (code and instructions are included below).
Before we begin with the exercises, we need to import all libraries required for this programming exercise. Throughout the course, we will be using [`numpy`](http://www.numpy.org/) for all arrays and matrix operations, [`matplotlib`](https://matplotlib.org/) for plotting, and [`scipy`](https://docs.scipy.org/doc/scipy/reference/) for scientific and numerical computation functions and tools. You can find instructions on how to install required libraries in the README file in the [github repository](https://github.com/dibgerge/ml-coursera-python-assignments).<jupyter_code># used for manipulating directory paths
import os
# Scientific and vector computation for python
import numpy as np
# Import regular expressions to process emails
import re
# Plotting library
from matplotlib import pyplot
# Optimization module in scipy
from scipy import optimize
# will be used to load MATLAB mat datafile format
from scipy.io import loadmat
# library written for this exercise providing additional functions for assignment submission, and others
import utils
# define the submission/grader object for this exercise
grader = utils.Grader()
# tells matplotlib to embed plots within the notebook
%matplotlib inline<jupyter_output><empty_output><jupyter_text>## 1 Support Vector Machines
In the first half of this exercise, you will be using support vector machines (SVMs) with various example 2D datasets. Experimenting with these datasets will help you gain an intuition of how SVMs work and how to use a Gaussian kernel with SVMs. In the next half of the exercise, you will be using support
vector machines to build a spam classifier.### 1.1 Example Dataset 1
We will begin with a 2D example dataset which can be separated by a linear boundary. The following cell plots the training data, which should look like this:

In this dataset, the positions of the positive examples (indicated with `x`) and the negative examples (indicated with `o`) suggest a natural separation indicated by the gap. However, notice that there is an outlier positive example `x` on the far left at about (0.1, 4.1). As part of this exercise, you will also see how this outlier affects the SVM decision boundary.<jupyter_code># Load from ex6data1
# You will have X, y as keys in the dict data
data = loadmat(os.path.join('Data', 'ex6data1.mat'))
X, y = data['X'], data['y'][:, 0]
# Plot training data
utils.plotData(X, y)<jupyter_output><empty_output><jupyter_text>In this part of the exercise, you will try using different values of the $C$ parameter with SVMs. Informally, the $C$ parameter is a positive value that controls the penalty for misclassified training examples. A large $C$ parameter tells the SVM to try to classify all the examples correctly. $C$ plays a role similar to $1/\lambda$, where $\lambda$ is the regularization parameter that we were using previously for logistic regression.
The following cell will run the SVM training (with $C=1$) using SVM software that we have included with the starter code (function `svmTrain` within the `utils` module of this exercise). When $C=1$, you should find that the SVM puts the decision boundary in the gap between the two datasets and *misclassifies* the data point on the far left, as shown in the figure (left) below.
Figure: SVM decision boundary for example dataset 1, with C=1 (left) and C=100 (right).
In order to minimize the dependency of this assignment on external libraries, we have included this implementation of an SVM learning algorithm in utils.svmTrain. However, this particular implementation is not very efficient (it was originally chosen to maximize compatibility between Octave/MATLAB for the first version of this assignment set). If you are training an SVM on a real problem, especially if you need to scale to a larger dataset, we strongly recommend instead using a highly optimized SVM toolbox such as [LIBSVM](https://www.csie.ntu.edu.tw/~cjlin/libsvm/). The python machine learning library [scikit-learn](http://scikit-learn.org/stable/index.html) provides wrappers for the LIBSVM library.
**Implementation Note:** Most SVM software packages (including the function `utils.svmTrain`) automatically add the extra feature $x_0$ = 1 for you and automatically take care of learning the intercept term $\theta_0$. So when passing your training data to the SVM software, there is no need to add this extra feature $x_0 = 1$ yourself. In particular, in python your code should be working with training examples $x \in \mathcal{R}^n$ (rather than $x \in \mathcal{R}^{n+1}$); for example, in the first example dataset $x \in \mathcal{R}^2$.
Your task is to try different values of $C$ on this dataset. Specifically, you should change the value of $C$ in the next cell to $C = 100$ and run the SVM training again. When $C = 100$, you should find that the SVM now classifies every single example correctly, but has a decision boundary that does not
appear to be a natural fit for the data.<jupyter_code># You should try to change the C value below and see how the decision
# boundary varies (e.g., try C = 1000)
C = 1
model = utils.svmTrain(X, y, C, utils.linearKernel, 1e-3, 20)
utils.visualizeBoundaryLinear(X, y, model)<jupyter_output><empty_output><jupyter_text>
### 1.2 SVM with Gaussian Kernels
In this part of the exercise, you will be using SVMs to do non-linear classification. In particular, you will be using SVMs with Gaussian kernels on datasets that are not linearly separable.
#### 1.2.1 Gaussian Kernel
To find non-linear decision boundaries with the SVM, we need to first implement a Gaussian kernel. You can think of the Gaussian kernel as a similarity function that measures the “distance” between a pair of examples,
($x^{(i)}$, $x^{(j)}$). The Gaussian kernel is also parameterized by a bandwidth parameter, $\sigma$, which determines how fast the similarity metric decreases (to 0) as the examples are further apart.
You should now complete the code in `gaussianKernel` to compute the Gaussian kernel between two examples, ($x^{(i)}$, $x^{(j)}$). The Gaussian kernel function is defined as:
$$ K_{\text{gaussian}} \left( x^{(i)}, x^{(j)} \right) = \exp \left( - \frac{\left\lvert\left\lvert x^{(i)} - x^{(j)}\right\lvert\right\lvert^2}{2\sigma^2} \right) = \exp \left( -\frac{\sum_{k=1}^n \left( x_k^{(i)} - x_k^{(j)}\right)^2}{2\sigma^2} \right)$$
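As a quick sanity check of this formula with the test values used a couple of cells below ($x^{(1)} = [1, 2, 1]$, $x^{(2)} = [0, 4, -1]$, $\sigma = 2$): the squared distance is $1 + 4 + 4 = 9$, so the kernel value is $\exp(-9/8) \approx 0.3247$, which matches the expected output of 0.324652.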
<jupyter_code>def gaussianKernel(x1, x2, sigma):
"""
Computes the radial basis function
Returns a radial basis function kernel between x1 and x2.
Parameters
----------
x1 : numpy ndarray
A vector of size (n, ), representing the first datapoint.
x2 : numpy ndarray
A vector of size (n, ), representing the second datapoint.
sigma : float
The bandwidth parameter for the Gaussian kernel.
Returns
-------
sim : float
The computed RBF between the two provided data points.
Instructions
------------
Fill in this function to return the similarity between `x1` and `x2`
computed using a Gaussian kernel with bandwidth `sigma`.
"""
sim = 0
    # ====================== YOUR CODE HERE ======================
    # Squared Euclidean distance between x1 and x2, plugged into the
    # Gaussian kernel formula given above.
    sim = np.exp(-np.sum((x1 - x2) ** 2) / (2 * sigma ** 2))
    # =============================================================
return sim<jupyter_output><empty_output><jupyter_text>Once you have completed the function `gaussianKernel` the following cell will test your kernel function on two provided examples and you should expect to see a value of 0.324652.<jupyter_code>x1 = np.array([1, 2, 1])
x2 = np.array([0, 4, -1])
sigma = 2
sim = gaussianKernel(x1, x2, sigma)
print('Gaussian Kernel between x1 = [1, 2, 1], x2 = [0, 4, -1], sigma = %0.2f:'
'\n\t%f\n(for sigma = 2, this value should be about 0.324652)\n' % (sigma, sim))<jupyter_output><empty_output><jupyter_text>*You should now submit your solutions.*<jupyter_code>grader[1] = gaussianKernel
grader.grade()<jupyter_output><empty_output><jupyter_text>### 1.2.2 Example Dataset 2
The next part in this notebook will load and plot dataset 2, as shown in the figure below.
<jupyter_code># Load from ex6data2
# You will have X, y as keys in the dict data
data = loadmat(os.path.join('Data', 'ex6data2.mat'))
X, y = data['X'], data['y'][:, 0]
# Plot training data
utils.plotData(X, y)<jupyter_output><empty_output><jupyter_text>From the figure, you can observe that there is no linear decision boundary that separates the positive and negative examples for this dataset. However, by using the Gaussian kernel with the SVM, you will be able to learn a non-linear decision boundary that can perform reasonably well for the dataset. If you have correctly implemented the Gaussian kernel function, the following cell will proceed to train the SVM with the Gaussian kernel on this dataset.
You should get a decision boundary as shown in the figure below, as computed by the SVM with a Gaussian kernel. The decision boundary is able to separate most of the positive and negative examples correctly and follows the contours of the dataset well.
<jupyter_code># SVM Parameters
C = 1
sigma = 0.1
model= utils.svmTrain(X, y, C, gaussianKernel, args=(sigma,))
utils.visualizeBoundary(X, y, model)<jupyter_output><empty_output><jupyter_text>
#### 1.2.3 Example Dataset 3
In this part of the exercise, you will gain more practical skills on how to use a SVM with a Gaussian kernel. The next cell will load and display a third dataset, which should look like the figure below.

You will be using the SVM with the Gaussian kernel with this dataset. In the provided dataset, `ex6data3.mat`, you are given the variables `X`, `y`, `Xval`, `yval`. <jupyter_code># Load from ex6data3
# You will have X, y, Xval, yval as keys in the dict data
data = loadmat(os.path.join('Data', 'ex6data3.mat'))
X, y, Xval, yval = data['X'], data['y'][:, 0], data['Xval'], data['yval'][:, 0]
# Plot training data
utils.plotData(X, y)<jupyter_output><empty_output><jupyter_text>Your task is to use the cross validation set `Xval`, `yval` to determine the best $C$ and $\sigma$ parameter to use. You should write any additional code necessary to help you search over the parameters $C$ and $\sigma$. For both $C$ and $\sigma$, we suggest trying values in multiplicative steps (e.g., 0.01, 0.03, 0.1, 0.3, 1, 3, 10, 30).
Note that you should try all possible pairs of values for $C$ and $\sigma$ (e.g., $C = 0.3$ and $\sigma = 0.1$). For example, if you try each of the 8 values listed above for $C$ and for $\sigma^2$, you would end up training and evaluating (on the cross validation set) a total of $8^2 = 64$ different models. After you have determined the best $C$ and $\sigma$ parameters to use, you should modify the code in `dataset3Params`, filling in the best parameters you found. For our best parameters, the SVM returned a decision boundary shown in the figure below.

**Implementation Tip:** When implementing cross validation to select the best $C$ and $\sigma$ parameter to use, you need to evaluate the error on the cross validation set. Recall that for classification, the error is defined as the fraction of the cross validation examples that were classified incorrectly. In `numpy`, you can compute this error using `np.mean(predictions != yval)`, where `predictions` is a vector containing all the predictions from the SVM, and `yval` are the true labels from the cross validation set. You can use the `utils.svmPredict` function to generate the predictions for the cross validation set.
<jupyter_code>def dataset3Params(X, y, Xval, yval):
"""
Returns your choice of C and sigma for Part 3 of the exercise
where you select the optimal (C, sigma) learning parameters to use for SVM
with RBF kernel.
Parameters
----------
X : array_like
(m x n) matrix of training data where m is number of training examples, and
n is the number of features.
y : array_like
        (m, ) vector of labels for the training data.
Xval : array_like
(mv x n) matrix of validation data where mv is the number of validation examples
and n is the number of features
yval : array_like
(mv, ) vector of labels for the validation data.
Returns
-------
C, sigma : float, float
The best performing values for the regularization parameter C and
RBF parameter sigma.
Instructions
------------
Fill in this function to return the optimal C and sigma learning
parameters found using the cross validation set.
You can use `svmPredict` to predict the labels on the cross
validation set. For example,
predictions = utils.svmPredict(model, Xval)
will return the predictions on the cross validation set.
Note
----
You can compute the prediction error using
np.mean(predictions != yval)
"""
# You need to return the following variables correctly.
C = 1
sigma = 0.3
    # ====================== YOUR CODE HERE ======================
    # Grid search over the candidate values suggested in the text above,
    # keeping the (C, sigma) pair with the lowest cross-validation error.
    candidates = [0.01, 0.03, 0.1, 0.3, 1, 3, 10, 30]
    best_error = np.inf
    for C_try in candidates:
        for sigma_try in candidates:
            model = utils.svmTrain(X, y, C_try, gaussianKernel, args=(sigma_try,))
            predictions = utils.svmPredict(model, Xval)
            error = np.mean(predictions != yval)
            if error < best_error:
                best_error = error
                C, sigma = C_try, sigma_try
    # ============================================================
return C, sigma<jupyter_output><empty_output><jupyter_text>The provided code in the next cell trains the SVM classifier using the training set $(X, y)$ using parameters loaded from `dataset3Params`. Note that this might take a few minutes to execute.<jupyter_code># Try different SVM Parameters here
C, sigma = dataset3Params(X, y, Xval, yval)
# Train the SVM
# model = utils.svmTrain(X, y, C, lambda x1, x2: gaussianKernel(x1, x2, sigma))
model = utils.svmTrain(X, y, C, gaussianKernel, args=(sigma,))
utils.visualizeBoundary(X, y, model)
print(C, sigma)<jupyter_output><empty_output><jupyter_text>Once you have computed the values `C` and `sigma` in the cell above, we will submit those values for grading.
*You should now submit your solutions.*<jupyter_code>grader[2] = lambda : (C, sigma)
grader.grade()<jupyter_output><empty_output><jupyter_text>
## 2 Spam Classification
Many email services today provide spam filters that are able to classify emails into spam and non-spam email with high accuracy. In this part of the exercise, you will use SVMs to build your own spam filter.
You will be training a classifier to classify whether a given email, $x$, is spam ($y = 1$) or non-spam ($y = 0$). In particular, you need to convert each email into a feature vector $x \in \mathbb{R}^n$ . The following parts of the exercise will walk you through how such a feature vector can be constructed from an email.
The dataset included for this exercise is based on a subset of the [SpamAssassin Public Corpus](http://spamassassin.apache.org/old/publiccorpus/). For the purpose of this exercise, you will only be using the body of the email (excluding the email headers).### 2.1 Preprocessing Emails
Before starting on a machine learning task, it is usually insightful to take a look at examples from the dataset. The figure below shows a sample email that contains a URL, an email address (at the end), numbers, and dollar
amounts.
While many emails would contain similar types of entities (e.g., numbers, other URLs, or other email addresses), the specific entities (e.g., the specific URL or specific dollar amount) will be different in almost every
email. Therefore, one method often employed in processing emails is to “normalize” these values, so that all URLs are treated the same, all numbers are treated the same, etc. For example, we could replace each URL in the
email with the unique string “httpaddr” to indicate that a URL was present.
This has the effect of letting the spam classifier make a classification decision based on whether any URL was present, rather than whether a specific URL was present. This typically improves the performance of a spam classifier, since spammers often randomize the URLs, and thus the odds of seeing any particular URL again in a new piece of spam is very small.
In the function `processEmail` below, we have implemented the following email preprocessing and normalization steps:
- **Lower-casing**: The entire email is converted into lower case, so that capitalization is ignored (e.g., IndIcaTE is treated the same as Indicate).
- **Stripping HTML**: All HTML tags are removed from the emails. Many emails often come with HTML formatting; we remove all the HTML tags, so that only the content remains.
- **Normalizing URLs**: All URLs are replaced with the text “httpaddr”.
- **Normalizing Email Addresses**: All email addresses are replaced with the text “emailaddr”.
- **Normalizing Numbers**: All numbers are replaced with the text “number”.
- **Normalizing Dollars**: All dollar signs ($) are replaced with the text “dollar”.
- **Word Stemming**: Words are reduced to their stemmed form. For example, “discount”, “discounts”, “discounted” and “discounting” are all replaced with “discount”. Sometimes, the Stemmer actually strips off additional characters from the end, so “include”, “includes”, “included”, and “including” are all replaced with “includ”.
- **Removal of non-words**: Non-words and punctuation have been removed. All white spaces (tabs, newlines, spaces) have all been trimmed to a single space character.
The result of these preprocessing steps is shown in the figure below.
While preprocessing has left word fragments and non-words, this form turns out to be much easier to work with for performing feature extraction.#### 2.1.1 Vocabulary List
After preprocessing the emails, we have a list of words for each email. The next step is to choose which words we would like to use in our classifier and which we would want to leave out.
For this exercise, we have chosen only the most frequently occuring words as our set of words considered (the vocabulary list). Since words that occur rarely in the training set are only in a few emails, they might cause the
model to overfit our training set. The complete vocabulary list is in the file `vocab.txt` (inside the `Data` directory for this exercise) and also shown in the figure below.
Our vocabulary list was selected by choosing all words which occur at least 100 times in the spam corpus,
resulting in a list of 1899 words. In practice, a vocabulary list with about 10,000 to 50,000 words is often used.
Given the vocabulary list, we can now map each word in the preprocessed emails into a list of word indices that contains the index of the word in the vocabulary dictionary. The figure below shows the mapping for the sample email. Specifically, in the sample email, the word “anyone” was first normalized to “anyon” and then mapped onto the index 86 in the vocabulary list.
Your task now is to complete the code in the function `processEmail` to perform this mapping. In the code, you are given a string `word` which is a single word from the processed email. You should look up the word in the vocabulary list `vocabList`. If the word exists in the list, you should add the index of the word into the `word_indices` variable. If the word does not exist, and is therefore not in the vocabulary, you can skip the word.
**python tip**: In python, you can find the index of the first occurence of an item in `list` using the `index` attribute. In the provided code for `processEmail`, `vocabList` is a python list containing the words in the vocabulary. To find the index of a word, we can use `vocabList.index(word)` which would return a number indicating the index of the word within the list. If the word does not exist in the list, a `ValueError` exception is raised. In python, we can use the `try/except` statement to catch exceptions which we do not want to stop the program from running. You can think of the `try/except` statement to be the same as an `if/else` statement, but it asks for forgiveness rather than permission.
An example would be:
```
try:
do stuff here
except ValueError:
pass
# do nothing (forgive me) if a ValueError exception occured within the try statement
```
<jupyter_code>def processEmail(email_contents, verbose=True):
"""
Preprocesses the body of an email and returns a list of indices
of the words contained in the email.
Parameters
----------
email_contents : str
A string containing one email.
verbose : bool
If True, print the resulting email after processing.
Returns
-------
word_indices : list
A list of integers containing the index of each word in the
email which is also present in the vocabulary.
Instructions
------------
Fill in this function to add the index of word to word_indices
if it is in the vocabulary. At this point of the code, you have
a stemmed word from the email in the variable word.
You should look up word in the vocabulary list (vocabList).
If a match exists, you should add the index of the word to the word_indices
list. Concretely, if word = 'action', then you should
look up the vocabulary list to find where in vocabList
'action' appears. For example, if vocabList[18] =
'action', then, you should add 18 to the word_indices
vector (e.g., word_indices.append(18)).
Notes
-----
    - vocabList[idx] returns the word with index idx in the vocabulary list.
- vocabList.index(word) return index of word `word` in the vocabulary list.
(A ValueError exception is raised if the word does not exist.)
"""
# Load Vocabulary
vocabList = utils.getVocabList()
# Init return value
word_indices = []
# ========================== Preprocess Email ===========================
# Find the Headers ( \n\n and remove )
# Uncomment the following lines if you are working with raw emails with the
# full headers
# hdrstart = email_contents.find(chr(10) + chr(10))
# email_contents = email_contents[hdrstart:]
# Lower case
email_contents = email_contents.lower()
# Strip all HTML
# Looks for any expression that starts with < and ends with > and replace
# and does not have any < or > in the tag it with a space
email_contents =re.compile('<[^<>]+>').sub(' ', email_contents)
# Handle Numbers
# Look for one or more characters between 0-9
email_contents = re.compile('[0-9]+').sub(' number ', email_contents)
# Handle URLS
# Look for strings starting with http:// or https://
email_contents = re.compile('(http|https)://[^\s]*').sub(' httpaddr ', email_contents)
# Handle Email Addresses
# Look for strings with @ in the middle
email_contents = re.compile('[^\s]+@[^\s]+').sub(' emailaddr ', email_contents)
# Handle $ sign
email_contents = re.compile('[$]+').sub(' dollar ', email_contents)
# get rid of any punctuation
email_contents = re.split('[ @$/#.-:&*+=\[\]?!(){},''">_<;%\n\r]', email_contents)
# remove any empty word string
email_contents = [word for word in email_contents if len(word) > 0]
# Stem the email contents word by word
stemmer = utils.PorterStemmer()
processed_email = []
for word in email_contents:
# Remove any remaining non alphanumeric characters in word
word = re.compile('[^a-zA-Z0-9]').sub('', word).strip()
word = stemmer.stem(word)
processed_email.append(word)
if len(word) < 1:
continue
# Look up the word in the dictionary and add to word_indices if found
        # ====================== YOUR CODE HERE ======================
        # Add the vocabulary index of the word if it is in vocabList; skip it
        # otherwise (the try/except pattern suggested in the text above).
        try:
            word_indices.append(vocabList.index(word))
        except ValueError:
            pass
        # =============================================================
if verbose:
print('----------------')
print('Processed email:')
print('----------------')
print(' '.join(processed_email))
return word_indices<jupyter_output><empty_output><jupyter_text>Once you have implemented `processEmail`, the following cell will run your code on the email sample and you should see an output of the processed email and the indices list mapping.<jupyter_code># To use an SVM to classify emails into Spam v.s. Non-Spam, you first need
# to convert each email into a vector of features. In this part, you will
# implement the preprocessing steps for each email. You should
# complete the code in processEmail.m to produce a word indices vector
# for a given email.
# Extract Features
with open(os.path.join('Data', 'emailSample1.txt')) as fid:
file_contents = fid.read()
word_indices = processEmail(file_contents)
#Print Stats
print('-------------')
print('Word Indices:')
print('-------------')
print(word_indices)<jupyter_output><empty_output><jupyter_text>*You should now submit your solutions.*<jupyter_code>grader[3] = processEmail
grader.grade()<jupyter_output><empty_output><jupyter_text>
### 2.2 Extracting Features from Emails
You will now implement the feature extraction that converts each email into a vector in $\mathbb{R}^n$. For this exercise, you will be using n = # words in vocabulary list. Specifically, the feature $x_i \in \{0, 1\}$ for an email corresponds to whether the $i^{th}$ word in the dictionary occurs in the email. That is, $x_i = 1$ if the $i^{th}$ word is in the email and $x_i = 0$ if the $i^{th}$ word is not present in the email.
Thus, for a typical email, this feature would look like:
$$ x = \begin{bmatrix}
0 & \dots & 1 & 0 & \dots & 1 & 0 & \dots & 0
\end{bmatrix}^T \in \mathbb{R}^n
$$
You should now complete the code in the function `emailFeatures` to generate a feature vector for an email, given the `word_indices`.
<jupyter_code>def emailFeatures(word_indices):
"""
Takes in a word_indices vector and produces a feature vector from the word indices.
Parameters
----------
word_indices : list
A list of word indices from the vocabulary list.
Returns
-------
x : list
The computed feature vector.
Instructions
------------
Fill in this function to return a feature vector for the
given email (word_indices). To help make it easier to process
the emails, we have have already pre-processed each email and converted
each word in the email into an index in a fixed dictionary (of 1899 words).
The variable `word_indices` contains the list of indices of the words
which occur in one email.
Concretely, if an email has the text:
The quick brown fox jumped over the lazy dog.
Then, the word_indices vector for this text might look like:
60 100 33 44 10 53 60 58 5
where, we have mapped each word onto a number, for example:
the -- 60
quick -- 100
...
Note
----
The above numbers are just an example and are not the actual mappings.
Your task is take one such `word_indices` vector and construct
a binary feature vector that indicates whether a particular
word occurs in the email. That is, x[i] = 1 when word i
is present in the email. Concretely, if the word 'the' (say,
index 60) appears in the email, then x[60] = 1. The feature
vector should look like:
x = [ 0 0 0 0 1 0 0 0 ... 0 0 0 0 1 ... 0 0 0 1 0 ..]
"""
# Total number of words in the dictionary
n = 1899
# You need to return the following variables correctly.
x = np.zeros(n)
    # ===================== YOUR CODE HERE ======================
    # Mark every vocabulary index that appears in the email with a 1.
    x[word_indices] = 1
    # ===========================================================
return x<jupyter_output><empty_output><jupyter_text>Once you have implemented `emailFeatures`, the next cell will run your code on the email sample. You should see that the feature vector had length 1899 and 45 non-zero entries.<jupyter_code># Extract Features
with open(os.path.join('Data', 'emailSample1.txt')) as fid:
file_contents = fid.read()
word_indices = processEmail(file_contents)
features = emailFeatures(word_indices)
# Print Stats
print('\nLength of feature vector: %d' % len(features))
print('Number of non-zero entries: %d' % sum(features > 0))<jupyter_output><empty_output><jupyter_text>*You should now submit your solutions.*<jupyter_code>grader[4] = emailFeatures
grader.grade()<jupyter_output><empty_output><jupyter_text>### 2.3 Training SVM for Spam Classification
In the following section we will load a preprocessed training dataset that will be used to train a SVM classifier. The file `spamTrain.mat` (within the `Data` folder for this exercise) contains 4000 training examples of spam and non-spam email, while `spamTest.mat` contains 1000 test examples. Each
original email was processed using the `processEmail` and `emailFeatures` functions and converted into a vector $x^{(i)} \in \mathbb{R}^{1899}$.
After loading the dataset, the next cell proceed to train a linear SVM to classify between spam ($y = 1$) and non-spam ($y = 0$) emails. Once the training completes, you should see that the classifier gets a training accuracy of about 99.8% and a test accuracy of about 98.5%.<jupyter_code># Load the Spam Email dataset
# You will have X, y in your environment
data = loadmat(os.path.join('Data', 'spamTrain.mat'))
X, y= data['X'].astype(float), data['y'][:, 0]
print('Training Linear SVM (Spam Classification)')
print('This may take 1 to 2 minutes ...\n')
C = 0.1
model = utils.svmTrain(X, y, C, utils.linearKernel)
# Compute the training accuracy
p = utils.svmPredict(model, X)
print('Training Accuracy: %.2f' % (np.mean(p == y) * 100))<jupyter_output><empty_output><jupyter_text>Execute the following cell to load the test set and compute the test accuracy.<jupyter_code># Load the test dataset
# You will have Xtest, ytest in your environment
data = loadmat(os.path.join('Data', 'spamTest.mat'))
Xtest, ytest = data['Xtest'].astype(float), data['ytest'][:, 0]
print('Evaluating the trained Linear SVM on a test set ...')
p = utils.svmPredict(model, Xtest)
print('Test Accuracy: %.2f' % (np.mean(p == ytest) * 100))<jupyter_output><empty_output><jupyter_text>### 2.4 Top Predictors for Spam
To better understand how the spam classifier works, we can inspect the parameters to see which words the classifier thinks are the most predictive of spam. The next cell finds the parameters with the largest positive values in the classifier and displays the corresponding words similar to the ones shown in the figure below.
our click remov guarante visit basenumb dollar pleas price will nbsp most lo ga hour
Thus, if an email contains words such as “guarantee”, “remove”, “dollar”, and “price” (the top predictors shown in the figure), it is likely to be classified as spam.
Since the model we are training is a linear SVM, we can inspect the weights learned by the model to better understand how it is determining whether an email is spam or not. The following code finds the words with the highest weights in the classifier. Informally, the classifier 'thinks' that these words are the most likely indicators of spam.<jupyter_code># Sort the weights and obtain the vocabulary list
# NOTE some words have the same weights,
# so their order might be different than in the text above
idx = np.argsort(model['w'])
top_idx = idx[-15:][::-1]
vocabList = utils.getVocabList()
print('Top predictors of spam:')
print('%-15s %-15s' % ('word', 'weight'))
print('----' + ' '*12 + '------')
for word, w in zip(np.array(vocabList)[top_idx], model['w'][top_idx]):
print('%-15s %0.2f' % (word, w))
<jupyter_output><empty_output><jupyter_text>### 2.5 Optional (ungraded) exercise: Try your own emails
Now that you have trained a spam classifier, you can start trying it out on your own emails. In the starter code, we have included two email examples (`emailSample1.txt` and `emailSample2.txt`) and two spam examples (`spamSample1.txt` and `spamSample2.txt`). The next cell runs the spam classifier over the first spam example and classifies it using the learned SVM. You should now try the other examples we have provided and see if the classifier gets them right. You can also try your own emails by replacing the examples (plain text files) with your own emails.
*You do not need to submit any solutions for this optional (ungraded) exercise.*<jupyter_code>filename = os.path.join('Data', 'emailSample1.txt')
with open(filename) as fid:
file_contents = fid.read()
word_indices = processEmail(file_contents, verbose=False)
x = emailFeatures(word_indices)
p = utils.svmPredict(model, x)
print('\nProcessed %s\nSpam Classification: %s' % (filename, 'spam' if p else 'not spam'))<jupyter_output><empty_output>
| no_license | /Coursera projects/Support Vector Machine Spam Classification Project/.ipynb_checkpoints/exercise6-checkpoint.ipynb | Vgwang/Data-Science-and-Machine-Learning-Projects | 22 |
<jupyter_start><jupyter_text>
Universidade Federal de Mato Grosso do Sul
Campus de Campo Grande
Statistics – Prof. Cássio Pinho dos Reis
9th EXERCISE LIST - part 3
Class: Software Engineering
RGA: 2021.1906.069-7
Student: Maycon Felipe da Silva Mota
---
<jupyter_code># Define the functions I will use in the exercises below
import math
import statistics
def CalcularTCal(mediaAmostra, valHip0, desvioPadraoAmostral, tamanhoAmostral):
calculo_1 = mediaAmostra - valHip0
calculo_2 = desvioPadraoAmostral/(math.sqrt(tamanhoAmostral))
resultado = calculo_1/calculo_2
print(f"O valor TCal é {resultado:.3f}")
return resultado
<jupyter_output><empty_output><jupyter_text># Question 1 – On a test worth 20 points, a sample of 25 students from the statistics class had a mean of 13.5 points and a standard deviation of 4.4 points. It is known that, to pass this course, a student must score 16 points. Test, at the 0.05 level, µ = 16 against µ ≠ 16 and µ < 16, and interpret the results.<jupyter_code>##### Hypotheses
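# (Added check) t = (13.5 - 16) / (4.4 / sqrt(25)) = -2.5 / 0.88 ≈ -2.841,
# matching the TCal value printed by CalcularTCal below.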
# H0: µ = 16
# H1: µ != 16
# H1: µ < 16
# Critical Student's t value = 2.06
# Computing TCal
## Two-tailed test
print("Hipótese bicaudal")
valorTStudent = -2.06
valor_TCAL = CalcularTCal(13.5,16,4.4,25)
if(valor_TCAL < valorTStudent):
print(f"Rejeitar H0, pois {valor_TCAL:.3f} está na R.C ao nível de 5%, uma vez que {valor_TCAL:.3f} < {valorTStudent}.")
else:
print(f"Não rejeitar H0, pois {valor_TCAL:.3f} está na R.C ao nível de 5%, uma vez que {valor_TCAL:.3f} < {valorTStudent}.")
print("\nHipótese unicaudal")
# One-tailed test (computation for the alternative hypothesis)
valorTStudent = 1.71
valor_TCAL = CalcularTCal(13.5,16,4.4,25)
if(valor_TCAL < valorTStudent):
print(f"Rejeitar H0, pois {valor_TCAL:.3f} está na R.C ao nível de 5%, uma vez que {valor_TCAL:.3f} < {valorTStudent}.")
else:
print(f"Não rejeitar H0, pois {valor_TCAL:.3f} está na R.C ao nível de 5%, uma vez que {valor_TCAL:.3f} < {valorTStudent}.")
<jupyter_output>Hipótese bicaudal
O valor TCal é -2.841
Rejeitar H0, pois -2.841 está na R.C ao nível de 5%, uma vez que -2.841 < -2.06.
Hipótese unicaudal
O valor TCal é -2.841
Rejeitar H0, pois -2.841 está na R.C ao nível de 5%, uma vez que -2.841 < 1.71.
<jupyter_text># Question 2 - A sample of 15 bolts was taken, and the measurements below were obtained for their diameters (mm). Test µ = 12.5 against µ ≠ 12.5 or µ < 12.5 at the 0.05 level, and interpret the results<jupyter_code># Critical Student's t value:
parafusos = [10, 10, 10, 11, 11, 12, 12, 12, 12, 13, 13, 14, 14, 14, 15]
media_parafusos = statistics.mean(parafusos)
desvio_padrao = statistics.stdev(parafusos)
valorTStudent = -2.1448
valor_TCAL = CalcularTCal(media_parafusos,12.5, desvio_padrao,15)
# Two-tailed test: H0: µ = 12.5 vs. H1: µ ≠ 12.5
if(valor_TCAL < valorTStudent):
print(f"Rejeitar H0, pois {valor_TCAL:.3f} está na R.C ao nível de 5%, uma vez que {valor_TCAL:.3f} < {valorTStudent}.")
else:
print(f"Não rejeitar H0, pois {valor_TCAL:.3f} não está na R.C ao nível de 5%, uma vez que {valor_TCAL:.3f} < {valorTStudent}.")
# H0: µ = 12.5
# H1: µ ≠ 12.5
print("\nHipótese unicaudal")
# One-tailed test (computation for the alternative hypothesis)
# H1: µ < 12.5
valorTStudent = -1.7613
valor_TCAL = CalcularTCal(media_parafusos,12.5, desvio_padrao,15)
if(valor_TCAL < valorTStudent):
print(f"Rejeitar H0, pois {valor_TCAL:.3f} está na R.C ao nível de 5%, uma vez que {valor_TCAL:.3f} < {valorTStudent}.")
else:
print(f"Não rejeitar H0, pois {valor_TCAL:.3f} não está na R.C ao nível de 5%, uma vez que {valor_TCAL:.3f} < {valorTStudent}.")
<jupyter_output>O valor TCal é -0.721
Não rejeitar H0, pois -0.721 não está na R.C ao nível de 5%, uma vez que -0.721 < -2.1448.
Hipótese unicaudal
O valor TCal é -0.721
Não rejeitar H0, pois -0.721 não está na R.C ao nível de 5%, uma vez que -0.721 < -1.7613.
<jupyter_text># Question 3 – The height (cm) of 20 newborns in a hospital was recorded; the measurements are below. Test whether the mean height of these newborns is 50 cm or less at the 0.05 level, and interpret the results<jupyter_code># Critical Student's t value:
bebes = [41,50,52,49,49,54,50,47,52,49,50,52,50,47,49,51,46,50,49,50]
media_bebes = statistics.mean(bebes)
desvio_padrao = statistics.stdev(bebes)
valorTStudent = -2.0930
valor_TCAL = CalcularTCal(media_bebes,50, desvio_padrao, 20)
# Hypotheses
# H0: µ = 50
# H1: µ < 50
if(valor_TCAL < valorTStudent):
print(f"Rejeitar H0, pois {valor_TCAL:.3f} está na R.C ao nível de 5%, uma vez que {valor_TCAL:.3f} < {valorTStudent}.")
else:
print(f"Não rejeitar H0, pois {valor_TCAL:.3f} não está na R.C ao nível de 5%, uma vez que {valor_TCAL:.3f} < {valorTStudent}.")
print("\nHipótese unicaudal")
# One-tailed test (computation for the alternative hypothesis)
valorTStudent = -1.7291
valor_TCAL = CalcularTCal(media_bebes,50, desvio_padrao, 20)
if(valor_TCAL < valorTStudent):
print(f"Rejeitar H0, pois {valor_TCAL:.3f} está na R.C ao nível de 5%, uma vez que {valor_TCAL:.3f} < {valorTStudent}.")
else:
print(f"Não rejeitar H0, pois {valor_TCAL:.3f} não está na R.C ao nível de 5%, uma vez que {valor_TCAL:.3f} < {valorTStudent}.")
<jupyter_output>O valor TCal é -1.069
Não rejeitar H0, pois -1.069 não está na R.C ao nível de 5%, uma vez que -1.069 < -2.093.
Hipótese unicaudal
O valor TCal é -1.069
Não rejeitar H0, pois -1.069 não está na R.C ao nível de 5%, uma vez que -1.069 < -1.7291.
<jupyter_text># Question 4 – 15 plants were subjected to a new irrigation method for 3 weeks, and the height increases (in cm) of the plants are listed below. Test whether the mean height increase was 10 centimeters at the 0.10 level, and interpret the results
<jupyter_code># Valor T Student:
plantas = [10,10,10,11,11,12,12,12,12,13,13,14,14,14,15]
media_plantas = statistics.mean(plantas)
desvio_padrao = statistics.stdev(plantas)
valorTStudent = 1.7613
valor_TCAL = CalcularTCal(media_plantas,10, desvio_padrao, 15)
# Hypotheses
# H0: µ = 10
# H1: µ ≠ 10
if(valor_TCAL < valorTStudent):
print(f"Rejeitar H0, pois {valor_TCAL:.3f} está na R.C ao nível de 5%, uma vez que {valor_TCAL:.3f} > {valorTStudent}.")
else:
print(f"Não rejeitar H0, pois {valor_TCAL:.3f} não está na R.C ao nível de 5%, uma vez que {valor_TCAL:.3f} < {valorTStudent}.")
print("\nHipótese unicaudal")
# One-tailed test (computation for the alternative hypothesis)
valorTStudent = -0.8680
valor_TCAL = CalcularTCal(media_plantas,10, desvio_padrao, 15)
if(valor_TCAL < valorTStudent):
print(f"Rejeitar H0, pois {valor_TCAL:.3f} está na R.C ao nível de 5%, uma vez que {valor_TCAL:.3f} < {valorTStudent}.")
else:
print(f"Não rejeitar H0, pois {valor_TCAL:.3f} não está na R.C ao nível de 5%, uma vez que {valor_TCAL:.3f} < {valorTStudent}.")
<jupyter_output>O valor TCal é 5.284
Não rejeitar H0, pois 5.284 não está na R.C ao nível de 5%, uma vez que 5.284 < 1.7613.
Hipótese unicaudal
O valor TCal é 5.284
Não rejeitar H0, pois 5.284 não está na R.C ao nível de 5%, uma vez que 5.284 < -0.868.
| no_license | /Lista de Exercícios - 10 parte 1.ipynb | FelipeGaleao/Learning_Statisticals | 5 |
<jupyter_start><jupyter_text># [Plato’s cube and the natural geometry of fragmentation](https://arxiv.org/abs/1912.04628)
The basic idea here is that the number of vertices and edges produced by lines drawn at random on a sheet of paper should follow a certain probability (6). Here I try to simulate this. The simulation can be adjusted for any size or shape of paper and any topology. ## Helper Functions### Create pseudo random line segments<jupyter_code>import numpy as np
import pylab as plt
from matplotlib import collections as mc
from shapely.geometry import LineString
N = 5
lines = []
plt.figure(figsize=(16, 12))
for _ in range(int(N/2)):
line_h = LineString([(np.random.random()*10,0),(np.random.random()*10,10)])
line_v = LineString([(np.random.random()*10,0),(np.random.random()*10,10)])
lines.append(line_h) # horizontal
lines.append(line_v) # vertical
plt.plot(line_h,color='black')
plt.plot(line_v,color='black')
plt.title('Five Random Lines')
plt.rcParams.update({'font.size': 28})
plt.rc('font', family='serif')
plt.axis('off')
plt.savefig('5randomlines.png')
plt.show()
<jupyter_output><empty_output><jupyter_text>### Find all combinations of line segments (used later)<jupyter_code>import itertools
combos = itertools.combinations(lines, 2)
for seg in combos:
print("Expect two LineString objects, which define two lines from the above plot.")
print(seg)
break<jupyter_output>Expect two LineString objects, which define two lines from the above plot.
(<shapely.geometry.linestring.LineString object at 0x1633593c8>, <shapely.geometry.linestring.LineString object at 0x163353ac8>)
<jupyter_text>### Check if the lines intersect#### Test<jupyter_code>from shapely.geometry import LineString
import matplotlib.pyplot as plt
print("Below is a test to check if the lines intersect.")
line = LineString([(0, 0), (1, 1)])
other = LineString([(0, 1), (1, 0)])
print(line.intersects(other))
print(line.intersection(other))
# True
plt.plot(line)
plt.plot(other)
plt.show()
line = LineString([(0, 0), (1, 1)])
other = LineString([(0, 1), (1, 2)])
print(line.intersects(other))
print(line.intersection(other))
# False
plt.plot(line)
plt.plot(other)
plt.show()<jupyter_output>Below is a test to check if the lines intersect.
True
POINT (0.5 0.5)
<jupyter_text>#### Build<jupyter_code>from shapely.geometry import LineString
import matplotlib.pyplot as plt
print("Expect either point of intersection or empty")
cnt = 0
intersections = []
for seg in combos:
cnt += 1
x = seg[0].intersection(seg[1])
intersections.append(x)
if cnt < 10:
print(x)
intersections = [x.coords[:][0] for x in intersections if x.is_empty is False]<jupyter_output>Expect either point of intersection or empty
POINT (3.368156647582434 0.724854361875451)
POINT (3.010649571695428 2.128943921035383)
POINT (2.557823102901161 3.907395201664707)
LINESTRING EMPTY
LINESTRING EMPTY
LINESTRING EMPTY
LINESTRING EMPTY
LINESTRING EMPTY
POINT (5.210193419337682 3.704633578142987)
<jupyter_text>#### Check the length, which provides the number of nodes or intersections. Use this for mosaics.<jupyter_code>xx = [x.coords[:] for x in intersections if x.is_empty is False]
print(len(xx))
np.unique(xx)<jupyter_output><empty_output><jupyter_text>## Build Clean Code<jupyter_code>import numpy as np
import pylab as pl
from shapely.geometry import Point, Polygon, LineString, MultiPolygon
import itertools
def count_intersections(N=2):
'''
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
This code creates pseudo random line segments
Lines within square [(0,10),(0,10)]
Counts number of intersections
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
INPUT:
N [number of lines to draw]
OUTPUT:
len(xx) [number of intersections]
xx [intersections]
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
'''
#N = 10
lines = []
for _ in range(int(N/2)):
line_h = LineString([(np.random.random()*10,0),(np.random.random()*10,10)])
line_v = LineString([(np.random.random()*10,0),(np.random.random()*10,10)])
lines.append(line_h) # horizontal
lines.append(line_v) # vertical
combos = itertools.combinations(lines, 2)
intersections = []
for seg in combos:
x = seg[0].intersection(seg[1])
intersections.append(x)
xx = [x.coords[:][0] for x in intersections if x.is_empty is False]
return len(xx), xx, lines<jupyter_output><empty_output><jupyter_text>Now we will check the number of intersections as a function of lines [2:20]<jupyter_code>NUM = 50 # total number of lines drawn
intersections_global = []
for k in range(2,NUM+1):
intersections_local = []
for q in range(100):
num_xs, _, lines = count_intersections(k)
intersections_local.append(num_xs)
intersections_global.append(intersections_local)
plt.figure(figsize=(16,12))
fig = plt.boxplot(intersections_global)
plt.rcParams.update({'font.size': 28})
plt.rc('font', family='serif')
plt.title('Number of Intersections vs Lines Drawn')
plt.xlabel('Number of Lines Drawn')
plt.ylabel('Counts')
plt.xticks(rotation=45)
plt.savefig('intersections_counted.png')
plt.show()
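# (Added, hedged sketch) The next section asks whether these counts follow a power
# law. One quick check: fit a line to log(mean count) vs. log(number of lines);
# for independently drawn pairs the exponent should come out close to 2.
mean_counts = [np.mean(counts) for counts in intersections_global]
slope, intercept = np.polyfit(np.log(np.arange(2, NUM + 1)), np.log(mean_counts), 1)
print('Estimated power-law exponent: %.2f' % slope)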
<jupyter_output><empty_output><jupyter_text>### Does this follow a power law? ## Now we look at the mosaics created from the randomly created lines### Find all combinations of points for possible mosaics ($n \geq 3$)<jupyter_code>intersections_count, intersections, lines = count_intersections(5)
mosaics = []
for NN in range(3,intersections_count+1):
mosaics.extend(itertools.combinations(intersections, NN))
mosaics = [mosaic for mosaic in mosaics if Polygon(mosaic).is_valid]
len(mosaics)
lines
display(Polygon(mosaics[0]))
print(Polygon(mosaics[0]).is_valid)
display(Polygon(mosaics[1]))
print(Polygon(mosaics[1]).is_valid)
display(Polygon(mosaics[-2]))
print(Polygon(mosaics[-2]).is_valid)
display(Polygon(mosaics[-1]))
print(Polygon(mosaics[-1]).is_valid)<jupyter_output><empty_output><jupyter_text>### Remove mosaic if it intersects with any of the lines & Count number of points/nodes for remaining mosaics<jupyter_code>mosaics_pass = []
mosaics_size = []
for mosaic in mosaics:
tst = []
for line in lines:
tst.append(line.within(Polygon(mosaic)))
if not sum(tst*1):
mosaics_pass.append(mosaic)
mosaics_size.append(len(mosaic))
mosaics_size<jupyter_output><empty_output><jupyter_text>## Rebuild Clean Code<jupyter_code>import numpy as np
import pylab as plt
from shapely.geometry import Point, Polygon, LineString, MultiPolygon
import itertools
from scipy import stats
def count_intersections(N=2):
'''
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
This code creates pseudo random line segments
Lines within square [(0,10),(0,10)]
Counts number of intersections
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
INPUT:
N [number of lines to draw]
OUTPUT:
len(xx) [number of intersections]
xx [intersections]
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
'''
#N = 10
lines = []
for _ in range(int(N/2)):
line_h = LineString([(np.random.random()*10,0),(np.random.random()*10,10)])
line_v = LineString([(np.random.random()*10,0),(np.random.random()*10,10)])
lines.append(line_h) # horizontal
lines.append(line_v) # vertical
combos = itertools.combinations(lines, 2)
intersections = []
for seg in combos:
x = seg[0].intersection(seg[1])
intersections.append(x)
xx = [x.coords[:][0] for x in intersections if x.is_empty is False]
return len(xx), xx, lines
def count_mosaics(max_lines):
'''
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    This code takes the line intersections from "count_intersections",
    creates closed polygons from all combinations of points,
    removes invalid polygons,
    removes polygons that contain any of the drawn lines within them,
    and counts the number of nodes used to create each remaining polygon.
    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    INPUT:
        max_lines [maximum number of lines to draw; runs for 3..max_lines lines]
    OUTPUT:
        global_mosaics_pass [the valid mosaics found for each line count]
        global_mosaics_size [the number of nodes of each valid mosaic]
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
'''
global_mosaics_pass = []
global_mosaics_size = []
for num in range(3,max_lines+1):
intersections_count, intersections, lines = count_intersections(num)
mosaics = []
for NN in range(3,intersections_count+1):
mosaics.extend(itertools.combinations(intersections, NN))
mosaics = [mosaic for mosaic in mosaics if Polygon(mosaic).is_valid]
local_mosaics_pass = []
local_mosaics_size = []
for mosaic in mosaics:
tst = []
for line in lines:
tst.append(line.within(Polygon(mosaic)))
if not sum(tst*1):
local_mosaics_pass.append(mosaic)
local_mosaics_size.append(len(mosaic))
global_mosaics_pass.append([local_mosaics_pass])
global_mosaics_size.append([local_mosaics_size])
return global_mosaics_pass, global_mosaics_size
max_lines = 10
global_mosaics_pass, global_mosaics_size = count_mosaics(max_lines)
plt.figure(figsize=(20,10))
for item in global_mosaics_size[0][0]:
plt.boxplot(global_mosaics_size[0])
plt.show()
print("Mean: ", np.mean(global_mosaics_size[0]))
print("Max: ", np.max(global_mosaics_size[0]))
print("Mode: ", stats.mode(global_mosaics_size[0])[0][0])
print("Median: ", np.median(global_mosaics_size[0]))
global_mosaics_pass[0]<jupyter_output><empty_output>
| no_license | /geogeometry.ipynb | STEMlib/geogeometry | 10 |
<jupyter_start><jupyter_text># Model Selection, Underfitting, and Overfitting
:label:`sec_model_selection`
As machine learning scientists,
our goal is to discover *patterns*.
But how can we be sure that we have
truly discovered a *general* pattern
and not simply memorized our data?
For example, imagine that we wanted to hunt
for patterns among genetic markers
linking patients to their dementia status,
where the labels are drawn from the set
$\{\text{dementia}, \text{mild cognitive impairment}, \text{healthy}\}$.
Because each person's genes identify them uniquely
(ignoring identical siblings),
it is possible to memorize the entire dataset.
We do not want our model to say
*"That's Bob! I remember him! He has dementia!"*
The reason why is simple.
When we deploy the model in the future,
we will encounter patients
that the model has never seen before.
Our predictions will only be useful
if our model has truly discovered a *general* pattern.
To recapitulate more formally,
our goal is to discover patterns
that capture regularities in the underlying population
from which our training set was drawn.
If we are successful in this endeavor,
then we could successfully assess risk
even for individuals that we have never encountered before.
This problem---how to discover patterns that *generalize*---is
the fundamental problem of machine learning.
The danger is that when we train models,
we access just a small sample of data.
The largest public image datasets contain
roughly one million images.
More often, we must learn from only thousands
or tens of thousands of data examples.
In a large hospital system, we might access
hundreds of thousands of medical records.
When working with finite samples, we run the risk
that we might discover apparent associations
that turn out not to hold up when we collect more data.
The phenomenon of fitting our training data
more closely than we fit the underlying distribution is called *overfitting*, and the techniques used to combat overfitting are called *regularization*.
In the previous sections, you might have observed
this effect while experimenting with the Fashion-MNIST dataset.
If you altered the model structure or the hyperparameters during the experiment, you might have noticed that with enough neurons, layers, and training epochs, the model can eventually reach perfect accuracy on the training set, even as the accuracy on test data deteriorates.
## Training Error and Generalization Error
In order to discuss this phenomenon more formally,
we need to differentiate between training error and generalization error.
The *training error* is the error of our model
as calculated on the training dataset,
while *generalization error* is the expectation of our model's error
were we to apply it to an infinite stream of additional data examples
drawn from the same underlying data distribution as our original sample.
Problematically, we can never calculate the generalization error exactly.
That is because the stream of infinite data is an imaginary object.
In practice, we must *estimate* the generalization error
by applying our model to an independent test set
constituted of a random selection of data examples
that were withheld from our training set.
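Written a bit more explicitly (a sketch in generic notation, since the loss function is not introduced here): for a model $f$ and a loss $l$, the training error over $n$ examples is $\frac{1}{n}\sum_{i=1}^n l\left(x^{(i)}, y^{(i)}, f(x^{(i)})\right)$, while the generalization error is the expectation $E_{(\mathbf{x}, y) \sim P}\left[l(\mathbf{x}, y, f(\mathbf{x}))\right]$ under the data distribution $P$.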
The following three thought experiments
will help illustrate this situation better.
Consider a college student trying to prepare for his final exam.
A diligent student will strive to practice well
and test his abilities using exams from previous years.
Nonetheless, doing well on past exams is no guarantee
that he will excel when it matters.
For instance, the student might try to prepare
by rote learning the answers to the exam questions.
This requires the student to memorize many things.
He might even remember the answers for past exams perfectly.
Another student might prepare by trying to understand
the reasons for giving certain answers.
In most cases, the latter student will do much better.
Likewise, consider a model that simply uses a lookup table to answer questions. If the set of allowable inputs is discrete and reasonably small, then perhaps after viewing *many* training examples, this approach would perform well. Still this model has no ability to do better than random guessing when faced with examples that it has never seen before.
In reality the input spaces are far too large to memorize the answers corresponding to every conceivable input. For example, consider the black and white $28\times28$ images. If each pixel can take one among $256$ grayscale values, then there are $256^{784}$ possible images. That means that there are far more low-resolution grayscale thumbnail-sized images than there are atoms in the universe. Even if we could encounter such data, we could never afford to store the lookup table.
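(For a sense of scale: $256^{784} = 2^{6272} \approx 10^{1888}$, whereas the number of atoms in the observable universe is commonly estimated to be on the order of $10^{80}$.)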
Last, consider the problem of trying
to classify the outcomes of coin tosses (class 0: heads, class 1: tails)
based on some contextual features that might be available.
Suppose that the coin is fair.
No matter what algorithm we come up with,
the generalization error will always be $\frac{1}{2}$.
However, for most algorithms,
we should expect our training error to be considerably lower,
depending on the luck of the draw,
even if we did not have any features!
Consider the dataset {0, 1, 1, 1, 0, 1}.
Our feature-less algorithm would have to fall back on always predicting
the *majority class*, which appears from our limited sample to be *1*.
In this case, the model that always predicts class 1
will incur an error of $\frac{1}{3}$,
considerably better than our generalization error.
As we increase the amount of data,
the probability that the fraction of heads
will deviate significantly from $\frac{1}{2}$ diminishes,
and our training error would come to match the generalization error.
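A tiny simulation makes this concrete (a hedged sketch; the sample sizes and seed are arbitrary choices, not something from the text):
```
import numpy as np

rng = np.random.default_rng(0)
for n in [6, 100, 10000]:
    tosses = rng.integers(0, 2, size=n)        # n fair coin flips, labels 0 or 1
    majority = int(tosses.mean() >= 0.5)       # feature-less majority-class predictor
    train_error = np.mean(tosses != majority)  # error measured on the sample itself
    print(n, train_error)
```
The generalization error stays at $\frac{1}{2}$ no matter what, while the training error of the majority-class rule is at most $\frac{1}{2}$ and creeps up toward it as the sample grows.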
### Statistical Learning Theory
Since generalization is the fundamental problem in machine learning,
you might not be surprised to learn
that many mathematicians and theorists have dedicated their lives
to developing formal theories to describe this phenomenon.
In their [eponymous theorem](https://en.wikipedia.org/wiki/Glivenko%E2%80%93Cantelli_theorem), Glivenko and Cantelli
derived the rate at which the training error
converges to the generalization error.
In a series of seminal papers, [Vapnik and Chervonenkis](https://en.wikipedia.org/wiki/Vapnik%E2%80%93Chervonenkis_theory)
extended this theory to more general classes of functions.
This work laid the foundations of statistical learning theory.
In the standard supervised learning setting, which we have addressed up until now and will stick with throughout most of this book,
we assume that both the training data and the test data
are drawn *independently* from *identical* distributions.
This is commonly called the *i.i.d. assumption*,
which means that the process that samples our data has no memory.
In other words,
the second example drawn and the third drawn
are no more correlated than the second and the two-millionth sample drawn.
Being a good machine learning scientist requires thinking critically,
and already you should be poking holes in this assumption,
coming up with common cases where the assumption fails.
What if we train a mortality risk predictor
on data collected from patients at UCSF Medical Center,
and apply it on patients at Massachusetts General Hospital?
These distributions are simply not identical.
Moreover, draws might be correlated in time.
What if we are classifying the topics of Tweets?
The news cycle would create temporal dependencies
in the topics being discussed, violating any assumptions of independence.
Sometimes we can get away with minor violations of the i.i.d. assumption
and our models will continue to work remarkably well.
After all, nearly every real-world application
involves at least some minor violation of the i.i.d. assumption,
and yet we have many useful tools for
various applications such as
face recognition,
speech recognition, and language translation.
Other violations are sure to cause trouble.
Imagine, for example, if we try to train
a face recognition system by training it
exclusively on university students
and then want to deploy it as a tool
for monitoring geriatrics in a nursing home population.
This is unlikely to work well since college students
tend to look considerably different from the elderly.
In subsequent chapters, we will discuss problems
arising from violations of the i.i.d. assumption.
For now, even taking the i.i.d. assumption for granted,
understanding generalization is a formidable problem.
Moreover, elucidating the precise theoretical foundations
that might explain why deep neural networks generalize as well as they do
continues to vex the greatest minds in learning theory.
When we train our models, we attempt to search for a function
that fits the training data as well as possible.
If the function is so flexible that it can catch on to spurious patterns
just as easily as to true associations,
then it might perform *too well* without producing a model
that generalizes well to unseen data.
This is precisely what we want to avoid or at least control.
Many of the techniques in deep learning are heuristics and tricks
aimed at guarding against overfitting.
### Model Complexity
When we have simple models and abundant data,
we expect the generalization error to resemble the training error.
When we work with more complex models and fewer examples,
we expect the training error to go down but the generalization gap to grow.
What precisely constitutes model complexity is a complex matter.
Many factors govern whether a model will generalize well.
For example, a model with more parameters might be considered more complex.
A model whose parameters can take a wider range of values
might be more complex.
Often with neural networks, we think of a model
that takes more training iterations as more complex,
and one subject to *early stopping* (fewer training iterations) as less complex.
It can be difficult to compare the complexity among members
of substantially different model classes
(say, decision trees vs. neural networks).
For now, a simple rule of thumb is quite useful:
a model that can readily explain arbitrary facts
is what statisticians view as complex,
whereas one that has only a limited expressive power
but still manages to explain the data well
is probably closer to the truth.
In philosophy, this is closely related to Popper's
criterion of falsifiability
of a scientific theory: a theory is good if it fits data
and if there are specific tests that can be used to disprove it.
This is important since all statistical estimation is
*post hoc*,
i.e., we estimate after we observe the facts,
hence vulnerable to the associated fallacy.
For now, we will put the philosophy aside and stick to more tangible issues.
In this section, to give you some intuition,
we will focus on a few factors that tend
to influence the generalizability of a model class:
1. The number of tunable parameters. When the number of tunable parameters, sometimes called the *degrees of freedom*, is large, models tend to be more susceptible to overfitting.
1. The values taken by the parameters. When weights can take a wider range of values, models can be more susceptible to overfitting.
1. The number of training examples. It is trivially easy to overfit a dataset containing only one or two examples even if your model is simple. But overfitting a dataset with millions of examples requires an extremely flexible model.
## Model Selection
In machine learning, we usually select our final model
after evaluating several candidate models.
This process is called *model selection*.
Sometimes the models subject to comparison
are fundamentally different in nature
(say, decision trees vs. linear models).
At other times, we are comparing
members of the same class of models
that have been trained with different hyperparameter settings.
With MLPs, for example,
we may wish to compare models with
different numbers of hidden layers,
different numbers of hidden units,
and various choices of the activation functions
applied to each hidden layer.
In order to determine the best among our candidate models,
we will typically employ a validation dataset.
### Validation Dataset
In principle we should not touch our test set
until after we have chosen all our hyperparameters.
Were we to use the test data in the model selection process,
there is a risk that we might overfit the test data.
Then we would be in serious trouble.
If we overfit our training data,
there is always the evaluation on test data to keep us honest.
But if we overfit the test data, how would we ever know?
Thus, we should never rely on the test data for model selection.
And yet we cannot rely solely on the training data
for model selection either because
we cannot estimate the generalization error
on the very data that we use to train the model.
In practical applications, the picture gets muddier.
While ideally we would only touch the test data once,
to assess the very best model or to compare
a small number of models to each other,
real-world test data is seldom discarded after just one use.
We can seldom afford a new test set for each round of experiments.
The common practice to address this problem
is to split our data three ways,
incorporating a *validation dataset* (or *validation set*)
in addition to the training and test datasets.
The result is a murky practice where the boundaries
between validation and test data are worryingly ambiguous.
Unless explicitly stated otherwise, in the experiments in this book
we are really working with what should rightly be called
training data and validation data, with no true test sets.
Therefore, the accuracy reported in each experiment of the book is really the validation accuracy and not a true test set accuracy.
### $K$-Fold Cross-Validation
When training data is scarce,
we might not even be able to afford to hold out
enough data to constitute a proper validation set.
One popular solution to this problem is to employ
$K$*-fold cross-validation*.
Here, the original training data is split into $K$ non-overlapping subsets.
Then model training and validation are executed $K$ times,
each time training on $K-1$ subsets and validating
on a different subset (the one not used for training in that round).
Finally, the training and validation errors are estimated
by averaging over the results from the $K$ experiments.
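In code, the splitting-and-averaging procedure might look like the following minimal sketch (illustrative only: the least squares model and the synthetic data are placeholders, not part of this book's running example).

```python
# A minimal K-fold cross-validation loop using only NumPy.
import numpy as np

np.random.seed(0)
X = np.random.normal(size=(100, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + 0.1 * np.random.normal(size=100)

K = 5
folds = np.array_split(np.random.permutation(len(X)), K)

val_errors = []
for i in range(K):
    val_idx = folds[i]
    train_idx = np.concatenate([folds[j] for j in range(K) if j != i])
    # Train on the K-1 remaining folds (here: least squares), validate on the held-out fold
    w, *_ = np.linalg.lstsq(X[train_idx], y[train_idx], rcond=None)
    val_errors.append(np.mean((X[val_idx] @ w - y[val_idx]) ** 2))

print(f'average validation error over {K} folds: {np.mean(val_errors):.4f}')
```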
## Underfitting or Overfitting?
When we compare the training and validation errors,
we want to be mindful of two common situations.
First, we want to watch out for cases
when our training error and validation error are both substantial
but there is only a small gap between them.
If the model is unable to reduce the training error,
that could mean that our model is too simple
(i.e., insufficiently expressive)
to capture the pattern that we are trying to model.
Moreover, since the *generalization gap*
between our training and validation errors is small,
we have reason to believe that we could get away with a more complex model.
This phenomenon is known as *underfitting*.
On the other hand, as we discussed above,
we want to watch out for the cases
when our training error is significantly lower
than our validation error, indicating severe *overfitting*.
Note that overfitting is not always a bad thing.
With deep learning especially, it is well known
that the best predictive models often perform
far better on training data than on holdout data.
Ultimately, we usually care more about the validation error
than about the gap between the training and validation errors.
Whether we overfit or underfit can depend
both on the complexity of our model
and the size of the available training datasets,
two topics that we discuss below.
### Model Complexity
To illustrate some classical intuition
about overfitting and model complexity,
we give an example using polynomials.
Given training data consisting of a single feature $x$
and a corresponding real-valued label $y$,
we try to find the polynomial of degree $d$
$$\hat{y}= \sum_{i=0}^d x^i w_i$$
to estimate the labels $y$.
This is just a linear regression problem
where our features are given by the powers of $x$,
the model's weights are given by $w_i$,
and the bias is given by $w_0$ since $x^0 = 1$ for all $x$.
Since this is just a linear regression problem,
we can use the squared error as our loss function.
A higher-order polynomial function is more complex
than a lower-order polynomial function,
since the higher-order polynomial has more parameters
and the model function's selection range is wider.
Fixing the training dataset,
higher-order polynomial functions should always
achieve lower (at worst, equal) training error
relative to lower degree polynomials.
In fact, whenever the data examples each have a distinct value of $x$,
a polynomial function with degree equal to the number of data examples
can fit the training set perfectly.
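The following short sketch (ours, not part of the original text) illustrates this numerically: a polynomial of degree $n-1$ interpolates $n$ points with distinct inputs exactly, driving the training error to zero.

```python
# A high-degree polynomial can fit a small training set perfectly.
import numpy as np

x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.sin(x)                                 # arbitrary labels
coeffs = np.polyfit(x, y, deg=len(x) - 1)     # degree n - 1 for n distinct points
print(np.allclose(np.polyval(coeffs, x), y))  # True: zero training error
```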
We visualize the relationship between polynomial degree
and underfitting vs. overfitting in :numref:`fig_capacity_vs_error`.

:label:`fig_capacity_vs_error`
### Dataset Size
The other big consideration to bear in mind is the dataset size.
Fixing our model, the fewer samples we have in the training dataset,
the more likely (and more severely) we are to encounter overfitting.
As we increase the amount of training data,
the generalization error typically decreases.
Moreover, in general, more data never hurt.
For a fixed task and data distribution,
there is typically a relationship between model complexity and dataset size.
Given more data, we might profitably attempt to fit a more complex model.
Absent sufficient data, simpler models may be more difficult to beat.
For many tasks, deep learning only outperforms linear models
when many thousands of training examples are available.
In part, the current success of deep learning
owes to the current abundance of massive datasets
due to Internet companies, cheap storage, connected devices,
and the broad digitization of the economy.
## Polynomial Regression
We can now explore these concepts interactively
by fitting polynomials to data.
<jupyter_code>from d2l import tensorflow as d2l
import tensorflow as tf
import numpy as np
import math<jupyter_output><empty_output><jupyter_text>### Generating the Dataset
First we need data. Given $x$, we will use the following cubic polynomial to generate the labels on training and test data:
$$y = 5 + 1.2x - 3.4\frac{x^2}{2!} + 5.6 \frac{x^3}{3!} + \epsilon \text{ where }
\epsilon \sim \mathcal{N}(0, 0.1^2).$$
The noise term $\epsilon$ obeys a normal distribution
with a mean of 0 and a standard deviation of 0.1.
For optimization, we typically want to avoid
very large values of gradients or losses.
This is why the *features*
are rescaled from $x^i$ to $\frac{x^i}{i!}$.
It allows us to avoid very large values for large exponents $i$.
We will synthesize 100 samples each for the training set and test set.
<jupyter_code>max_degree = 20 # Maximum degree of the polynomial
n_train, n_test = 100, 100 # Training and test dataset sizes
true_w = np.zeros(max_degree) # Allocate lots of empty space
true_w[0:4] = np.array([5, 1.2, -3.4, 5.6])
features = np.random.normal(size=(n_train + n_test, 1))
np.random.shuffle(features)
poly_features = np.power(features, np.arange(max_degree).reshape(1, -1))
for i in range(max_degree):
poly_features[:, i] /= math.gamma(i + 1) # `gamma(n)` = (n-1)!
# Shape of `labels`: (`n_train` + `n_test`,)
labels = np.dot(poly_features, true_w)
labels += np.random.normal(scale=0.1, size=labels.shape)<jupyter_output><empty_output><jupyter_text>Again, monomials stored in `poly_features`
are rescaled by the gamma function,
where $\Gamma(n)=(n-1)!$.
Take a look at the first 2 samples from the generated dataset.
The value 1 is technically a feature,
namely the constant feature corresponding to the bias.
<jupyter_code># Convert from NumPy ndarrays to tensors
true_w, features, poly_features, labels = [tf.constant(x, dtype=
tf.float32) for x in [true_w, features, poly_features, labels]]
features[:2], poly_features[:2, :], labels[:2]<jupyter_output><empty_output><jupyter_text>### Training and Testing the Model
Let us first implement a function to evaluate the loss on a given dataset.
<jupyter_code>def evaluate_loss(net, data_iter, loss): #@save
"""Evaluate the loss of a model on the given dataset."""
metric = d2l.Accumulator(2) # Sum of losses, no. of examples
for X, y in data_iter:
l = loss(net(X), y)
metric.add(tf.reduce_sum(l), tf.size(l).numpy())
return metric[0] / metric[1]<jupyter_output><empty_output><jupyter_text>Now define the training function.
<jupyter_code>def train(train_features, test_features, train_labels, test_labels,
num_epochs=400):
loss = tf.losses.MeanSquaredError()
input_shape = train_features.shape[-1]
# Switch off the bias since we already catered for it in the polynomial
# features
net = tf.keras.Sequential()
net.add(tf.keras.layers.Dense(1, use_bias=False))
batch_size = min(10, train_labels.shape[0])
train_iter = d2l.load_array((train_features, train_labels), batch_size)
test_iter = d2l.load_array((test_features, test_labels), batch_size,
is_train=False)
trainer = tf.keras.optimizers.SGD(learning_rate=.01)
animator = d2l.Animator(xlabel='epoch', ylabel='loss', yscale='log',
xlim=[1, num_epochs], ylim=[1e-3, 1e2],
legend=['train', 'test'])
for epoch in range(num_epochs):
d2l.train_epoch_ch3(net, train_iter, loss, trainer)
if epoch == 0 or (epoch + 1) % 20 == 0:
animator.add(epoch + 1, (evaluate_loss(net, train_iter, loss),
evaluate_loss(net, test_iter, loss)))
print('weight:', net.get_weights()[0].T)<jupyter_output><empty_output><jupyter_text>### Third-Order Polynomial Function Fitting (Normal)
We will begin by first using a third-order polynomial function, which is the same order as that of the data generation function.
The results show that this model's training and test losses can be both effectively reduced.
The learned model parameters are also close
to the true values $w = [5, 1.2, -3.4, 5.6]$.
<jupyter_code># Pick the first four dimensions, i.e., 1, x, x^2/2!, x^3/3! from the
# polynomial features
train(poly_features[:n_train, :4], poly_features[n_train:, :4],
labels[:n_train], labels[n_train:])<jupyter_output>weight: [[ 5.005555 1.2016203 -3.4087229 5.6142106]]
<jupyter_text>### Linear Function Fitting (Underfitting)
Let us take another look at linear function fitting.
After the decline in early epochs,
it becomes difficult to further decrease
this model's training loss.
After the last training epoch has completed,
the training loss is still high.
When used to fit nonlinear patterns
(like the third-order polynomial function here)
linear models are liable to underfit.
<jupyter_code># Pick the first two dimensions, i.e., 1, x, from the polynomial features
train(poly_features[:n_train, :2], poly_features[n_train:, :2],
labels[:n_train], labels[n_train:])<jupyter_output>weight: [[3.450438 3.2629747]]
<jupyter_text>### Higher-Order Polynomial Function Fitting (Overfitting)
Now let us try to train the model
using a polynomial of too high degree.
Here, there are insufficient data to learn that
the higher-degree coefficients should have values close to zero.
As a result, our overly complex model
is so flexible that it is unduly influenced
by the noise in the training data.
Though the training loss can be effectively reduced,
the test loss is still much higher.
It shows that
the complex model overfits the data.
<jupyter_code># Pick all the dimensions from the polynomial features
train(poly_features[:n_train, :], poly_features[n_train:, :],
labels[:n_train], labels[n_train:], num_epochs=1500)<jupyter_output>weight: [[ 4.987612 1.2457238 -3.2961967 5.262131 -0.30287886 1.1887136
0.17722891 0.23300523 0.50503683 0.2780397 0.5294121 -0.11936887
0.30072746 0.25551686 -0.47870854 0.23278084 -0.5179443 -0.22894141
-0.22818288 -0.43829793]]
|
no_license
|
/04-Multilayer-Perceptrons/tensorflow/underfit-overfit.ipynb
|
serivan/DeepLearning
| 8 |
<jupyter_start><jupyter_text># Where should we put taxi stations in New York City?
## 1- Problem description
In this hands-on, we will explore data about taxis in New York City. The purpose is to understand taxi behaviour, generate some insights about the pattern of ride volumes throughout the day across the city, and suggest the best locations for future taxi stops where people can wait to be picked up or get dropped off by cabs.
## 2- Data
We will be using the training data from the New York City Taxi Trip Duration dataset (https://www.kaggle.com/c/nyc-taxi-trip-duration/data), which can be obtained from Kaggle. The dataset includes pickup time, geo-coordinates, number of passengers, and several other variables.
Please download train.csv data and put it under './data_used/'
## 3- Read data<jupyter_code># import librairies
import numpy as np
import pandas as pd
from datetime import timedelta
import datetime as dt
import matplotlib.pyplot as plt
import folium
%matplotlib inline
# Some set up:
np.random.seed(1987)
pd.set_option('display.float_format', lambda x: '%.3f' % x)
plt.rcParams['figure.figsize'] = [8,8]
# Read data:
nyc_taxi_data = pd.read_csv('../data_used/train.csv')<jupyter_output><empty_output><jupyter_text>## 4- First data exploration & cleaning <jupyter_code># Display the first 5 lines of data
nyc_taxi_data.head()
# Display data info
nyc_taxi_data.info()
# Display data description
nyc_taxi_data.describe()
# Trip duration clean-up<jupyter_output><empty_output><jupyter_text>As we noted earlier there are some outliers associated with the trip_duration variable, specifically a 980 hour maximum trip duration and a minimum of 1 second trip duration. We decided to exclude data that lies outside 2 standard deviations from the mean.<jupyter_code>m = np.mean(nyc_taxi_data['trip_duration'])
s = np.std(nyc_taxi_data['trip_duration'])
nyc_taxi_data = nyc_taxi_data[nyc_taxi_data['trip_duration'] <= m + 2*s]
nyc_taxi_data = nyc_taxi_data[nyc_taxi_data['trip_duration'] >= m - 2*s]
# Latitude and Longitude clean-up<jupyter_output><empty_output><jupyter_text>Looking into it, the borders of New York City in coordinates come out to be:
- City_long_border = (-74.03, -73.75)
- City_lat_border = (40.63, 40.85) <jupyter_code>xlim = [-74.03, -73.77]
ylim = [40.63, 40.85]
nyc_taxi_data = nyc_taxi_data[(nyc_taxi_data.pickup_longitude> xlim[0]) & (nyc_taxi_data.pickup_longitude < xlim[1])]
nyc_taxi_data = nyc_taxi_data[(nyc_taxi_data.dropoff_longitude> xlim[0]) & (nyc_taxi_data.dropoff_longitude < xlim[1])]
nyc_taxi_data = nyc_taxi_data[(nyc_taxi_data.pickup_latitude> ylim[0]) & (nyc_taxi_data.pickup_latitude < ylim[1])]
nyc_taxi_data = nyc_taxi_data[(nyc_taxi_data.dropoff_latitude> ylim[0]) & (nyc_taxi_data.dropoff_latitude < ylim[1])]
nyc_taxi_data = nyc_taxi_data[nyc_taxi_data['pickup_longitude'] <= -73.75]
nyc_taxi_data = nyc_taxi_data[nyc_taxi_data['pickup_longitude'] >= -74.03]
nyc_taxi_data = nyc_taxi_data[nyc_taxi_data['pickup_latitude'] <= 40.85]
nyc_taxi_data = nyc_taxi_data[nyc_taxi_data['pickup_latitude'] >= 40.63]
# Date format clean-up
nyc_taxi_data['pickup_datetime'] = pd.to_datetime(nyc_taxi_data.pickup_datetime)
nyc_taxi_data.loc[:, 'pickup_date'] = nyc_taxi_data['pickup_datetime'].dt.date
nyc_taxi_data['dropoff_datetime'] = pd.to_datetime(nyc_taxi_data.dropoff_datetime)
# Create columns month, week, day and hour pick up:
nyc_taxi_data['month'] = nyc_taxi_data.pickup_datetime.apply(lambda x: x.month)
nyc_taxi_data['week'] = nyc_taxi_data.pickup_datetime.apply(lambda x: x.week)
nyc_taxi_data['day'] = nyc_taxi_data.pickup_datetime.apply(lambda x: x.day)
nyc_taxi_data['hour'] = nyc_taxi_data.pickup_datetime.apply(lambda x: x.hour)
# Display new cleaned data
nyc_taxi_data.head()
# Plot trip_duration distribution using hist()
plt.hist(nyc_taxi_data['trip_duration'].values, bins=100)
plt.xlabel('trip_duration')
plt.ylabel('number of train records')
plt.show()<jupyter_output><empty_output><jupyter_text>We see that most trip durations are less than 2000 seconds.<jupyter_code># plot the evolution of the number of trips over time
plt.plot(nyc_taxi_data.groupby('pickup_date').count()[['id']], 'o-')
plt.title('Trips over time.')
plt.ylabel('Number of trips')
plt.show()<jupyter_output><empty_output><jupyter_text>Around 8000 trips per day in New York City!
## 5- Data Visualization using matplotlib & Folium
<jupyter_code># Let's have a look at the drop off and pick up locations
longitude = list(nyc_taxi_data.pickup_longitude) + list(nyc_taxi_data.dropoff_longitude)
latitude = list(nyc_taxi_data.pickup_latitude) + list(nyc_taxi_data.dropoff_latitude)
plt.figure(figsize = (10,8))
plt.plot(longitude,latitude,'.', alpha = 0.4, markersize = 0.05)
plt.title('Taxi pick up and drop off locations in NYC')
plt.xlabel('long borders')
plt.ylabel('lat borders')
plt.show()
# Display NYC map with Folium:
# Function to generate a new New York City map
def generateNYCmap(default_location=[40.737595, -73.993647],default_width='80%', default_height='80%', default_zoom_start=11):
base_map = folium.Map(location=default_location,width=default_width, height=default_height, control_scale=True,zoom_control=True, zoom_start=default_zoom_start)
return base_map
nyc_map = generateNYCmap()
nyc_map
# Create Heatmap of pick up and drop off locations (use HeatMap from folium.plugins; to keep it light, only the later months with month > 4 are used)
from folium.plugins import HeatMap
df_heatMap = nyc_taxi_data[nyc_taxi_data.month>4].copy()
df_heatMap['count'] = 1  # weight column used by the heatmaps below
data = df_heatMap[['pickup_latitude', 'pickup_longitude', 'count']].groupby(['pickup_latitude', 'pickup_longitude']).sum().reset_index().values.tolist()
HeatMap(data, radius=8, max_zoom=13).add_to(nyc_map)
# This function is used to display
def embed_map(m):
from IPython.display import IFrame
m.save('../data_generated/index.html')
return IFrame('../data_generated/index.html', width='100%', height='750px')
embed_map(nyc_map)
# We want to see the evolution of this heatmap over the time (use HeatMapWithTime from folium.plugins)
from folium.plugins import HeatMapWithTime
df_hour_list = []
for hour in df_heatMap.hour.sort_values().unique():
df_hour_list.append(df_heatMap.loc[df_heatMap.hour == hour, ['pickup_latitude', 'pickup_longitude', 'count']].groupby(['pickup_latitude', 'pickup_longitude']).sum().reset_index().values.tolist())
nyc_map_2 = generateNYCmap()
HeatMapWithTime(df_hour_list, radius=5, gradient={0.2: 'blue', 0.4: 'lime', 0.6: 'orange', 1: 'red'}, min_opacity=0.5, max_opacity=0.8, use_local_extrema=True).add_to(nyc_map_2)
embed_map(nyc_map_2)
# Let's create clusters of pick up and drop off locations (Use KMeans)
from sklearn.cluster import KMeans
df_loc = pd.DataFrame()
df_loc['longitude'] = longitude
df_loc['latitude'] = latitude
kmeans = KMeans(n_clusters=15, random_state=2, n_init = 10).fit(df_loc)
df_loc['label'] = kmeans.labels_
df_loc = df_loc.sample(100000)
# Plot clusters
plt.figure(figsize = (10,10))
for label in df_loc.label.unique():
plt.plot(df_loc.longitude[df_loc.label == label],df_loc.latitude[df_loc.label == label],'.', alpha = 0.3, markersize = 0.3)
plt.title('Clusters of New York city for main taxi pick up and drop off locations')
plt.show()
# Let's order clusters by most visited:
df_loc['count']= 1
df_loc.groupby('label').count().sort_values(by='count', ascending=False)<jupyter_output><empty_output><jupyter_text>Let's plot the cluster centers:<jupyter_code>fig,ax = plt.subplots(figsize = (10,10))
for label in df_loc.label.unique():  # the dataframe is named df_loc, not loc_df
    ax.plot(df_loc.longitude[df_loc.label == label],df_loc.latitude[df_loc.label == label],'.', alpha = 0.4, markersize = 0.1, color = 'gray')
ax.plot(kmeans.cluster_centers_[label,0],kmeans.cluster_centers_[label,1],'o', color = 'brown')
ax.annotate(label, (kmeans.cluster_centers_[label,0],kmeans.cluster_centers_[label,1]), color = 'brown', fontsize = 25)
ax.set_title('Cluster Centers')
plt.show()
<jupyter_output><empty_output><jupyter_text>## So where to put taxi stations?<jupyter_code># Finally, add markers to represent taxi stations
nyc_map.add_child(folium.ClickForMarker(popup='Potential taxi stop'))
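# (Added sketch, not in the original notebook.) A concrete way to suggest
# station locations is to mark the k-means cluster centers on the map.
# cluster_centers_ follows df_loc's column order [longitude, latitude],
# so the coordinates are flipped to [lat, lon] for folium.
for lon, lat in kmeans.cluster_centers_:
    folium.Marker(location=[lat, lon], popup='Suggested taxi station').add_to(nyc_map)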
embed_map(nyc_map)<jupyter_output><empty_output>
|
no_license
|
/solution/Hands_on_nyc_taxi_station.ipynb
|
samehBF/playing_with_geo_data
| 8 |
<jupyter_start><jupyter_text># Analysis of Cross-Sectional Data
Linear regression is an essential tool of any econometrician and is widely used throughout finance and economics. Linear regression's success is owed to two key features: the availability of simple, closed-form estimators, and the ease and directness of interpretation.
## Model Description
$$Y = X \beta + \varepsilon$$
With the following assumptions:
* $E(\varepsilon) = 0 $
* $V(\varepsilon) = \sigma^2 I$ (covariance stationary)
* $X$ is nonstochastic with full rank $K$
**OLS Estimator**: $\hat{\beta} = (X'X)^{-1} X'y$
**OLS Variance Estimator**:
$$ \hat{\sigma}^2 = \frac{ \hat{\varepsilon}' \hat{\varepsilon} } {n-k}$$
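As a quick illustration (added here, not part of the original notebook), both closed-form expressions can be evaluated directly with NumPy on synthetic data; the variable names below are placeholders.

```python
# OLS closed-form estimators on synthetic data (illustrative sketch).
import numpy as np

rng = np.random.default_rng(0)
n, k = 200, 3
X = np.column_stack([np.ones(n), rng.normal(size=(n, k - 1))])  # constant + 2 regressors
beta_true = np.array([5.0, 1.2, -3.4])
y = X @ beta_true + rng.normal(scale=0.5, size=n)

beta_hat = np.linalg.solve(X.T @ X, X.T @ y)   # (X'X)^{-1} X'y
resid = y - X @ beta_hat
sigma2_hat = resid @ resid / (n - k)           # e'e / (n - k)
print(beta_hat, sigma2_hat)
```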
The main assumptions are:
* Linearity
* conditional mean is zero
* conditional homoskedasticity ($\sigma^2$)
* conditional normality
* X is full rank
What does heteroskedasticity mean?
The disturbance in matrix A is homoskedastic; this is the simple case where OLS is the best linear unbiased estimator. The disturbances in matrices B and C are heteroskedastic. In matrix B, the variance is time-varying, increasing steadily across time; in matrix C, the variance depends on the value of x. The disturbance in matrix D is homoskedastic because the diagonal variances are constant, even though the off-diagonal covariances are non-zero and ordinary least squares is inefficient for a different reason: serial correlation.
$$A = \sigma^2 \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix} \ \ \ B = \sigma^2 \begin{bmatrix} 1 & 0 & 0 \\ 0 & 2 & 0 \\ 0 & 0 & 3 \end{bmatrix} $$
$$C = \sigma^2 \begin{bmatrix} x_1 & 0 & 0 \\ 0 & x_2 & 0 \\ 0 & 0 & x_3 \end{bmatrix} \ \ \ D = \sigma^2 \begin{bmatrix} 1 & \rho & \rho^2 \\ \rho & 1 & \rho \\ \rho^2 & \rho & 1 \end{bmatrix}$$
An alternative to modeling heteroskedastic data directly is to transform the data so that it is homoskedastic using generalized least squares (GLS). GLS extends OLS to allow for arbitrary weighting matrices. The GLS estimator of $\beta$ is defined as:
$$\hat{\beta}^{GLS} = (X'W^{-1}X)^{-1} X' W^{-1} y$$
for some positive definite matrix W. The full value of GLS is only realized when $W$ is wisely chosen.
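A minimal sketch of this formula on synthetic heteroskedastic data is shown below (added for illustration; here the weighting matrix $W$ is taken as known rather than estimated).

```python
# GLS estimator with a known diagonal weighting matrix (illustrative sketch).
import numpy as np

rng = np.random.default_rng(1)
n = 200
X = np.column_stack([np.ones(n), rng.normal(size=n)])
variances = 1.0 + np.arange(n) / n                   # error variance grows across the sample
y = X @ np.array([2.0, 0.5]) + rng.normal(scale=np.sqrt(variances))

W_inv = np.diag(1.0 / variances)                     # W^{-1} for the diagonal W
beta_gls = np.linalg.solve(X.T @ W_inv @ X, X.T @ W_inv @ y)
print(beta_gls)
```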
We could also use maximum likelihood to estimate the coefficient. It is important to note that the derivation of the OLS estimator does not require an assumption of normality. Moreover, the unbiasedness, variance, and BLUE properties do not rely on the conditional normality of residuals.
T-tests can be used to test a single hypothesis involving one or more coefficients. In linear factor models, Fama and French (1992) use returns on specially constructed portfolios as factors to capture specific types of risk. We will first study the Fama-French 3-factor model and then move to models with more factors.
The traditional asset pricing model, known formally as the capital asset pricing model (CAPM) uses only one variable to describe the returns of a portfolio or stock with the returns of the market as a whole. In contrast, the Fama–French model uses three variables. Fama and French started with the observation that two classes of stocks have tended to do better than the market as a whole: (i) small caps and (ii) stocks with a high book-to-market ratio (B/P, customarily called value stocks, contrasted with growth stocks).
They then added two factors to CAPM to reflect a portfolio's exposure to these two classes:
$$r = R_f + \beta(R_m - R_f) + b_s \cdot SMB + b_v \cdot HML + \alpha + \epsilon$$
Or we could write it as:
$$r - R_f = \alpha + \beta(R_m - R_f) + b_s \cdot SMB + b_v \cdot HML + \epsilon $$
We will use the dataset from Fama and French, which you could download [here](http://mba.tuck.dartmouth.edu/pages/faculty/ken.french/data_library.html).<jupyter_code>import pandas as pd
import numpy as np
import pandas_datareader as pdr
import matplotlib.pyplot as plt
import statsmodels.api as sm
import statsmodels.formula.api as smf
import scipy.stats as stats
from statsmodels.stats.descriptivestats import describe
from statsmodels.stats.diagnostic import het_white
from statsmodels.tsa.api import VAR
# a function of reading and cleaning dataset
def read_fama_french(file):
df = (pd.read_csv(file, skiprows=2) # skip the first two rows
.iloc[0:1136, :] # select certain rows
.set_index('Unnamed: 0')
.rename_axis(None) # remove the name of axis
)
df.index = pd.to_datetime(df.index, format="%Y%m") # convert to datetime
df = df.astype('float')
return df
ff_monthly = read_fama_french('../data/ThreeFactorsMonthly.CSV')
ff_monthly.tail()
ff_monthly.plot()
ff_monthly['RF'].plot()<jupyter_output><empty_output><jupyter_text>### Descriptive Statistics
Now we will construct a dataset consisting of monthly observations from 1980 to 2019 and use it to predict the returns of the fund "FCNTX". Before we run the regression, we have to match the timelines first.<jupyter_code>def get_return_data(ticker, start_date, end_date, period="M"):
daily_price = pdr.data.DataReader(ticker, 'yahoo', start_date, end_date)['Adj Close']
resample_price = daily_price.resample(period).ffill().dropna()
rtn = resample_price.pct_change().dropna()
return rtn
start_date = '1980-01-01' # to match the timeline
end_date = '2019-12-31'
ticker = "FCNTX"
fcntx_return = get_return_data(ticker, start_date, end_date) * 100 # express it in the percentage format
fcntx_return.head()
fcntx_return.plot()
dataset = ff_monthly.loc['1980-02-01':end_date]
dataset.head()
fcntx_return.shape[0] == dataset.shape[0] # check whether we have the same length
# join the two datasets
dataset['fcntx'] = fcntx_return.values
dataset['fcntx_excess'] = dataset['fcntx'] - dataset['RF']
dataset.head()
dataset.tail()
dataset = dataset.rename({'Mkt-RF': 'mkt_excess'}, axis=1)
describe(dataset)
# run Fama French 3 factor model
ff_model = smf.ols('fcntx_excess ~ mkt_excess + SMB + HML', data=dataset).fit()
ff_model.summary()<jupyter_output><empty_output><jupyter_text>The **coefficient of SMB** is not significantly different from zero. We will see why when we assess the model fit below. The Fama-French three-factor model (hereafter, the Fama-French model) pays attention to three major factors:
* market risk
* company size
* value factors

The main factors driving expected returns are sensitivity to the market, sensitivity to size, and sensitivity to value stocks, as measured by the book-to-market ratio.
HML accounts for the spread in returns between value stocks and growth stocks and argues that companies with high book-to-market ratios, also known as value stocks, outperform those with lower book-to-market values, known as growth stocks.
In this model, the coefficient of HML is negative, which indicates that the manager of FCNTX might not rely on the value premium to earn an abnormal return, instead investing in stocks with relatively low book-to-market ratios.
### Assessing Fit<jupyter_code>fig = sm.graphics.plot_regress_exog(ff_model, 'SMB')
fig.tight_layout(pad=1.0)
fig = sm.qqplot(ff_model.resid, fit=True, line="45")
fig = sm.qqplot(ff_model.resid, stats.t, fit=True, line="45")<jupyter_output><empty_output><jupyter_text>It looks like our linear regression model is heteroskedastic. We can run a formal test (White's test).<jupyter_code>labels = ['LM Statistic', 'LM-Test p-value', 'F-Statistic','F-Test p-value']
white_test = het_white(ff_model.resid, ff_model.model.exog)
dict(zip(labels, white_test))<jupyter_output><empty_output><jupyter_text>Heteroskedasticity is indicated if p <0.05, so according to these tests, this model is heteroskedastic.<jupyter_code>ff_model.resid.plot()
# check the autocorrelation of resid
fig = sm.graphics.tsa.plot_acf(ff_model.resid)
# extract rho and sigma
rho, sigma = sm.regression.yule_walker(ff_model.resid)
sigma
# run GLS model
ff_gls = smf.glsar('fcntx_excess ~ mkt_excess + SMB + HML', data=dataset).fit()
ff_gls.summary()
fig = sm.qqplot(ff_gls.resid, stats.t, fit=True, line="45")<jupyter_output><empty_output>
|
no_license
|
/financial_econometrics/Analysis_cross_sectional_data.ipynb
|
oceanumeric/python-finance
| 5 |
<jupyter_start><jupyter_text># 2.1 Color quantization k-means
For this problem you will write code to quantize a color space by applying k-means clustering to the pixels in a given input image. We will experiment with two different color spaces — RGB and HSV.
Implement each of the functions described below. After each function there is a test on the 4x6 image that will be generated within this notebook. These tests are to help you verify and debug your code. However, they will not cover every possible edge case. We encourage you to write additional tests or debug your code line-by-line to make sure the functions work as expected.
> Note: to pass the tests in this notebook and on Gradescope you will need to use a random seed value of `101` whenever possible. Please check the docstrings for any of the 3rd party functions to make sure you set the random seed properly.### Exporting this notebook to a .py script
Once you are done implementing all the required functions in this notebook, you can go ahead and use the provided `notebook2script.py` script to convert this notebook into a `.py` file for submission.
The provided script will look for all the cells with the `#export` tag in the first line of the cell and only add those cells to the final script. This tag is already present for all the required cells in this notebook.
If you add any cells that you want to include in the submission, you can add the tag to the top of the cell.
The idea behind this is that students get to experiment, print and plot freely in the notebook while ensuring the submission file remains Gradescope friendly. Please avoid putting the `#export` tag on cells with `print`, `imshow`, and `plot` statements.<jupyter_code>#export
import numpy as np
from sklearn.cluster import KMeans
from skimage.color import rgb2hsv, hsv2rgb
from typing import Tuple
import matplotlib.pyplot as plt<jupyter_output><empty_output><jupyter_text>The commands in the following cell will plot all images/plots in an interactive window. If you would prefer not to have interactive plots, comment out %matplotlib notebook and uncomment %matplotlib inline instead.
You can use plt.rcParams['figure.figsize'] to make all the plots in this notebook bigger or smaller.<jupyter_code>%matplotlib notebook
# %matplotlib inline
# plt.rcParams['figure.figsize'] = (7, 3)
# set test_k = 4 to pass the tests in this notebook
test_k = 4
# generate a random test image (with a seed of `101`)
np.random.seed(101)
test_img = np.random.randint(0, 256, size=(4, 6, 3), dtype=np.uint8)
_, ax = plt.subplots()
ax.axis("off")
ax.imshow(test_img)<jupyter_output><empty_output><jupyter_text>## (a) Quantize in RGB space
Given an RGB image, quantize the 3-dimensional RGB space, and map each pixel in the input image to its nearest k-means center. That is, replace the RGB value at each pixel with its nearest cluster’s average RGB value.
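A rough sketch of the idea is shown below; it is separate from the required `quantize_rgb` function and is not guaranteed to reproduce the expected test output, so treat it only as a starting point.

```python
# Rough sketch: cluster all pixels in RGB space and replace each pixel with
# its cluster's mean color (hypothetical helper, not the graded function).
import numpy as np
from sklearn.cluster import KMeans

def quantize_rgb_sketch(img: np.ndarray, k: int) -> np.ndarray:
    pixels = img.reshape(-1, 3).astype(np.float64)
    km = KMeans(n_clusters=k, random_state=101).fit(pixels)
    new_pixels = km.cluster_centers_[km.labels_]   # nearest center per pixel
    return new_pixels.reshape(img.shape).astype(np.uint8)
```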
Use the [sklearn.cluster.KMeans](https://scikit-learn.org/stable/modules/generated/sklearn.cluster.KMeans.html) class to perform the k-means clustering. See the documentation for details on how to use the class, and make sure you set `random_state=101`.<jupyter_code>#export
def quantize_rgb(img: np.ndarray, k: int) -> np.ndarray:
"""
Compute the k-means clusters for the input image in RGB space, and return
an image where each pixel is replaced by the nearest cluster's average RGB
value.
Inputs:
img: Input RGB image with shape H x W x 3 and dtype "uint8"
k: The number of clusters to use
Output:
An RGB image with shape H x W x 3 and dtype "uint8"
"""
quantized_img = np.zeros_like(img)
##########################################################################
# TODO: Perform k-means clustering and return an image where each pixel #
# is assigned the value of the nearest clusters RGB values. #
##########################################################################
##########################################################################
##########################################################################
return quantized_img
expected_quantized_img_rgb = np.array([[[159, 173, 49],
[ 80, 34, 60],
[159, 173, 49],
[ 99, 60, 190],
[ 99, 60, 190],
[159, 173, 49]],
[[ 80, 34, 60],
[ 99, 60, 190],
[209, 185, 212],
[ 80, 34, 60],
[ 99, 60, 190],
[ 99, 60, 190]],
[[ 99, 60, 190],
[159, 173, 49],
[159, 173, 49],
[ 80, 34, 60],
[ 99, 60, 190],
[ 99, 60, 190]],
[[209, 185, 212],
[209, 185, 212],
[159, 173, 49],
[ 80, 34, 60],
[209, 185, 212],
[ 99, 60, 190]]], dtype=np.uint8)
quantized_img_rgb = quantize_rgb(test_img, test_k)
if np.allclose(quantized_img_rgb, expected_quantized_img_rgb):
print("\nQuantized image computed correctly!")
else:
print("\nQuantized image is incorrect.")
print(f"\nexpected:\n\n{expected_quantized_img_rgb}")
print(f"\ncomputed:\n\n{quantized_img_rgb}")<jupyter_output><empty_output><jupyter_text>Let's take a look at the results.<jupyter_code>fig, axs = plt.subplots(1, 2)
axs[0].axis("off")
axs[0].imshow(test_img)
axs[1].axis("off")
axs[1].imshow(quantized_img_rgb)
# uncomment this line and change the filename as needed to save the figure
# fig.savefig(f"output-quantized-rgb-{k}.png", dpi=200, bbox_inches="tight")<jupyter_output><empty_output><jupyter_text>## (b) Quantize in HSV space
Given an RGB image, convert it to HSV and quantize the 1-dimensional Hue space. Map each pixel in the input image to its nearest quantized Hue value, while keeping its Saturation and Value channels the same as the input. Convert the quantized output back to RGB color space.
Use the [sklearn.cluster.KMeans](https://scikit-learn.org/stable/modules/generated/sklearn.cluster.KMeans.html) class to perform the k-means clustering. See the documentation for details on how to use the class, and make sure you set `random_state=101`.
Use the [skimage.color.rgb2hsv](https://scikit-image.org/docs/dev/api/skimage.color.html#skimage.color.rgb2hsv) and [skimage.color.hsv2rgb](https://scikit-image.org/docs/dev/api/skimage.color.html#skimage.color.hsv2rgb) functions to convert the image to HSV and back to RGB.<jupyter_code>#export
def quantize_hsv(img: np.ndarray, k: int) -> np.ndarray:
"""
Compute the k-means clusters for the input image in the hue dimension of the
HSV space. Replace the hue values with the nearest cluster's hue value. Finally,
convert the image back to RGB.
Inputs:
img: Input RGB image with shape H x W x 3 and dtype "uint8"
k: The number of clusters to use
Output:
An RGB image with shape H x W x 3 and dtype "uint8"
"""
quantized_img = np.zeros_like(img)
##########################################################################
# TODO: Convert the image to HSV. Perform k-means clustering in hue #
# space. Replace the hue values in the image with the cluster centers. #
# Convert the image back to RGB. #
##########################################################################
##########################################################################
##########################################################################
return quantized_img
expected_quantized_img_hsv = np.array([[[ 94, 179, 49],
[131, 11, 112],
[101, 141, 81],
[ 38, 23, 146],
[ 55, 31, 227],
[243, 166, 22]],
[[ 87, 7, 74],
[252, 3, 212],
[253, 215, 246],
[ 54, 75, 43],
[ 29, 0, 239],
[ 90, 79, 175]],
[[132, 125, 187],
[114, 205, 66],
[ 99, 213, 40],
[ 86, 17, 75],
[149, 86, 139],
[ 72, 63, 138]],
[[192, 147, 184],
[199, 195, 227],
[245, 172, 36],
[ 68, 53, 24],
[187, 183, 220],
[ 68, 49, 199]]], dtype=np.uint8)
quantized_img_hsv = quantize_hsv(test_img, test_k)
if np.allclose(quantized_img_hsv, expected_quantized_img_hsv):
print("\nQuantized image computed correctly!")
else:
print("\nQuantized image is incorrect.")
print(f"\nexpected:\n\n{expected_quantized_img_hsv}")
print(f"\ncomputed:\n\n{quantized_img_hsv}")<jupyter_output><empty_output><jupyter_text>Let's take a look at the results.<jupyter_code>fig, axs = plt.subplots(1, 2)
axs[0].axis("off")
axs[0].imshow(test_img)
axs[1].axis("off")
axs[1].imshow(quantized_img_hsv)
# uncomment this line and change the filename as needed to save the figure
# fig.savefig(f"output-quantized-hsv-{k}.png", dpi=200, bbox_inches="tight")<jupyter_output><empty_output><jupyter_text>## (c) Sum of squared error
Write a function to compute the SSD error (sum of squared error) between the original RGB pixel values and the quantized values<jupyter_code>#export
def compute_quantization_error(img: np.ndarray, quantized_img: np.ndarray) -> int:
"""
Compute the sum of squared error between the two input images.
Inputs:
img: Input RGB image with shape H x W x 3 and dtype "uint8"
quantized_img: Quantized RGB image with shape H x W x 3 and dtype "uint8"
Output:
"""
error = 0
##########################################################################
# TODO: Compute the sum of squared error. #
##########################################################################
##########################################################################
##########################################################################
return error
error_rgb = compute_quantization_error(test_img, quantized_img_rgb)
print(f"quantization error (rgb): {error_rgb:,}")
error_hsv = compute_quantization_error(test_img, quantized_img_hsv)
print(f"quantization error (hsv): {error_hsv:,}")
if error_rgb == 112251:
print("\nQuantization error computed correctly!")
else:
print("\nQuantization error incorrect")
print(f"\nexpected: 112,251\ncomputed: {error_rgb}")
if error_hsv == 33167:
print("\nQuantization error computed correctly!")
else:
print("\nQuantization error incorrect")
print(f"\nexpected: 33,167\ncomputed: {error_hsv}")<jupyter_output><empty_output><jupyter_text>## (d) Calculate Hue histograms
Given an image, compute and display two histograms of its hue values. Let the first histogram use equally-spaced bins (uniformly dividing up the hue values), and let the second histogram use bins defined by the `k` cluster center memberships (i.e., all pixels belonging to hue cluster `i` go to the `i-th` bin, for `i=1,...k`).<jupyter_code>#export
def get_hue_histograms(img: np.ndarray, k: int) -> Tuple[np.ndarray, np.ndarray]:
"""
Compute the histogram values two ways: equally spaced and clustered.
Inputs:
img: Input RGB image with shape H x W x 3 and dtype "uint8"
k: The number of clusters to use
Output:
hist_equal: The values for an equally spaced histogram
hist_clustered: The values for a histogram of the cluster assignments
"""
hist_equal = np.zeros((k,), dtype=np.int64)
hist_clustered = np.zeros((k,), dtype=np.int64)
##########################################################################
# TODO: Convert the image to HSV. Calculate a k-bin histogram for the #
# hue dimension. Calculate the k-means clustering of the hue space. #
# Calculate the histogram values for the cluster assignments. #
##########################################################################
##########################################################################
##########################################################################
return hist_equal, hist_clustered
expected_hist_equal = np.array([ 6, 2, 6, 10], dtype=np.int64)
expected_hist_clustered = np.array([3, 7, 9, 5], dtype=np.int64)
hist_equal, hist_clustered = get_hue_histograms(test_img, test_k)
if np.all(hist_equal == expected_hist_equal):
print("\nEqual histogram values computed correctly!")
else:
print("\nEqual histogram values are incorrect.")
print(f"\nexpected: {expected_hist_equal}")
print(f"\ncomputed: {hist_equal}")
if np.all(hist_clustered == expected_hist_clustered):
print("\nClustered histogram values computed correctly!")
else:
print("\nClustered histogram values are incorrect.")
print(f"\nexpected: {expected_hist_clustered}")
print(f"\ncomputed: {hist_clustered}")<jupyter_output><empty_output><jupyter_text>Let's take a look at the results.<jupyter_code>fig, axs = plt.subplots(1, 2)
axs[0].set_title("equal")
axs[0].bar(np.arange(test_k), hist_equal)
axs[1].set_title("clustered")
axs[1].bar(np.arange(test_k), hist_clustered)
# uncomment this line and change the filename as needed to save the figure
# fig.savefig(f"output-histograms-{k}.png", dpi=200, bbox_inches="tight")<jupyter_output><empty_output><jupyter_text>## Submission
Once you are ready to submit, you can run the following cell to export this notebook into a Python script. You should submit this script to Gradescope.<jupyter_code>!python notebook2script.py<jupyter_output><empty_output>
|
no_license
|
/ps2/.ipynb_checkpoints/KMeans-checkpoint.ipynb
|
MetricVoid/CV
| 10 |
<jupyter_start><jupyter_text># dataset
## SKlearn
<jupyter_code># Here we import the lfw_people dataset. We're going to use this as "noise" later on
from sklearn.datasets import fetch_lfw_people
# Imports used by cells further down in this notebook (they are not imported
# anywhere else in the notebook, so they are collected here).
import os
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
from PIL import Image
from sklearn.model_selection import train_test_split, GridSearchCV
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans
from sklearn.metrics import confusion_matrix, make_scorer, cohen_kappa_score
from yellowbrick.cluster import KElbowVisualizer
people = fetch_lfw_people(min_faces_per_person=20, resize=0.7) # The "min_faces_per_person" parameter can be changed in order to combat the amount of classes. The higher the parameter the less classes.
image_shape = people.images[0].shape
# We make sure that a face belonging to a single person may only come up 10 times at most
mask = np.zeros(people.target.shape, dtype=bool)
for target in np.unique(people.target):
mask[np.where(people.target == target)[0][:10]] = 1
X_people = people.data[mask]
# Here we see how many pictures we end up with from the lfw_people dataset
original_len = len(X_people)
print(original_len)<jupyter_output>620
<jupyter_text>### Preparing the target var
A face that comes from the original dataset will have class "0". Later on we'll introduce another class<jupyter_code>y = np.zeros(len(X_people))<jupyter_output><empty_output><jupyter_text>## Andreas
Loading in the pictures beloning to "Andreas" and putting the pixeldata into an array<jupyter_code>os.chdir("C:\\Users\\kebbe\\Nod Coding\\Projekt last\\Nod pictures\\Andreas Kördel")
lst_an=[]
for i in range(1, 13):
new_lst=[i[0] for i in list(Image.open(f'Andreas ({i}).jpg').getdata())]
lst_an.append(new_lst)
lst_an=np.array(lst_an)<jupyter_output><empty_output><jupyter_text>## Axel
Loading in the pictures beloning to "Axel" and putting the pixeldata into an array<jupyter_code>os.chdir("C:\\Users\\kebbe\\Nod Coding\\Projekt last\\Nod pictures\\Axel Cajselius")
lst_ax=[]
for i in range(1, 5):
new_lst=[i[0] for i in list(Image.open(f'Axel ({i}).jpg').getdata())]
lst_ax.append(new_lst)
lst_ax=np.array(lst_ax)<jupyter_output><empty_output><jupyter_text>## Filip
Loading in the pictures beloning to "Filip" and putting the pixeldata into an array<jupyter_code>os.chdir("C:\\Users\\kebbe\\Nod Coding\\Projekt last\\Nod pictures\\Filip Lundqvist")
lst_fi=[]
for i in range(1, 6):
new_lst=[i[0] for i in list(Image.open(f'Filip ({i}).jpg').getdata())]
lst_fi.append(new_lst)
lst_fi=np.array(lst_fi)<jupyter_output><empty_output><jupyter_text>## Gustav
Loading in the pictures beloning to "Gustav" and putting the pixeldata into an array<jupyter_code>os.chdir("C:\\Users\\kebbe\\Nod Coding\\Projekt last\\Nod pictures\\Gustav Svensson")
lst_gu=[]
for i in range(1, 11):
new_lst=[i[0] for i in list(Image.open(f'Gustav ({i}).jpg').getdata())]
lst_gu.append(new_lst)
lst_gu=np.array(lst_gu)<jupyter_output><empty_output><jupyter_text>## Kevin
Loading in the pictures beloning to "Kevin" and putting the pixeldata into an array<jupyter_code>os.chdir("C:\\Users\\kebbe\\Nod Coding\\Projekt last\\Nod pictures\\Kevin Björk")
lst_ke=[]
for i in range(1, 11):
new_lst=[i[0] for i in list(Image.open(f'Kevin ({i}).jpg').getdata())]
lst_ke.append(new_lst)
lst_ke=np.array(lst_ke)<jupyter_output><empty_output><jupyter_text>## Nils
Loading in the pictures beloning to "Nils" and putting the pixeldata into an array<jupyter_code>os.chdir("C:\\Users\\kebbe\\Nod Coding\\Projekt last\\Nod pictures\\Nils Skoglund")
lst_ni=[]
for i in range(1, 12):
new_lst=[i[0] for i in list(Image.open(f'Nils ({i}).jpg').getdata())]
lst_ni.append(new_lst)
lst_ni=np.array(lst_ni)<jupyter_output><empty_output><jupyter_text>## Philip
Loading in the pictures beloning to "Philip" and putting the pixeldata into an array<jupyter_code>os.chdir("C:\\Users\\kebbe\\Nod Coding\\Projekt last\\Nod pictures\\Philip Gordin")
lst_pi=[]
for i in range(1, 6):
new_lst=[i[0] for i in list(Image.open(f'Philip ({i}).jpg').getdata())]
lst_pi.append(new_lst)
new_lst=[i[0] for i in list(Image.open(f'Philip (6).png').getdata())]
lst_pi.append(new_lst)
lst_pi=np.array(lst_pi)<jupyter_output><empty_output><jupyter_text>## Vincent
Loading in the pictures beloning to "Vincent" and putting the pixeldata into an array<jupyter_code>os.chdir("C:\\Users\\kebbe\\Nod Coding\\Projekt last\\Nod pictures\\Vincent Wallenborg")
lst_vi=[]
for i in range(1, 10):
new_lst=[i[0] for i in list(Image.open(f'Vincent ({i}).jpg').getdata())]
lst_vi.append(new_lst)
lst_vi=np.array(lst_vi)<jupyter_output><empty_output><jupyter_text>## Adding them together
Here we add the pixeldata from the "Nod" faces to the faces of lfw_people<jupyter_code>X_people = np.append(X_people, lst_an, axis=0)
X_people = np.append(X_people, lst_ax, axis=0)
X_people = np.append(X_people, lst_fi, axis=0)
X_people = np.append(X_people, lst_gu, axis=0)
X_people = np.append(X_people, lst_ke, axis=0)
X_people = np.append(X_people, lst_ni, axis=0)
X_people = np.append(X_people, lst_pi, axis=0)
X_people = np.append(X_people, lst_vi, axis=0)
X_people = X_people/255 # This line makes it so that our pixel values will be between 0 and 1
X_people.shape<jupyter_output><empty_output><jupyter_text>## Target var
Here we complement our previous target variable with a new class, the "Nod" class<jupyter_code>new_y = np.ones(len(X_people)-len(y))
y = np.append(y, new_y)<jupyter_output><empty_output><jupyter_text># PCA
We do a PCA of the dataset in order to take a look at the components. From this we might get an insight into what the machine actually looks at when analyzing faces<jupyter_code>X_train, X_test, y_train, y_test = train_test_split(X_people, y)
pca = PCA(n_components=100, whiten=True, random_state=0).fit(X_train)
pca_explained = pca.explained_variance_ratio_
fix, axes = plt.subplots(3, 5, figsize=(15, 12),
subplot_kw={'xticks': (), 'yticks': ()})
for i, (component, ax) in enumerate(zip(pca.components_, axes.ravel())):
ax.imshow(component.reshape(image_shape), cmap="bone")
ax.set_title(f"{i + 1}. component (ca {round(pca_explained[i]*100)} %)") # The percentage indicates how much variance the component captures<jupyter_output><empty_output><jupyter_text>## PCA of Nod only<jupyter_code>X_nod = np.append(lst_an, lst_ax, axis=0)
X_nod = np.append(X_nod, lst_fi, axis=0)
X_nod = np.append(X_nod, lst_gu, axis=0)
X_nod = np.append(X_nod, lst_ke, axis=0)
X_nod = np.append(X_nod, lst_ni, axis=0)
X_nod = np.append(X_nod, lst_pi, axis=0)
X_nod = np.append(X_nod, lst_vi, axis=0)
X_nod = X_nod/255
pca_nod = PCA(n_components=67, whiten=True, random_state=0).fit(X_nod)
pca_explained = pca_nod.explained_variance_ratio_
fix, axes = plt.subplots(3, 5, figsize=(15, 12),
subplot_kw={'xticks': (), 'yticks': ()})
for i, (component, ax) in enumerate(zip(pca_nod.components_, axes.ravel())):
ax.imshow(component.reshape(image_shape), cmap="bone")
ax.set_title(f"{i + 1}. component (ca {round(pca_explained[i]*100)} %)")<jupyter_output><empty_output><jupyter_text># Clustering
On the topic of trying to figure out how the machine thinks, we let it categorize the images using clustering to see if we can find similarities within the clusters.<jupyter_code># Using an elbow plot as a way of figuring out how many clusters are necessary.
elbow_viz = KElbowVisualizer(KMeans(), k=(1, 8), timings=False)
elbow_viz.fit(X_people)
plt.show()
kmeans = KMeans(n_clusters = 2)
kmeans.fit(X_people)
# we find the index positions of the images belonging to the different clusters
x=0
index_clus_1=[]
index_clus_0=[]
for i in kmeans.labels_:
if i==1:
index_clus_1.append(x)
else:
index_clus_0.append(x)
x+=1
# This cell converts the shape of the pixeldata from a flat list into a nested list. The components of the nested list are:
# pixel height
# pixel width
# RGB values (these three numbers will be identical given that we only look at black and white images)
for i in index_clus_1[0:15]: # we only visualize some images for convenience
x=0
lst_h=[]
for j in range(0, 87):
lst_w=[]
for k in range(0, 65):
lst=[]
lst.append(X_people[i][x])
lst.append(X_people[i][x])
lst.append(X_people[i][x])
lst_w.append(lst)
x+=1
lst_h.append(lst_w)
n = np.array(lst_h)*255
shape = n.shape
new_X_people=[]
for l in X_people[i]:
lst=[]
lst.append(l*255)
lst.append(l*255)
lst.append(l*255)
new_X_people.append(lst)
# Here we remake the images from the pixel data
im2 = Image.fromarray((np.array(new_X_people, dtype='uint8')).reshape(shape))
im2.show()
# This cell converts the shape of the pixeldata from a flat list into a nested list. The components of the nested list are:
# pixel height
# pixel width
# RGB values (these three numbers will be identical given that we only look at black and white images)
for i in index_clus_0[0:15]:
x=0
lst_h=[]
for j in range(0, 87):
lst_w=[]
for k in range(0, 65):
lst=[]
lst.append(X_people[i][x])
lst.append(X_people[i][x])
lst.append(X_people[i][x])
lst_w.append(lst)
x+=1
lst_h.append(lst_w)
n = np.array(lst_h)*255
shape = n.shape
new_X_people=[]
for l in X_people[i]:
lst=[]
lst.append(l*255)
lst.append(l*255)
lst.append(l*255)
new_X_people.append(lst)
# Here we remake the images from the pixel data
im2 = Image.fromarray((np.array(new_X_people, dtype='uint8')).reshape(shape))
im2.show()<jupyter_output><empty_output><jupyter_text>## Clustering - Nod only
Same process but only for the Nod images<jupyter_code>X_nod = np.append(lst_an, lst_ax, axis=0)
X_nod = np.append(X_nod, lst_fi, axis=0)
X_nod = np.append(X_nod, lst_gu, axis=0)
X_nod = np.append(X_nod, lst_ke, axis=0)
X_nod = np.append(X_nod, lst_ni, axis=0)
X_nod = np.append(X_nod, lst_pi, axis=0)
X_nod = np.append(X_nod, lst_vi, axis=0)
X_nod = X_nod/255
elbow_viz = KElbowVisualizer(KMeans(), k=(1, 8), timings=False)
elbow_viz.fit(X_nod)
plt.show()
kmeans = KMeans(n_clusters = 2)
kmeans.fit(X_nod)
x=0
index_clus_1=[]
index_clus_0=[]
for i in kmeans.labels_:
if i==0:
index_clus_0.append(x)
elif i==1:
index_clus_1.append(x)
x+=1
for i in index_clus_1[0:15]:
x=0
lst_h=[]
for j in range(0, 87):
lst_w=[]
for k in range(0, 65):
lst=[]
lst.append(X_nod[i][x])
lst.append(X_nod[i][x])
lst.append(X_nod[i][x])
lst_w.append(lst)
x+=1
lst_h.append(lst_w)
n = np.array(lst_h)*255
shape = n.shape
new_X_nod=[]
for l in X_nod[i]:
lst=[]
lst.append(l*255)
lst.append(l*255)
lst.append(l*255)
new_X_nod.append(lst)
im2 = Image.fromarray((np.array(new_X_nod, dtype='uint8')).reshape(shape))
im2.show()
for i in index_clus_0[0:15]:
x=0
lst_h=[]
for j in range(0, 87):
lst_w=[]
for k in range(0, 65):
lst=[]
lst.append(X_nod[i][x])
lst.append(X_nod[i][x])
lst.append(X_nod[i][x])
lst_w.append(lst)
x+=1
lst_h.append(lst_w)
n = np.array(lst_h)*255
shape = n.shape
new_X_nod=[]
for l in X_nod[i]:
lst=[]
lst.append(l*255)
lst.append(l*255)
lst.append(l*255)
new_X_nod.append(lst)
im2 = Image.fromarray((np.array(new_X_nod, dtype='uint8')).reshape(shape))
im2.show()<jupyter_output><empty_output><jupyter_text># ML - Can the machine correctly classify the faces that belong to Nod?<jupyter_code># These are the classifiers we'll try out for the model
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.linear_model import RidgeClassifier
from sklearn.neural_network import MLPClassifier<jupyter_output><empty_output><jupyter_text>## Model selector with scoring<jupyter_code>classifiers=[RandomForestClassifier(), SVC(), LogisticRegression(), DecisionTreeClassifier(), KNeighborsClassifier(), GaussianNB(), RidgeClassifier(), MLPClassifier()]
# These are the scorers we'll optimize the model for
kappa_scorer = make_scorer(cohen_kappa_score)
scorings = ["accuracy", "f1", "precision", "recall", kappa_scorer]
X_train, X_test, y_train, y_test = train_test_split(X_people, y)
# This cell compares all the classifiers and scorer against each other
# Note: long runtime, only run this cell once
#for i in classifiers:
# x=0
# for j in scorings:
# grid = GridSearchCV(i, cv=10, scoring = j, param_grid={})
#
# try:
# grid.fit(X_train, y_train)
# except:
# grid.fit(X_train.toarray(), y_train)
#
# print(i, grid.best_score_, j)
# x+=1
# if x==5:
# print("")<jupyter_output><empty_output><jupyter_text>## MLPClassifier() grid search with accuracy scoring<jupyter_code># This is the parameter space we'll explore
parameter_space = {
'hidden_layer_sizes': [(50,50,50), (50,100,50), (100,)],
'activation': ['tanh', 'relu'],
'solver': ['sgd', 'adam'],
'alpha': [0.0001, 0.05],
'learning_rate': ['constant','adaptive'],
}
knn = GridSearchCV(MLPClassifier(), param_grid=parameter_space, cv=10, scoring="accuracy")
knn.fit(X_train, y_train)
print("Train set score: {:.2f}".format(knn.score(X_train, y_train)))
print("Test set score: {:.2f}".format(knn.score(X_test, y_test)))
# The two cells below are simply a visualization aid
plt.figure(figsize=(12, 8))
sns.heatmap(confusion_matrix(y_train, knn.predict(X_train)), annot=True, fmt="d", cmap="cividis", cbar=False)
plt.title("Confusion matrix for train data")
plt.show()
plt.figure(figsize=(12, 8))
sns.heatmap(confusion_matrix(y_test, knn.predict(X_test)), annot=True, fmt="d", cmap="cividis", cbar=False)
plt.title("Confusion matrix for test data")
plt.show()<jupyter_output><empty_output><jupyter_text>### The images that are wrongly classified<jupyter_code># Here we find the index location in X_test of the images that were wrongly classified
index_lst=[]
x=0
for i in X_test:
if y_test[x]!=knn.predict([X_test[x]]):
index_lst.append(x)
x+=1
# This cell converts the shape of the pixeldata from a flat list into a nested list. The components of the nested list are:
# pixel height
# pixel width
# RGB values (these three numbers will be identical given that we only look at black and white images)
for i in index_lst:
x=0
lst_h=[]
for j in range(0, 87):
lst_w=[]
for k in range(0, 65):
lst=[]
lst.append(X_test[i][x])
lst.append(X_test[i][x])
lst.append(X_test[i][x])
lst_w.append(lst)
x+=1
lst_h.append(lst_w)
n = np.array(lst_h)*255
shape = n.shape
new_X_test=[]
for l in X_test[i]:
lst=[]
lst.append(l*255)
lst.append(l*255)
lst.append(l*255)
new_X_test.append(lst)
# Here we remake the images from the pixel data
im2 = Image.fromarray((np.array(new_X_test, dtype='uint8')).reshape(shape))
im2.show()<jupyter_output><empty_output><jupyter_text>### The images that are TP<jupyter_code># Here we find the index location in X_test of the Nod images that were correctly classified
index_lst=[]
x=0
for i in X_test:
if y_test[x]==knn.predict([X_test[x]]) and y_test[x]==1:
index_lst.append(x)
x+=1
# This cell converts the shape of the pixeldata from a flat list into a nested list. The components of the nested list are:
# pixel height
# pixel width
# RGB values (these three numbers will be identical given that we only look at black and white images)
for i in index_lst:
x=0
lst_h=[]
for j in range(0, 87):
lst_w=[]
for k in range(0, 65):
lst=[]
lst.append(X_test[i][x])
lst.append(X_test[i][x])
lst.append(X_test[i][x])
lst_w.append(lst)
x+=1
lst_h.append(lst_w)
n = np.array(lst_h)*255
shape = n.shape
new_X_test=[]
for l in X_test[i]:
lst=[]
lst.append(l*255)
lst.append(l*255)
lst.append(l*255)
new_X_test.append(lst)
# Here we remake the images from the pixel data
im2 = Image.fromarray((np.array(new_X_test, dtype='uint8')).reshape(shape))
im2.show()<jupyter_output><empty_output><jupyter_text>## With PCA
We use the same exact steps here as in section 3.2 except that we no longer take the pixel data into consideration. Instead
we use the first 100 components of the PCA.<jupyter_code>pca = PCA(n_components=100, whiten=True, random_state=0).fit(X_train)
X_train_pca = pca.transform(X_train)
X_test_pca = pca.transform(X_test)
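# Optional: check how much of the pixel variance the 100 retained components explain
print("Explained variance kept by PCA: {:.1f}%".format(100 * pca.explained_variance_ratio_.sum()))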
knn = GridSearchCV(MLPClassifier(), param_grid=parameter_space, cv=10, scoring="accuracy")
knn.fit(X_train_pca, y_train)
print("Train set score: {:.2f}".format(knn.score(X_train_pca, y_train)))
print("Test set score: {:.2f}".format(knn.score(X_test_pca, y_test)))
plt.figure(figsize = (12, 8))
sns.heatmap(confusion_matrix(y_train, knn.predict(X_train_pca)), annot=True, fmt="d", cmap="cividis", cbar=False)
plt.title("Confusion matrix for train data (PCA)")
plt.show()
plt.figure(figsize = (12, 8))
sns.heatmap(confusion_matrix(y_test, knn.predict(X_test_pca)), annot=True, fmt="d", cmap="cividis", cbar=False)
plt.title("Confusion matrix for test data (PCA)")
plt.show()<jupyter_output><empty_output><jupyter_text># ML - Can the machine correctly classify the names belonging to the faces?
This is a sub-question that I want to answer. The main difference in this case is that we consider the names belonging to each face to be a class.<jupyter_code># These are the classes from the lfw_people dataset
y_people = people.target[mask]
# These are the classes from the Nod dataset
y_people = np.append(y_people, np.array([max(list(y_people))+1 for i in range(len(lst_an))]), axis=0)
y_people = np.append(y_people, np.array([max(list(y_people))+2 for i in range(len(lst_ax))]), axis=0)
y_people = np.append(y_people, np.array([max(list(y_people))+3 for i in range(len(lst_fi))]), axis=0)
y_people = np.append(y_people, np.array([max(list(y_people))+4 for i in range(len(lst_gu))]), axis=0)
y_people = np.append(y_people, np.array([max(list(y_people))+5 for i in range(len(lst_ke))]), axis=0)
y_people = np.append(y_people, np.array([max(list(y_people))+6 for i in range(len(lst_ni))]), axis=0)
y_people = np.append(y_people, np.array([max(list(y_people))+7 for i in range(len(lst_pi))]), axis=0)
y_people = np.append(y_people, np.array([max(list(y_people))+8 for i in range(len(lst_vi))]), axis=0)<jupyter_output><empty_output><jupyter_text>## Model selector
These steps are identical to the previous model<jupyter_code>classifiers=[RandomForestClassifier(), SVC(), LogisticRegression(), DecisionTreeClassifier(), KNeighborsClassifier(), GaussianNB(), RidgeClassifier(), MLPClassifier()]
X_train, X_test, y_train, y_test = train_test_split(X_people, y_people)
# Only run once
#for i in classifiers:
# grid = GridSearchCV(i, cv=10, scoring = "accuracy", param_grid={})
# try:
# grid.fit(X_train, y_train)
# except:
# grid.fit(X_train.toarray(), y_train)
# print(i, grid.best_score_, "accuracy")
# print("")<jupyter_output><empty_output><jupyter_text>## MLPClassifier() with accuracy scoring
These steps are identical to the previous model<jupyter_code>parameter_space = {
'hidden_layer_sizes': [(50,50,50), (50,100,50), (100,)],
'activation': ['tanh', 'relu'],
'solver': ['sgd', 'adam'],
'alpha': [0.0001, 0.05],
'learning_rate': ['constant','adaptive'],
}
knn = GridSearchCV(MLPClassifier(), param_grid=parameter_space, cv=10, scoring="accuracy")
knn.fit(X_train, y_train)
print("Train set score: {:.2f}".format(knn.score(X_train, y_train)))
print("Test set score: {:.2f}".format(knn.score(X_test, y_test)))<jupyter_output>Train set score: 0.04
Test set score: 0.01
<jupyter_text>### Where were our faces assigned? - Andreas
Note: the steps are identical for the subsections below<jupyter_code># We find the index position of the person we're interested in
index_lst=[]
for i in X_people[original_len:(original_len+len(lst_an))]:
x=0
for j in X_test:
if len(list(i))== len(list(j)) and len(list(i)) == sum([1 for k, l in zip(list(i), list(j)) if k == l]):
index_lst.append(x)
x+=1
# Here we'll see the names that were assigned to the faces belonging to the person in question
for q in index_lst:
try:
print(people.target_names[knn.predict([X_test[q]])[0]])
except:
if knn.predict([X_test[q]])[0] >= len(people.target_names):  # predicted label is outside the lfw_people classes, i.e. one of the Nod classes
print("One of the Nods!")
else:
print("Unknown Name")
# Here we again recreate the image from pixel data to see which faces were assigned to the names above
for i in index_lst:
x=0
lst_h=[]
for j in range(0, 87):
lst_w=[]
for k in range(0, 65):
lst=[]
lst.append(X_test[i][x])
lst.append(X_test[i][x])
lst.append(X_test[i][x])
lst_w.append(lst)
x+=1
lst_h.append(lst_w)
n = np.array(lst_h)*255
shape = n.shape
new_X_test=[]
for l in X_test[i]:
lst=[]
lst.append(l*255)
lst.append(l*255)
lst.append(l*255)
new_X_test.append(lst)
im2 = Image.fromarray((np.array(new_X_test, dtype='uint8')).reshape(shape))
im2.show()<jupyter_output>Unknown Name
<jupyter_text>### Where were our faces assigned? - Axel<jupyter_code>index_lst=[]
for i in X_people[(original_len+len(lst_an)):((original_len+len(lst_an))+len(lst_ax))]:
x=0
for j in X_test:
if len(list(i))== len(list(j)) and len(list(i)) == sum([1 for k, l in zip(list(i), list(j)) if k == l]):
index_lst.append(x)
x+=1
for q in index_lst:
try:
print(people.target_names[knn.predict([X_test[q]])[0]])
except:
print("Unknown name")
for i in index_lst:
x=0
lst_h=[]
for j in range(0, 87):
lst_w=[]
for k in range(0, 65):
lst=[]
lst.append(X_test[i][x])
lst.append(X_test[i][x])
lst.append(X_test[i][x])
lst_w.append(lst)
x+=1
lst_h.append(lst_w)
n = np.array(lst_h)*255
shape = n.shape
new_X_test=[]
for l in X_test[i]:
lst=[]
lst.append(l*255)
lst.append(l*255)
lst.append(l*255)
new_X_test.append(lst)
im2 = Image.fromarray((np.array(new_X_test, dtype='uint8')).reshape(shape))
im2.show()<jupyter_output><empty_output><jupyter_text>### Where were our faces assigned? - Filip<jupyter_code>index_lst=[]
for i in X_people[((original_len+len(lst_an))+len(lst_ax)):(((original_len+len(lst_an))+len(lst_ax))+len(lst_fi))]:
x=0
for j in X_test:
if len(list(i))== len(list(j)) and len(list(i)) == sum([1 for k, l in zip(list(i), list(j)) if k == l]):
index_lst.append(x)
x+=1
for q in index_lst:
try:
print(people.target_names[knn.predict([X_test[q]])[0]])
except:
print("Unknown name")
for i in index_lst:
x=0
lst_h=[]
for j in range(0, 87):
lst_w=[]
for k in range(0, 65):
lst=[]
lst.append(X_test[i][x])
lst.append(X_test[i][x])
lst.append(X_test[i][x])
lst_w.append(lst)
x+=1
lst_h.append(lst_w)
n = np.array(lst_h)*255
shape = n.shape
new_X_test=[]
for l in X_test[i]:
lst=[]
lst.append(l*255)
lst.append(l*255)
lst.append(l*255)
new_X_test.append(lst)
im2 = Image.fromarray((np.array(new_X_test, dtype='uint8')).reshape(shape))
im2.show()<jupyter_output><empty_output><jupyter_text>### Where were our faces assigned? - Gustav<jupyter_code>index_lst=[]
for i in X_people[(((original_len+len(lst_an))+len(lst_ax))+len(lst_fi)):((((original_len+len(lst_an))+len(lst_ax))+len(lst_fi))+len(lst_gu))]:
x=0
for j in X_test:
if len(list(i))== len(list(j)) and len(list(i)) == sum([1 for k, l in zip(list(i), list(j)) if k == l]):
index_lst.append(x)
x+=1
for q in index_lst:
try:
print(people.target_names[knn.predict([X_test[q]])[0]])
except:
print("Unknown name")
for i in index_lst:
x=0
lst_h=[]
for j in range(0, 87):
lst_w=[]
for k in range(0, 65):
lst=[]
lst.append(X_test[i][x])
lst.append(X_test[i][x])
lst.append(X_test[i][x])
lst_w.append(lst)
x+=1
lst_h.append(lst_w)
n = np.array(lst_h)*255
shape = n.shape
new_X_test=[]
for l in X_test[i]:
lst=[]
lst.append(l*255)
lst.append(l*255)
lst.append(l*255)
new_X_test.append(lst)
im2 = Image.fromarray((np.array(new_X_test, dtype='uint8')).reshape(shape))
im2.show()<jupyter_output>Unknown name
Alvaro Uribe
Unknown name
Unknown name
Unknown name
<jupyter_text>### Where were our faces assigned? - Kevin<jupyter_code>index_lst=[]
for i in X_people[((((original_len+len(lst_an))+len(lst_ax))+len(lst_fi))+len(lst_gu)):(((((original_len+len(lst_an))+len(lst_ax))+len(lst_fi))+len(lst_gu))+len(lst_ke))]:
x=0
for j in X_test:
if len(list(i))== len(list(j)) and len(list(i)) == sum([1 for k, l in zip(list(i), list(j)) if k == l]):
index_lst.append(x)
x+=1
for q in index_lst:
try:
print(people.target_names[knn.predict([X_test[q]])[0]])
except:
print("Unknown name")
for i in index_lst:
x=0
lst_h=[]
for j in range(0, 87):
lst_w=[]
for k in range(0, 65):
lst=[]
lst.append(X_test[i][x])
lst.append(X_test[i][x])
lst.append(X_test[i][x])
lst_w.append(lst)
x+=1
lst_h.append(lst_w)
n = np.array(lst_h)*255
shape = n.shape
new_X_test=[]
for l in X_test[i]:
lst=[]
lst.append(l*255)
lst.append(l*255)
lst.append(l*255)
new_X_test.append(lst)
im2 = Image.fromarray((np.array(new_X_test, dtype='uint8')).reshape(shape))
im2.show()<jupyter_output><empty_output><jupyter_text>### Where were our faces assigned? - Nils<jupyter_code>index_lst=[]
for i in X_people[(((((original_len+len(lst_an))+len(lst_ax))+len(lst_fi))+len(lst_gu))+len(lst_ke)):((((((original_len+len(lst_an))+len(lst_ax))+len(lst_fi))+len(lst_gu))+len(lst_ke))+len(lst_ni))]:
x=0
for j in X_test:
if len(list(i))== len(list(j)) and len(list(i)) == sum([1 for k, l in zip(list(i), list(j)) if k == l]):
index_lst.append(x)
x+=1
for q in index_lst:
try:
print(people.target_names[knn.predict([X_test[q]])[0]])
except:
print("Unknown name")
for i in index_lst:
x=0
lst_h=[]
for j in range(0, 87):
lst_w=[]
for k in range(0, 65):
lst=[]
lst.append(X_test[i][x])
lst.append(X_test[i][x])
lst.append(X_test[i][x])
lst_w.append(lst)
x+=1
lst_h.append(lst_w)
n = np.array(lst_h)*255
shape = n.shape
new_X_test=[]
for l in X_test[i]:
lst=[]
lst.append(l*255)
lst.append(l*255)
lst.append(l*255)
new_X_test.append(lst)
im2 = Image.fromarray((np.array(new_X_test, dtype='uint8')).reshape(shape))
im2.show()<jupyter_output>Unknown name
Unknown name
Unknown name
<jupyter_text>### Where were our faces assigned? - Vincent<jupyter_code>index_lst=[]
for i in X_people[(((((((original_len+len(lst_an))+len(lst_ax))+len(lst_fi))+len(lst_gu))+len(lst_ke))+len(lst_ni))+len(lst_pi)):((((((((original_len+len(lst_an))+len(lst_ax))+len(lst_fi))+len(lst_gu))+len(lst_ke))+len(lst_ni))+len(lst_pi))+len(lst_vi))]:
x=0
for j in X_test:
if len(list(i))== len(list(j)) and len(list(i)) == sum([1 for k, l in zip(list(i), list(j)) if k == l]):
index_lst.append(x)
x+=1
for q in index_lst:
try:
print(people.target_names[knn.predict([X_test[q]])[0]])
except:
print("Unknown name")
for i in index_lst:
x=0
lst_h=[]
for j in range(0, 87):
lst_w=[]
for k in range(0, 65):
lst=[]
lst.append(X_test[i][x])
lst.append(X_test[i][x])
lst.append(X_test[i][x])
lst_w.append(lst)
x+=1
lst_h.append(lst_w)
n = np.array(lst_h)*255
shape = n.shape
new_X_test=[]
for l in X_test[i]:
lst=[]
lst.append(l*255)
lst.append(l*255)
lst.append(l*255)
new_X_test.append(lst)
im2 = Image.fromarray((np.array(new_X_test, dtype='uint8')).reshape(shape))
im2.show()<jupyter_output>Jack Straw
Unknown name
Unknown name
Unknown name
Unknown name
<jupyter_text>### Where were our faces assigned? - Philip<jupyter_code>index_lst=[]
for i in X_people[((((((original_len+len(lst_an))+len(lst_ax))+len(lst_fi))+len(lst_gu))+len(lst_ke))+len(lst_ni)):(((((((original_len+len(lst_an))+len(lst_ax))+len(lst_fi))+len(lst_gu))+len(lst_ke))+len(lst_ni))+len(lst_pi))]:
x=0
for j in X_test:
if len(list(i))== len(list(j)) and len(list(i)) == sum([1 for k, l in zip(list(i), list(j)) if k == l]):
index_lst.append(x)
x+=1
for q in index_lst:
try:
print(people.target_names[knn.predict([X_test[q]])[0]])
except:
print("Unknown name")
for i in index_lst:
x=0
lst_h=[]
for j in range(0, 87):
lst_w=[]
for k in range(0, 65):
lst=[]
lst.append(X_test[i][x])
lst.append(X_test[i][x])
lst.append(X_test[i][x])
lst_w.append(lst)
x+=1
lst_h.append(lst_w)
n = np.array(lst_h)*255
shape = n.shape
new_X_test=[]
for l in X_test[i]:
lst=[]
lst.append(l*255)
lst.append(l*255)
lst.append(l*255)
new_X_test.append(lst)
im2 = Image.fromarray((np.array(new_X_test, dtype='uint8')).reshape(shape))
im2.show()<jupyter_output><empty_output><jupyter_text>## With PCA<jupyter_code># Again we see if the model performs better when looking at PCA components in comparison to raw pixel data
pca = PCA(n_components=100, whiten=True, random_state=0).fit(X_train)
X_train_pca = pca.transform(X_train)
X_test_pca = pca.transform(X_test)
knn = GridSearchCV(MLPClassifier(), param_grid=parameter_space, cv=10, scoring="accuracy")
knn.fit(X_train_pca, y_train)
print("Train set accuracy: {:.2f}".format(knn.score(X_train_pca, y_train)))
print("Test set accuracy: {:.2f}".format(knn.score(X_test_pca, y_test)))<jupyter_output>Train set accuracy: 1.00
Test set accuracy: 0.23
<jupyter_text>## Using a model built from keras
Using the steps from https://stackabuse.com/image-recognition-in-python-with-tensorflow-and-keras we try to build a CNN model from scratch to see how it compares to the MLPClassifier.
Note: this model only works for large enough data sets.<jupyter_code>from keras.models import Sequential
from keras.layers import Dense, Dropout, Flatten, BatchNormalization, Activation
from keras.layers.convolutional import Conv2D, MaxPooling2D
from keras.constraints import maxnorm
from keras.utils import np_utils
y_train = np_utils.to_categorical(y_train)
y_test = np_utils.to_categorical(y_test)
class_num = y_test.shape[1]
model = Sequential()
the_X_train=[]
for i in X_train:
x=0
lst_h=[]
for j in range(0, 87):
lst_w=[]
for k in range(0, 65):
lst=[]
lst.append(i[x])
lst.append(i[x])
lst.append(i[x])
lst_w.append(lst)
x+=1
lst_h.append(lst_w)
the_X_train.append(lst_h)
X_train=np.array(the_X_train)
the_X_test=[]
for i in X_test:
x=0
lst_h=[]
for j in range(0, 87):
lst_w=[]
for k in range(0, 65):
lst=[]
lst.append(i[x])
lst.append(i[x])
lst.append(i[x])
lst_w.append(lst)
x+=1
lst_h.append(lst_w)
the_X_test.append(lst_h)
X_test=np.array(the_X_test)
model.add(Conv2D(65, (6, 6), input_shape=X_train.shape[1:], padding='same'))
model.add(Activation('relu'))
model.add(Dropout(0.2))
model.add(BatchNormalization())
model.add(Conv2D(130, (6, 6), padding='same'))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.2))
model.add(BatchNormalization())
model.add(Conv2D(130, (6, 6), padding='same'))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.2))
model.add(BatchNormalization())
model.add(Conv2D(260, (6, 6), padding='same'))
model.add(Activation('relu'))
model.add(Dropout(0.2))
model.add(BatchNormalization())
model.add(Flatten())
model.add(Dropout(0.2))
model.add(Dense(256, kernel_constraint=maxnorm(3)))
model.add(Activation('relu'))
model.add(Dropout(0.2))
model.add(BatchNormalization())
model.add(Dense(128, kernel_constraint=maxnorm(3)))
model.add(Activation('relu'))
model.add(Dropout(0.2))
model.add(BatchNormalization())
model.add(Dense(class_num))
model.add(Activation('softmax'))
epochs = 25
optimizer = 'adam'
model.compile(loss='categorical_crossentropy', optimizer=optimizer, metrics=['accuracy'])
print(model.summary())
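# Optional sketch (assuming a reasonably recent Keras): early stopping is a cheap guard
# against overfitting on a dataset this small:
#   from keras.callbacks import EarlyStopping
#   early_stop = EarlyStopping(monitor='val_loss', patience=3)
# and then pass callbacks=[early_stop] to model.fit below.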
model.fit(X_train, y_train, validation_data=(X_test, y_test), epochs=epochs, batch_size=64) # Note that the validation data is equal to the test data. This is an incorrect use of the test data. The reason why I make this shortcut is that I just want an idea of how the model will perform
scores = model.evaluate(X_test, y_test, verbose=0)
print("Accuracy: %.2f%%" % (scores[1]*100))<jupyter_output><empty_output><jupyter_text>## Can the machine correctly classify the names belonging to the faces if we use Nod only?
If we remove all the noise (i.e. the lfw_people dataset), will the machine have an easier time connecting the faces of Nod to the correct name?<jupyter_code>X_people = np.append(lst_an, lst_ax, axis=0)
X_people = np.append(X_people, lst_fi, axis=0)
X_people = np.append(X_people, lst_gu, axis=0)
X_people = np.append(X_people, lst_ke, axis=0)
X_people = np.append(X_people, lst_ni, axis=0)
X_people = np.append(X_people, lst_pi, axis=0)
X_people = np.append(X_people, lst_vi, axis=0)
X_people = X_people/255
y_people = np.append(np.array([1 for i in range(len(lst_an))]), np.array([2 for i in range(len(lst_ax))]), axis=0)
y_people = np.append(y_people, np.array([3 for i in range(len(lst_fi))]), axis=0)
y_people = np.append(y_people, np.array([4 for i in range(len(lst_gu))]), axis=0)
y_people = np.append(y_people, np.array([5 for i in range(len(lst_ke))]), axis=0)
y_people = np.append(y_people, np.array([6 for i in range(len(lst_ni))]), axis=0)
y_people = np.append(y_people, np.array([7 for i in range(len(lst_pi))]), axis=0)
y_people = np.append(y_people, np.array([8 for i in range(len(lst_vi))]), axis=0)<jupyter_output><empty_output><jupyter_text>### Model selector<jupyter_code>classifiers=[RandomForestClassifier(), SVC(), LogisticRegression(), DecisionTreeClassifier(), KNeighborsClassifier(), GaussianNB(), RidgeClassifier(), MLPClassifier()]
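# Optional: check how many images each person (class) has, since heavily imbalanced classes
# make the accuracy scores below harder to interpret.
print(dict(zip(*np.unique(y_people, return_counts=True))))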
X_train, X_test, y_train, y_test = train_test_split(X_people, y_people)
# Only run once
#for i in classifiers:
# grid = GridSearchCV(i, cv=10, scoring = "accuracy", param_grid={})
# try:
# grid.fit(X_train, y_train)
# except:
# grid.fit(X_train.toarray(), y_train)
# print(i, grid.best_score_, "accuracy")
# print("")<jupyter_output><empty_output><jupyter_text>### MLPClassifier() with accuracy scoring<jupyter_code>parameter_space = {
'hidden_layer_sizes': [(50,50,50), (50,100,50), (100,)],
'activation': ['tanh', 'relu'],
'solver': ['sgd', 'adam'],
'alpha': [0.0001, 0.05],
'learning_rate': ['constant','adaptive'],
}
knn = GridSearchCV(MLPClassifier(), param_grid=parameter_space, cv=5, scoring="accuracy")
knn.fit(X_train, y_train)
print("Train set score: {:.2f}".format(knn.score(X_train, y_train)))
print("Test set score: {:.2f}".format(knn.score(X_test, y_test)))
# Again we see if the model performs better when looking at PCA components in comparison to raw pixel data
pca = PCA(n_components=25, whiten=True, random_state=0).fit(X_train)
X_train_pca = pca.transform(X_train)
X_test_pca = pca.transform(X_test)
knn = GridSearchCV(MLPClassifier(), param_grid=parameter_space, cv=5, scoring="accuracy")
knn.fit(X_train_pca, y_train)
print("Train set accuracy: {:.2f}".format(knn.score(X_train_pca, y_train)))
print("Test set accuracy: {:.2f}".format(knn.score(X_test_pca, y_test)))<jupyter_output>Train set accuracy: 1.00
Test set accuracy: 0.59
|
no_license
|
/is it nod.ipynb
|
MadSeved/Project-6-Nod-Analytics-Bootcamp
| 38 |
<jupyter_start><jupyter_text>## ML Project
In this Capstone Project we will be engaging with the stock price data for several hundred stocks over 5 years, which can be found at:
(./Data/prices-split-adjusted.csv.zip). The data is from [Kaggle](https://www.kaggle.com/dgawlik/nyse#prices-split-adjusted.csv) and is available in its original form there under a CC0 license. We will be using a slightly preprocessed version in the repo.
In this notebook we will be looking at financial forecasting with machine learning. This is inherently one of the hardest problems in machine learning, because some of the most advanced and well-funded technical teams in the world are trying to use machine learning and other techniques to find patterns in the financial data. When they find patterns and if they trade on those findings, prices will move in a way that makes those patterns less pronounced over time. This is not to say that this is not a fun and rewarding area. Just do not get discouraged if you don't find an instant money machine.
### Outline:
0. Background
1. Preparing our tools
2. Importing and describing the data
3. Exploring, cleaning and visualizing the data
4. Developing analytics
5. Preparing and splitting our data
6. Building our first model
7. Extending to other ML models
8. Ideas for further strategies
9. Wrapping up
### Options
As we progress you are encouraged to take this dataset further. You are also encouraged to explore any aspects of the data. Develop your own algorithms. Be explicit about your inquiry and success in predicting effects on our world.
### Warning: Not financial advice
This exercise is meant purely for educational purposes, uses many simplifications, and is not intended, nor should it be considered, as financial advice. There are many risks involved in implementing financial trading strategies that are not considered or described here.
### Setting up
If you have not yet set up your environment, you can easily do so with VS Code, and the python extension and Anaconda.
For VSCode go here: [https://code.visualstudio.com/]
and then you can follow these instructions:
[https://code.visualstudio.com/docs/python/data-science-tutorial]
0. Background
Machine learning is of increasing importance in finance. As volumes of data grow ever faster, the need for machine-driven models to find patterns in that data becomes ever more important. In the ever-accelerating race to better process data into predictions about securities prices, machine learning has become an important tool, because it is good at finding patterns in large amounts of data. Today we will be examining patterns in stock prices themselves to practice developing models that predict future stock prices. If there are systematic trends, patterns or reversals, then we may detect them.
While the chances that we discover totally new and unexploited price patterns today are low, we will practice organizing our data and creating and analyzing machine learning models, which will give us the tools to develop state-of-the-art signals of value.
Goals:
1. Become familiar and practice the process of building machine learning models as they relate to financial data.
2. Understand the special processing that is required when working with time series data such as those found in finance.
### 1. Preparing our tools.
Let's review our standard imports:
- numpy for rapid numerical calculations with fast vectorized C implementations
- pandas for processing data
- matplotlib and Seaborn for visualizing charts
- scikit-learn (imported as sklearn) is the de facto standard machine learning library in the pydata ecosystem.
Additionally, we will be using [pandas_profiling](https://github.com/pandas-profiling/pandas-profiling) which is a newer convenience package that helps by putting together much of our initial *boilerplate* exploratory data analysis code.
<jupyter_code># Bring our tools in:
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
from pandas_profiling import ProfileReport
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import GridSearchCV
from sklearn.linear_model import LinearRegression
from sklearn.ensemble import RandomForestRegressor, ExtraTreesRegressor
from sklearn.model_selection import TimeSeriesSplit
%matplotlib inline<jupyter_output><empty_output><jupyter_text>### Importing and describing the data
Now we are ready to import our data. The value rows can be set if you only want to import a small subset due to computer memory or speed constraints.<jupyter_code># We are using information from the data source description to know that date is a column containing just what is says
rows = None
stocks = pd.read_csv('./Data/prices-split-adjusted.csv.zip', nrows=rows, parse_dates=['date'], index_col='date')<jupyter_output><empty_output><jupyter_text>Now that we have our data successfully loaded, let's explore what we have. First, summarize the dataframe via the info method to validate the data reading and parsing. When looking at the info report, it is best practice to note that each column is the expected type, noting that strings are reported as object. Also note if there are null values, how many values there are and what the columns are.
We do not have a data dictionary in this case. But if you were lucky enough to have access to a data dictionary, this is a good time to check that the dictionary matches what you actually have. Discrepancies could be the result of mis-parsing, undocumented schema changes, documentation that is not up to date, or a number of other reasons. <jupyter_code>stocks.info() # Look at the descriptions of the columns<jupyter_output><class 'pandas.core.frame.DataFrame'>
DatetimeIndex: 851264 entries, 2016-01-05 to 2016-12-30
Data columns (total 6 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 symbol 851264 non-null object
1 open 851264 non-null float64
2 close 851264 non-null float64
3 low 851264 non-null float64
4 high 851264 non-null float64
5 volume 851264 non-null float64
dtypes: float64(5), object(1)
memory usage: 42.2+ MB
<jupyter_text>Everything looks as expected in info. The string column symbol is the trading symbol, also known in finance as the ticker. The dates were parsed as expected and all the other columns are numeric. Now we can look at the first few rows of data to get a sample view. <jupyter_code>stocks.head(10)<jupyter_output><empty_output><jupyter_text>While the head gives a good preview of a piece of the data, it may not be a great overall view of the entire dataset, especially for larger data sets or ones that may have been sorted at some point. However, we are more comfortable that the dates were parsed correctly. We can investigate numerical columns with describe. <jupyter_code>stocks.describe()<jupyter_output><empty_output><jupyter_text>
#### Pandas Profiling
Data exploration can start with a turnkey tool like pandas-profiling. The important this with this is to make sure you actually look at the report and digest the output. Make it the start of your investigation.Instructor note:
Let students take a minute to look at this report. See what they find. You may want to do a think, pair, share or another form of reflection. We run this report with minimal=True because the data set is large, and this avoids slow calculations.
Then go through the report and show students what they should be looking at for example: the number of variables, the number of observations, duplicate rows.
Be sure to point out that there is a warning that symbol has high cardinality. This warning is OK because we have data on a large number of stocks. It is expected with this data set and is typical of many finance data sets where there are many instruments.
Students may be interested to note the long tail in the numeric columns that can be seen in the histograms of each variable (in the variables section). The histogram combines data from many different stocks so the variation in variables like closing price "close", or in "volume" is greater and shows a long tail.
You can show the output in either a more dynamic widget or in an iframe using either of these lines of code:
profile.to_widgets()
profile.to_notebook_iframe()
<jupyter_code># Minimal avoids expensive calculations that won't have much insight for us and are slow.
profile = ProfileReport(stocks, minimal=True)
profile.to_widgets()<jupyter_output>Summarize dataset: 100%|██████████| 16/16 [00:01<00:00, 9.84it/s, Completed]
Generate report structure: 100%|██████████| 1/1 [00:01<00:00, 1.90s/it]
Render widgets: 100%|██████████| 1/1 [00:04<00:00, 4.72s/it]<jupyter_text>#### Exercise:
Examine this report and see what insights you notice. What do you notice about the data? Are there ways to slice the data that would give more information? Instructor notes:
Students can take a few minutes to do their own exploration. If students don't see much or you covered this thoroughly, you can suggest that they dig into a particular stock by running a report on an individual stock, letting them choose their own symbol. A sample code response follows the empty cell provided for student responses.
In the provided answer highlighting Microsoft, you can see that the variables have some tail, but not the same extremes as when many stocks were combined. Point out that if they were doing this at work, they would probably look at each stock individually and really get to know their data. Or at least do so for a representative sample, if not every stock. <jupyter_code># Blank cell left for student exploration
# Instructor sample answer
profile = ProfileReport(stocks[stocks['symbol']=='MSFT'], minimal=False)
profile.to_widgets()<jupyter_output>Summarize dataset: 100%|██████████| 16/16 [00:00<00:00, 94.12it/s, Completed]
Generate report structure: 100%|██████████| 1/1 [00:01<00:00, 1.36s/it]
Render widgets: 100%|██████████| 1/1 [00:03<00:00, 3.89s/it]<jupyter_text>Looking at an individual stock gives a much clearer impression of the distributions of each column. Even if you can not do this for every stock, taking a sample can be very helpful.
### 2. Exploring, cleaning and visualizing the data
#### Modeling our data
One of the most important steps in preparing your data is to think about how we want to model it. For those who have worked with SQL, this is like identifying what the key to your table is. For this data exploration, we can think of the index into the data as being a compound key of both the symbol (the ticker) and the date. We will process the data in a way that will make a time series model for each stock: predicting the future based on what has happened in the past for that individual stock.
Thought Question: Can you think of other ways that you might want to consider this data? What questions might prompt you to think of this data having a different data model? The data model is not fixed but is a lens that lets us look at our data.
Instructor notes: (Sample answer) Perhaps we would be interested in studying certain holidays affect the variance in volumes? Perhaps we have a hypothesis that just a few stocks heavily traded on holidays and most others have light trading. Then we might only have the date as our key as we consider cross sectional data.
Hopefully students will have other answers. #### Visualizing our data
It is often helpful to look at our data visually to see if there are any issues that "look funny." With experience, just looking at the data can help us understand it in a short amount of time. This is often the step where domain expertise (in this case the financial markets) is especially useful. Be sure to look at the histograms of the report above to see if they make sense to you.
Exercise:
Are there other visualizations that would give you greater understanding?<jupyter_code># left blank for student answer
# Instructor notes
# Let's look at how many stocks have data for each date
counts = stocks[['symbol']].groupby('date').count()
counts.head(20)
# Instructor notes continued
# This is much clearer with a quick visualization
sns.barplot(x=counts.index, y=counts['symbol'])
plt.title('Number of stocks with data on each date')<jupyter_output><empty_output><jupyter_text>Instructor notes:
This display shows us that the most stocks are available on the latest date. This is an indication that we may have survivor bias. A bias in financial datasets where securities representing firms that go out of business are not included.
We cannot yet be sure whether we have accounted for the survivorship bias, but it is something to note. Students might want to investigate this later. ### 3. Feature engineering
For modelling data with machine learning, it is helpful to transform the data into a form that is closer to the theoretical expectations where the ML models should perform well. Let's transform the data into returns and generate other features. We will transform returns with logarithms based on financial research that log returns are closer to normally distributed and (statistically) stable.
The function below is just a sample of feature transformations. Taking the logarithms can help deal with skewed data as we saw we have in the pandas-profile report.
To be honest, which variables you use and how you transform them is largely dependent on domain expertise and traditions of the field. It can also be a matter of trial and error, although that can lead to overfitting. We will discuss overfitting a little bit later.<jupyter_code>def feature_target_generation(df):
"""
df: a pandas dataframe containing numerical columns
num_days_ahead: an integer that can be used to shift the prediction value from the future into a prior row.
"""
# The following line ensures the data is in date order
features = pd.DataFrame(index=df.index).sort_index()
features['f01'] = np.log(df.close / df.open) # intra-day log return
features['f02'] = np.log(df.open / df.close.shift(1)) # overnight log return
features['f03'] = df.volume # try both regular and log volume
features['f04'] = np.log(df.volume)
features['f05'] = df.volume.diff() # 1-day absolute change in volume
features['f06'] = df.volume.pct_change() # 1-day relative change in volume
# The following are rolling averages of different periods
features['f07'] = df.volume.rolling(5, min_periods=1).mean().apply(np.log)
features['f08'] = df.volume.rolling(10, min_periods=1).mean().apply(np.log)
features['f09'] = df.volume.rolling(30, min_periods=1).mean().apply(np.log)
# More of our original data: low, high and close
features['f10'] = df.low
features['f11'] = df.high
features['f12'] = df.close
# The Intraday trading spread measures how far apart the high and low are
features['f13'] = df.high - df.low
# These are log returns over different time periods
features['f14'] = np.log(df.close / df.close.shift(1)) # 1 day log return
features['f15'] = np.log(df.close / df.close.shift(5)) # 5 day log return
features['f16'] = np.log(df.close / df.close.shift(10)) # 10 day log return
return features<jupyter_output><empty_output><jupyter_text>In Machine Learning, we need to predict something, typically called our target or predictor.
Let's predict the value of the stock price a few days into the future, using "prediction_horizon" (set to 5 trading days in the code below). The horizon is a hyperparameter that is somewhat arbitrary. We may want to try different horizons to see if we are better at predicting the near future or the long term.
The ticker lets us start by testing on a single stock for speed in training.
We want to look at different periods of history to see how we do. We will use overlapping sets with n_splits. <jupyter_code># Let's generate a list of tickers so we can easily select them
ticker_list = stocks.symbol.unique()
# these are hyperparameters you can play with or tune
prediction_horizon = -5 # this is a negative number by convention
ticker = 'MSFT' # choose any ticker
n_splits = 5
# Make an individual model for each ticker/symbol
features = feature_target_generation(stocks[stocks.symbol==ticker])<jupyter_output><empty_output><jupyter_text>### 5. Preparing and splitting our data
It is important that we separate our training and test data. Since our rows are already time ordered, we can easily do splits. This is one of the areas where times series data is different from other machine learning problems. <jupyter_code># We are trying to predict the price prediction_horizon days in the future. So we take the future value and move it prediction_horizon into the past to line up our data in the Scikit-learn format.
y = features.f12.shift(prediction_horizon)
# The latest (prediction_horizon) rows will have nans because we have no future data, so let's drop them.
shifted = ~np.isnan(y)
X = features[y.notna()] # Remove the rows that do not have valid target values
y = y[shifted] # Remove the rows that do not have valid target values
# Split the history into different backtesting regimes
tscv = TimeSeriesSplit(n_splits=n_splits)
print(tscv)
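# Optional: peek at the date ranges each backtest fold will use, to confirm that every
# test window comes strictly after its training window.
for fold, (tr_idx, te_idx) in enumerate(tscv.split(X)):
    print(f"Fold {fold}: train {X.index[tr_idx].min().date()} -> {X.index[tr_idx].max().date()}, "
          f"test {X.index[te_idx].min().date()} -> {X.index[te_idx].max().date()}")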
# Review the features
features.info()<jupyter_output><class 'pandas.core.frame.DataFrame'>
DatetimeIndex: 1762 entries, 2010-01-04 to 2016-12-30
Data columns (total 16 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 f01 1762 non-null float64
1 f02 1761 non-null float64
2 f03 1762 non-null float64
3 f04 1762 non-null float64
4 f05 1761 non-null float64
5 f06 1761 non-null float64
6 f07 1762 non-null float64
7 f08 1762 non-null float64
8 f09 1762 non-null float64
9 f10 1762 non-null float64
10 f11 1762 non-null float64
11 f12 1762 non-null float64
12 f13 1762 non-null float64
13 f14 1761 non-null float64
14 f15 1757 non-null float64
15 f16 1752 non-null float64
dtypes: float64(16)
memory usage: 234.0 KB
<jupyter_text>### 6. Building our first model
This is a regression problem. Why is that so?
Instructor notes:
This is a regression problem because it is predicting a price, which is a continuous value. #### Linear regression
In our ML framework we can use linear regression, just as in standard statistics or econometrics.<jupyter_code>def model_ts_report(model, tscv, X, y, impute=False):
"""
Fit the model and then run time series backtests.
"""
# Loop through the backtests
for train_ind, test_ind in tscv.split(X):
# Report on the time periods
print(f'Train is from {X.iloc[train_ind].index.min()} to {X.iloc[train_ind].index.max()}. ')
print(f'Test is from {X.iloc[test_ind].index.min()} to {X.iloc[test_ind].index.max()}. ')
# Generate training and testing features and target for each fold.
X_train, X_test = X.iloc[train_ind], X.iloc[test_ind]
y_train, y_test = y.iloc[train_ind], y.iloc[test_ind]
if impute==True:
# Since linear regression cannot deal with NaN, we need to impute. There may be better choices.
X_train.fillna(0, inplace=True)
X_test.fillna(0, inplace=True)
# Fit the model
model.fit(X_train, y_train)
# Predict and measure on the training data
y_pred_train = model.predict(X_train)
print("Training results:")
print("RMSE:", mean_squared_error(y_train, y_pred_train, squared=False))
# Predict and measure on the testing data
y_pred_test = model.predict(X_test)
print("Test results:")
print("RMSE:", mean_squared_error(y_test, y_pred_test, squared=False))
print("")
from sklearn.linear_model import LinearRegression
# Fit and report on a linear model
lm = LinearRegression()
model_ts_report(lm, tscv, X, y, impute=True)<jupyter_output>Train is from 2010-01-04 00:00:00 to 2011-03-08 00:00:00.
Test is from 2011-03-09 00:00:00 to 2012-05-03 00:00:00.
Training results:
RMSE: 0.7646691835738095
Test results:
RMSE: 0.8845462401138319
Train is from 2010-01-04 00:00:00 to 2012-05-03 00:00:00.
Test is from 2012-05-04 00:00:00 to 2013-07-03 00:00:00.
Training results:
RMSE: 0.8046921042807038
Test results:
RMSE: 0.9584479421164025
Train is from 2010-01-04 00:00:00 to 2013-07-03 00:00:00.
Test is from 2013-07-05 00:00:00 to 2014-08-29 00:00:00.
Training results:
RMSE: 0.838852087862089
Test results:
RMSE: 1.2199724751308523
Train is from 2010-01-04 00:00:00 to 2014-08-29 00:00:00.
Test is from 2014-09-02 00:00:00 to 2015-10-27 00:00:00.
Training results:
RMSE: 0.913565597444975
Test results:
RMSE: 1.8578363718516178
Train is from 2010-01-04 00:00:00 to 2015-10-27 00:00:00.
Test is from 2015-10-28 00:00:00 to 2016-12-22 00:00:00.
Training results:
RMSE: 1.1573573014110945
Test results:
RMSE: 1.5623450048029346
<jupyter_text>As we look at our results, we can see that in each period we do better in training than in testing. That is typical of finance and machine learning more generally.
Interestingly, we are able to explain 90% of the variance with our first linear model.
If you have questions about the root-mean-squared-error (RMSE), please see the Microsoft learn module on machine learning.
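For reference, RMSE is the square root of the average squared prediction error, $\mathrm{RMSE} = \sqrt{\tfrac{1}{n}\sum_{i=1}^{n}(y_i - \hat{y}_i)^2}$, and `mean_squared_error(..., squared=False)` in the report above returns exactly this quantity.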
#### Ensemble Model
Let's try a Random Forest, which is a commonly used model that blends a group of decision trees, each of which has access to a sub-sample of features. It is commonly used because it tends to work well with relatively little tuning of hyperparameters and is somewhat less likely to overfit. This is NOT a classic model commonly used in econometrics, mainly because it does not have a linear structure that corresponds with econometric theory. However, it is often used in predictions for items other than finance.
<jupyter_code># Initiate a Random Forest
rf = RandomForestRegressor()
model_ts_report(rf, tscv, X, y, impute=True) # Report on the random forest<jupyter_output>Train is from 2010-01-04 00:00:00 to 2011-03-08 00:00:00.
Test is from 2011-03-09 00:00:00 to 2012-05-03 00:00:00.
Training results:
RMSE: 0.22906548918172076
Test results:
RMSE: 1.1612559773386029
Train is from 2010-01-04 00:00:00 to 2012-05-03 00:00:00.
Test is from 2012-05-04 00:00:00 to 2013-07-03 00:00:00.
Training results:
RMSE: 0.2465235392103238
Test results:
RMSE: 1.365068531773533
Train is from 2010-01-04 00:00:00 to 2013-07-03 00:00:00.
Test is from 2013-07-05 00:00:00 to 2014-08-29 00:00:00.
Training results:
RMSE: 0.2520507729994823
Test results:
RMSE: 5.386514324019596
Train is from 2010-01-04 00:00:00 to 2014-08-29 00:00:00.
Test is from 2014-09-02 00:00:00 to 2015-10-27 00:00:00.
Training results:
RMSE: 0.277340627161955
Test results:
RMSE: 2.9180427728997618
Train is from 2010-01-04 00:00:00 to 2015-10-27 00:00:00.
Test is from 2015-10-28 00:00:00 to 2016-12-22 00:00:00.
Training results:
RMSE: 0.3826987727549569
Test results:
RMSE: 5.252229327251081
<jupyter_text>By using a Random Forest, we are able to bring down our training data error metrics by a lot (over 50% decrease on RMSE)! However, our test results are not as good for most of the time slices. This is an indication that we are *overfitting* our model to the training data. This model would likely not do as well in production as it would in our backtests. Why? Because our models have not done well on any of the new data, outside the time period during which they were trained.
### Open Ended Exercise
Now it is your turn to go ahead and improve these models. Some areas that might help could be to:
- Tune the existing models (Random forest has a number of parameters that may help; see the tuning sketch after this list)
- Clean the existing data (Fill missing values better)
- Try other models such as Support Vector Regressor, Extra Trees Regressor or ElasticNet
- Try this for more stocks (Just because it did not work for one stock, it may still be useful for most stocks)
- Get more features, through transformations or outside data
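As a hedged sketch of the first bullet, the snippet below wires a small randomized hyperparameter search over the random forest, using `TimeSeriesSplit` so the cross-validation folds respect time order. The parameter values are illustrative assumptions rather than tuned recommendations, and the `neg_root_mean_squared_error` scorer assumes a reasonably recent scikit-learn.

```python
from sklearn.model_selection import RandomizedSearchCV

param_distributions = {
    'n_estimators': [100, 200, 400],
    'max_depth': [3, 5, 10, None],
    'min_samples_leaf': [1, 5, 20],
}
search = RandomizedSearchCV(
    RandomForestRegressor(random_state=0),
    param_distributions=param_distributions,
    n_iter=10,
    cv=TimeSeriesSplit(n_splits=n_splits),
    scoring='neg_root_mean_squared_error',
    random_state=0,
)
search.fit(X.fillna(0), y)  # same simple imputation as in model_ts_report
print('Best parameters:', search.best_params_)
print('Best CV RMSE:', -search.best_score_)
```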
Instructor notes:
Here is an extra trees regressor. This may do an even better job on training but it still is overfitting. To have a production useable model we would need to improve performance on the test set. <jupyter_code># train an extra trees regressor
et = ExtraTreesRegressor(n_estimators=100, max_depth=4)
model_ts_report(et, tscv, X, y, impute=True) # Report on the extra trees, an extension of the random forest
# XGBoost is a gradient boosting model from a separate package (xgboost), outside scikit-learn
from xgboost import XGBRegressor
# the only change: swap in the XGBoost regressor and rerun the same backtest report
xgb = XGBRegressor(n_jobs=4, n_estimators=200)
model_ts_report(xgb, tscv, X, y, impute=True) # Report on the extra trees, an extension of the random forest<jupyter_output>Train is from 2010-01-04 00:00:00 to 2011-03-08 00:00:00.
Test is from 2011-03-09 00:00:00 to 2012-05-03 00:00:00.
Training results:
RMSE: 0.0008843281173214488
Test results:
RMSE: 1.2315700301726569
Train is from 2010-01-04 00:00:00 to 2012-05-03 00:00:00.
Test is from 2012-05-04 00:00:00 to 2013-07-03 00:00:00.
Training results:
RMSE: 0.0008966339665067281
Test results:
RMSE: 1.4275462043930536
Train is from 2010-01-04 00:00:00 to 2013-07-03 00:00:00.
Test is from 2013-07-05 00:00:00 to 2014-08-29 00:00:00.
Training results:
RMSE: 0.0010277724828230952
Test results:
RMSE: 5.885238289642866
Train is from 2010-01-04 00:00:00 to 2014-08-29 00:00:00.
Test is from 2014-09-02 00:00:00 to 2015-10-27 00:00:00.
Training results:
RMSE: 0.004440281303724738
Test results:
RMSE: 3.149526692201462
Train is from 2010-01-04 00:00:00 to 2015-10-27 00:00:00.
Test is from 2015-10-28 00:00:00 to 2016-12-22 00:00:00.
Training results:
RMSE: 0.010506966137319634
Test results:
RMSE: 5.5270260[...]
|
no_license
|
/ml_kmeans_nb_regression/Assignment_11_9_2020_solution.ipynb
|
mokainzi/machine_learning
| 14 |
<jupyter_start><jupyter_text>### OpenAI Gym
We're going to spend the next several weeks learning algorithms that solve decision processes, so we need some interesting decision problems to test our algorithms on.
That's where OpenAI Gym comes into play. It's a Python library that wraps many classical decision problems, including robot control, video games and board games.
So here's how it works:<jupyter_code>import gym
env = gym.make("MountainCar-v0")
plt.imshow(env.render('rgb_array'))
print("Observation space:", env.observation_space)
print("Action space:", env.action_space)<jupyter_output>Observation space: Box(2,)
Action space: Discrete(3)
<jupyter_text>Note: if you're running this on your local machine, you'll see a window pop up with the image above. Don't close it, just alt-tab away.
### Gym interface
The three main methods of an environment are
* __reset()__ - reset environment to initial state, _return first observation_
* __render()__ - show current environment state (a more colorful version :) )
* __step(a)__ - commit action __a__ and return (new observation, reward, is done, info)
* _new observation_ - an observation right after committing the action __a__
* _reward_ - a number representing your reward for committing action __a__
* _is done_ - True if the MDP has just finished, False if still in progress
* _info_ - some auxiliary stuff about what just happened. Ignore it ~~for now~~.<jupyter_code>obs0 = env.reset()
print("initial observation code:", obs0)
# Note: in MountainCar, observation is just two numbers: car position and velocity
print("taking action 2 (right)")
new_obs, reward, is_done, _ = env.step(2)
print("new observation code:", new_obs)
print("reward:", reward)
print("is game over?:", is_done)
# Note: as you can see, the car has moved to the right slightly (around 0.0005)<jupyter_output>taking action 2 (right)
new observation code: [ -4.02551631e-01 1.12759220e-04]
reward: -1.0
is game over?: False
<jupyter_text>### Play with it
Below is the code that drives the car to the right.
However, it doesn't reach the flag at the far right due to gravity.
__Your task__ is to fix it. Find a strategy that reaches the flag.
You're not required to build any sophisticated algorithms for now, feel free to hard-code :)
_Hint: your action at each step should depend either on __t__ or on __s__._<jupyter_code>
# create env manually to set time limit. Please don't change this.
TIME_LIMIT = 250
env = gym.wrappers.TimeLimit(gym.envs.classic_control.MountainCarEnv(),
max_episode_steps=TIME_LIMIT + 1)
s = env.reset()
actions = {'left': 0, 'stop': 1, 'right': 2}
# prepare "display"
%matplotlib notebook
fig = plt.figure()
ax = fig.add_subplot(111)
fig.show()
for t in range(TIME_LIMIT):
# change the line below to reach the flag
s, r, done, _ = env.step(actions['right'])
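    # One possible strategy that does reach the flag (a sketch, not the only answer): push in the
    # direction the car is already moving (s[1] is the velocity), so each swing builds momentum:
    #   a = actions['right'] if s[1] > 0 else actions['left']
    #   s, r, done, _ = env.step(a)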
#draw game image on display
ax.clear()
ax.imshow(env.render('rgb_array'))
fig.canvas.draw()
if done:
print("Well done!")
break
else:
print("Time limit exceeded. Try again.")
assert s[0] > 0.47
print("You solved it!")<jupyter_output>You solved it!
|
permissive
|
/week1_intro/seminar_gym_interface.ipynb
|
drewnoff/Practical_RL
| 3 |
<jupyter_start><jupyter_text>## Checking null value<jupyter_code>for column in sorted(train.columns):
null_cnt = train[column].isnull().sum()
if null_cnt == False:
continue
ratio = train[column].isnull().sum() / train[column].shape[0]
print('Percentage of null data in column {} : {:.2f}%'.format(column , ratio * 100))
for column in sorted(test.columns):
null_cnt = test[column].isnull().sum()
if null_cnt == False:
continue
ratio = null_cnt / test[column].shape[0]
print('Percentage of null data in column {} : {:.2f}%'.format(column, ratio * 100))
msno.matrix(df=train, figsize=(8,8))
msno.bar(df=train, figsize=(8,8))
f, ax = plt.subplots(1,2, figsize=(8, 4))
train['Survived'].value_counts().plot.pie(ax=ax[0])
sns.countplot('Survived', data=train, ax=ax[1])
plt.show()
survived = train['Survived'].value_counts()
for idx, value in survived.iteritems():
if idx == 0:
print('People who are dead : {:.1f}%'.format((value/survived.sum()) * 100))
else:
print('People who survived : {:.1f}%'.format((value/survived.sum()) * 100))<jupyter_output>People who are dead : 61.6%
People who survived : 38.4%
<jupyter_text>## Exploratory Data Analysis### Pclass<jupyter_code>pclass = 100 * (train[['Pclass', 'Survived']].groupby('Pclass').sum() / train[['Pclass', 'Survived']].groupby('Pclass').count())
pclass.applymap(lambda x: '{:.2f}%'.format(x))
crosstab = pd.crosstab(train['Pclass'], train['Survived'])
crosstab['survived_rate'] = crosstab[1] / (crosstab[1] + crosstab[0])
crosstab['survived_rate'] = crosstab['survived_rate'].apply(lambda x: '{:.1f}%'.format(100*x))
crosstab
fig, ax = plt.subplots(1, 2, figsize=(9, 4))
train['Pclass'].value_counts().plot.bar(ax=ax[1])
sns.countplot('Pclass', data=train, hue='Survived', ax=ax[0])<jupyter_output><empty_output><jupyter_text>### Sex<jupyter_code>fig, ax = plt.subplots(1, 2, figsize=(9, 4))
train[['Sex', 'Survived']].groupby('Sex', as_index=True).mean().plot.bar(ax=ax[0])
sns.countplot('Sex', data=train, hue='Survived', ax=ax[1])
train[['Sex', 'Survived']].groupby('Sex', as_index=True).mean()
pd.crosstab(train['Sex'], train['Survived']).style.background_gradient(cmap='summer_r')<jupyter_output><empty_output><jupyter_text>### Pclass & Sex<jupyter_code>sns.factorplot('Pclass', 'Survived', hue='Sex', data=train)
sns.factorplot('Sex', 'Survived', col='Pclass', data=train, font_scale=3)<jupyter_output><empty_output><jupyter_text>### Age<jupyter_code>train['Age'].describe()
fig, ax = plt.subplots(1,1, figsize=(8,4))
train_ages = train[train['Age'].isnull() == False]  # drop rows with missing Age for now
sns.kdeplot(train_ages[train_ages['Survived'] == 1]['Age'], ax = ax, color='red')
sns.kdeplot(train_ages[train_ages['Survived'] == 0]['Age'], ax = ax, color='green')
plt.figure(figsize=(8,4))
train['Age'][train['Pclass'] == 1].plot(kind='kde', color='red')
train['Age'][train['Pclass'] == 2].plot(kind='kde', color='green')
train['Age'][train['Pclass'] == 3].plot(kind='kde', color='blue')
plt.legend(['Class 1', 'Class 2', 'Class 3'])
print(train['Age'].max())
print(train['Age'].min())
as_dist = [] #age, survival distribution
for i in range(1, 80):
survived = train[train['Age'] <= i]['Survived'].sum() / train[train['Age'] <= i].shape[0]
as_dist.append(100 * survived)
print(as_dist)
plt.plot(as_dist)
plt.title('Survival rate by age (%)')
plt.xlabel('age')
plt.ylabel('survival rate')
plt.show()<jupyter_output>[85.71428571428571, 62.5, 66.66666666666666, 67.5, 70.45454545454545, 70.2127659574468, 68.0, 66.66666666666666, 61.29032258064516, 59.375, 57.35294117647059, 57.971014492753625, 59.154929577464785, 58.44155844155844, 59.036144578313255, 55.00000000000001, 53.98230088495575, 50.35971223021583, 48.170731707317074, 45.81005586592179, 42.64705882352941, 42.42424242424242, 41.86991869918699, 42.59927797833935, 41.19601328903654, 40.75235109717868, 41.839762611275965, 40.88397790055249, 40.625, 40.58679706601467, 40.654205607476634, 41.03139013452915, 41.03671706263499, 41.00418410041841, 41.649899396378274, 42.00385356454721, 41.634980988593156, 41.71322160148976, 41.560798548094375, 41.66666666666667, 41.43356643356643, 41.53846153846154, 41.35593220338983, 41.235392320534224, 41.243862520458265, 40.909090909090914, 40.48, 40.851735015772874, 41.09375, 41.23076923076923, 41.0958904109589, 41.17647058823529, 41.265060240963855, 41.220238095238095, 41.246290801186944, 41.23711340206185, 41.[...]<jupyter_text>### Age & Sex & Pclass<jupyter_code>fig, ax = plt.subplots(1,2,figsize=(16,5))
sns.violinplot('Pclass', 'Age', data=train, hue='Survived', ax=ax[0], split=True)
sns.violinplot('Sex', 'Age', data=train, hue='Survived', ax=ax[1], split=True)<jupyter_output><empty_output><jupyter_text>### Embarked<jupyter_code>sns.countplot('Embarked', data=train, hue='Survived')
rate = train[train['Survived'] == 1]['Embarked'].value_counts() / train['Embarked'].value_counts()
rate = rate.apply(lambda x: (x * 100))
rate.plot.bar()
plt.title('Survival rate by Embarked (%)')
plt.show()
fig, ax = plt.subplots(2,2, figsize=(10,7))
sns.countplot('Embarked', data=train, ax=ax[0,0])
sns.countplot('Embarked', data=train, hue='Sex', ax=ax[0,1])
sns.countplot('Embarked', data=train, hue='Pclass', ax=ax[1,0])
sns.countplot('Embarked', data=train, hue='Survived', ax=ax[1,1])<jupyter_output><empty_output><jupyter_text>## Family<jupyter_code>train['Family'] = train['SibSp'] + train['Parch'] + 1 # 1 = self
test['Family'] = test['SibSp'] + test['Parch'] + 1
print(train['Family'].max(), train['Family'].min())
fig, ax = plt.subplots(1, 3, figsize=(16, 4))
sns.countplot('Family', data=train, ax=ax[0])
ax[0].set_title('Family Count')
sns.countplot('Family', data=train, hue='Survived', ax=ax[1])
ax[1].set_title('Survived vs Family')
train[['Family', 'Survived']].groupby('Family', as_index=True).mean().plot.bar(ax=ax[2])
ax[2].set_title('Survival rate by family size')<jupyter_output><empty_output><jupyter_text>## Fare<jupyter_code>fig, ax = plt.subplots(1,1,figsize=(10,5))
test.loc[test.Fare.isnull(), 'Fare'] = test['Fare'].mean()
train.loc[train.Fare.isnull(), 'Fare'] = train['Fare'].mean()
train['Fare'] = train['Fare'].apply(lambda x: np.log(x) if x > 0 else 0)
sns.distplot(train['Fare'], ax = ax)<jupyter_output><empty_output><jupyter_text>## Cabin<jupyter_code>train['Cabin'].isnull().sum() / train['Cabin'].shape[0] #too high null rate<jupyter_output><empty_output><jupyter_text>## Ticket<jupyter_code>sns.countplot('Survived', data=train, hue='Ticket')<jupyter_output><empty_output>
|
no_license
|
/MachineLearning/Titanic/LeeYouHan_EDA_First_Try.ipynb
|
hygoni/ToyTorch
| 11 |
<jupyter_start><jupyter_text>## xDeepFM : the eXtreme Deep Factorization Machine
This notebook will give you a quick example of how to train an xDeepFM model. xDeepFM [1] is a deep learning-based model that aims at capturing both lower- and higher-order feature interactions for precise recommender systems. Thus it can learn feature interactions more effectively, and manual feature engineering effort can be substantially reduced. To summarize, xDeepFM has the following key properties:
- It contains a component, named CIN, that learns feature interactions in an explicit fashion and in vector-wise level;
- It contains a traditional DNN component that learns feature interactions in an implicit fashion and in bit-wise level.
- The implementation makes this model quite configurable. We can enable different subsets of components by setting hyperparameters like use_Linear_part, use_FM_part, use_CIN_part, and use_DNN_part. For example, by enabling only the use_Linear_part and use_FM_part, we can get a classical FM model.
In this notebook, we test xDeepFM on two datasets: 1) a small synthetic dataset and 2) Criteo dataset<jupyter_code>import pandas as pd
from sklearn.preprocessing import LabelEncoder, MinMaxScaler
from sklearn.model_selection import train_test_split
from deepctr.models import xDeepFM
from deepctr.inputs import SparseFeat, DenseFeat,get_feature_names
import os
import multiprocessing as mp
print('Number of CPU cores:', mp.cpu_count())
import warnings
warnings.filterwarnings('ignore')
# reading criteo_sample data
os.chdir(r'N:\ALGORITHMIC MARKETING\Assignment4')
data = pd.read_csv('data/criteo_sample.txt')
data.head(10)
# categorising the features into sparse/dense feature set
sparse_features = ['C' + str(i) for i in range(1, 27)]
dense_features = ['I'+str(i) for i in range(1, 14)]
# data imputation for missing values
data[sparse_features] = data[sparse_features].fillna('-1', )
data[dense_features] = data[dense_features].fillna(0,)
# creating target variable
target = ['label']
data.head()
# encoding function
def encoding(data,feat,encoder):
data[feat] = encoder.fit_transform(data[feat])
# encoding for categorical features
[encoding(data,feat,LabelEncoder()) for feat in sparse_features]
# Using normalization for dense feature
mms = MinMaxScaler(feature_range=(0,1))
data[dense_features] = mms.fit_transform(data[dense_features])
# creating a 4-dimensional embedding (embedding_dim=4) for every sparse feature
sparse_feature_columns = [SparseFeat(feat, vocabulary_size=data[feat].nunique(),embedding_dim=4) \
for i,feat in enumerate(sparse_features)]
# creating a dense feature column for every dense feature
dense_feature_columns = [DenseFeat(feat, 1) for feat in dense_features]
# features to be used for dnn part of xdeepfm
dnn_feature_columns = sparse_feature_columns + dense_feature_columns
# features to be used for linear part of xdeepfm
linear_feature_columns = sparse_feature_columns + dense_feature_columns
feature_names = get_feature_names(linear_feature_columns + dnn_feature_columns)
dnn_feature_columns
linear_feature_columns
# creating train test splits
train, test = train_test_split(data, test_size=0.2)
train_model_input = {name:train[name].values for name in feature_names}
test_model_input = {name:test[name].values for name in feature_names}
model = xDeepFM(linear_feature_columns, dnn_feature_columns, dnn_hidden_units=(256, 256),\
cin_layer_size=(128, 128), \
cin_split_half=True, cin_activation='relu'\
,l2_reg_linear=1e-05,\
l2_reg_embedding=1e-05, l2_reg_dnn=0, l2_reg_cin=0, \
init_std=0.0001,seed=1024, dnn_dropout=0,dnn_activation='relu', \
dnn_use_bn=False, task='binary')
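# (Hedged note: cin_layer_size sets the width of each Compressed Interaction
#  Network layer, i.e. the explicit interaction part, while dnn_hidden_units
#  sizes the implicit DNN part.)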
model.summary()
train_model_input
#compiling the model
model.compile("adam", "binary_crossentropy",metrics=['binary_crossentropy','accuracy'], )
# training the model
history = model.fit(train_model_input, train[target].values,
batch_size=128, epochs=10, verbose=2, validation_split=0.2, )
#predicting
pred_ans_xdeep = model.predict(test_model_input, batch_size=128)
pred_ans_xdeep[:10]
score = model.evaluate(test_model_input, test[target].values, verbose=0)
print('Test Cross Entropy: ', score[1])
print('Test Accuracy', score[2])
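# (Hedged addition, not in the original notebook: CTR models are commonly also
#  judged by ranking/probability metrics, so AUC and log loss can be computed
#  on the predicted probabilities with scikit-learn.)
from sklearn.metrics import roc_auc_score, log_loss
print('Test AUC: ', round(roc_auc_score(test[target].values.ravel(), pred_ans_xdeep.ravel()), 4))
print('Test LogLoss: ', round(log_loss(test[target].values.ravel(), pred_ans_xdeep.ravel()), 4))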
import matplotlib.pyplot as plt
train_history = history
# list all data in history
print(train_history.history.keys())
# summarize history for accuracy
plt.plot(train_history.history['acc'])
plt.plot(train_history.history['val_acc'])
plt.title('Model Accuracy')
plt.ylabel('accuracy')
plt.xlabel('epoch')
plt.legend(['train', 'test'], loc='upper left')
plt.show()
# summarize history for loss
plt.plot(train_history.history['binary_crossentropy'])
plt.plot(train_history.history['val_binary_crossentropy'])
plt.title('Model Loss')
plt.ylabel('Cross Entropy')
plt.xlabel('epoch')
plt.legend(['train', 'test'], loc='upper left')
plt.show()
model.save('xDeepFM_Model')<jupyter_output><empty_output>
|
no_license
|
/Assignment4/Extreme Deep Factorization/2. ADM_XDeepFM_deepCTR.ipynb
|
AyushiDeval/AyushiDevalGitHub
| 1 |
<jupyter_start><jupyter_text>### Decision trees & model selection
<jupyter_code>import numpy as np
toy_data = np.load('data.npz')
X, y = toy_data['X'], toy_data['y']
print(X.shape, y.shape)
### again to save training time only 10% of the loaded data is used for training.
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.9)
import matplotlib.pyplot as plt
%matplotlib inline
plt.scatter(X_train[:,0],X_train[:,1],c=y_train,cmap='nipy_spectral');<jupyter_output><empty_output><jupyter_text># Decision trees out of the box
DecisionTreeClassifier has a number of parameters:
* max_depth : a limit on tree depth (default : no limit)
* min_samples_split : there should be at least this many samples to split further (default : 2)
* min_samples_leaf : there should be at least this many samples on one side of a split to consider it valid (default : 1).
* criterion : 'gini' or 'entropy' - the impurity measure used to choose splits (default : gini)<jupyter_code>from sklearn.tree import DecisionTreeClassifier
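# (Hedged example: a regularized tree such as
#  DecisionTreeClassifier(max_depth=6, min_samples_leaf=10, criterion='entropy')
#  may generalize better on this toy data than the unrestricted default below.)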
tree = DecisionTreeClassifier()
tree.fit(X_train,y_train)<jupyter_output><empty_output><jupyter_text>Probably a slightly better way of evaluating quality is by including the mean and std of the metric.### Plot decision surface
This function takes your classifier and plots its prediction at each point. Let's see how it works.
(this only works for two dimensions, so it is reasonable to play with the toy dataset before turning to Higgs)<jupyter_code>from sklearn.metrics import accuracy_score
def plot_decision_surface(clf, X, y, plot_step = 0.2, cmap='nipy_spectral', figsize=(12,8)):
"""Plot the decision boundary of clf on X and y, visualize training points"""
plt.figure(figsize=figsize)
x0_grid, x1_grid = np.meshgrid(np.arange(X[:, 0].min() - 1, X[:, 0].max() + 1, plot_step),
np.arange(X[:, 1].min() - 1, X[:, 1].max() + 1, plot_step))
y_pred_grid = clf.predict(np.stack([x0_grid.ravel(), x1_grid.ravel()],axis=1)).reshape(x1_grid.shape)
plt.contourf(x0_grid, x1_grid, y_pred_grid, cmap=cmap, alpha=0.5)
y_pred = clf.predict(X)
plt.scatter(*X[y_pred==y].T,c = y[y_pred==y],
marker='.',cmap=cmap,alpha=0.5,label='correct')
plt.scatter(*X[y_pred!=y].T,c = y[y_pred!=y],
marker='x',cmap=cmap,s=50,label='errors')
plt.legend(loc='best')
print("Accuracy = ",accuracy_score(y, y_pred))
plt.show()<jupyter_output><empty_output><jupyter_text>### Train quality<jupyter_code>plot_decision_surface(tree, X_train, y_train)<jupyter_output>Accuracy = 1.0
<jupyter_text>### Test quality
__Before you run it:__ guess what's going to happen with test accuracy vs train accuracy judging by the train plot?<jupyter_code>plot_decision_surface(tree, X_test, y_test)<jupyter_output>Accuracy = 0.7344019728729964
<jupyter_text>```
```
```
```
```
```
```
```
### We need a better tree!
Try adjusting parameters of DecisionTreeClassifier to improve test accuracy.
* Accuracy >= 0.72 - not bad for a start
* Accuracy >= 0.75 - better, but not enough
* Accuracy >= 0.77 - pretty good
* Accuracy >= 0.78 - great! (probably the best result for a single tree)
Feel free to modify the DecisionTreeClassifier above instead of re-writing everything.
__Note:__ some of the parameters you can tune are under the "Decision trees out of the box" header.## Ensembles
Let's build our own decision tree bagging and see if it works. Implement __`BagOfTrees`__ class below<jupyter_code>class BagOfTrees:
def __init__(self, n_estimators=10, **kwargs):
self.trees = []
for i in range(n_estimators):
self.trees.append(DecisionTreeClassifier(**kwargs))
def fit(self, X, y):
# Fit each of the trees on a random subset of X and y.
# hint: you can select random subsample of data like this:
# >>> ix = np.random.randint(0, len(X), len(X))
# >>> X_sample, y_sample = X[ix], y[ix]
        # one possible implementation, following the hint above:
        for tree in self.trees:
            ix = np.random.randint(0, len(X), len(X))
            tree.fit(X[ix], y[ix])
        return self
def predict(self, X):
trees = self.trees
# Compute predictions of each tree and aggregate them into the ensemble prediction
# Note: you can use tree.predict(X) or tree.predict_proba(X) to get individual predicitons
        # one possible implementation: average the predicted class probabilities
        # over all trees and return the index of the most probable class
        probas = np.mean([tree.predict_proba(X) for tree in trees], axis=0)
        return np.argmax(probas, axis=1)
# once you think you're done, see if your code passes the asserts below
model = BagOfTrees(n_estimators=100, min_samples_leaf=5)
model.fit(X_train,y_train)
print('\n'.join(map(str, model.trees[:2])))
pred = model.predict(X_test[::100])
print("predictions:", pred)
assert isinstance(pred, np.ndarray), "prediction must be a numpy array"
assert str(pred.dtype).startswith('int'), "prediction dtype must be integer (int32/int64)"
assert pred.ndim == 1, "prediction must be a vector (1-dimensional)"
assert len(pred) == len(X_test[::100]), "must predict exactly one answer for each input (expected length %i, got %i)" % (len(X_test[::100]), len(pred))
assert any(model.trees[0].predict(X_train) != model.trees[1].predict(X_train)), "All trees are the same. Did you forget to train each tree on a random part of data?"
for i, tree in enumerate(model.trees[:5]):
print("tree %i individual accuracy = %.5f"%(i+1, accuracy_score(y_test,tree.predict(X_test))))
print("Ensemble accuracy:", accuracy_score(model.predict(X_test), y_test)) # should be >= 0.79
plot_decision_surface(model, X_test, y_test)<jupyter_output><empty_output><jupyter_text>```
```
```
```
```
```
```
```
```
```
```
```
### Using existing ensembles : Random Forest
RandomForest combines bagging and random subspaces: each tree uses a fraction of the training samples, and each split in that tree is chosen among a random subset of the features. This leads to slightly better performance.
__Note:__ try re-running your code a few times and see what happens to accuracy.<jupyter_code>from sklearn.ensemble import RandomForestClassifier
# Task: create and fit a random forest with 100 estimators and at least 5 samples per leaf
model = RandomForestClassifier(n_estimators=100, min_samples_leaf=5)
model.fit(X_train, y_train)
plot_decision_surface(model, X_test, y_test)
acc = accuracy_score(model.predict(X_test), y_test)
assert acc >= 0.792, "acc is below 0.792. Try changing random forest hyperparameters."<jupyter_output><empty_output><jupyter_text>### Using existing ensembles : Gradient Boosting
__Note:__ if you don't have xgboost, use from sklearn.ensemble import GradientBoostingClassifier as XGBClassifier<jupyter_code>from xgboost import XGBClassifier
for n_estimators in range(1,10):
model = XGBClassifier(max_depth=3, learning_rate=0.5, n_estimators=n_estimators)
model.fit(X_train, y_train)
print("n_estimators = ", n_estimators)
plot_decision_surface(model, X_test, y_test)<jupyter_output>n_estimators = 1
Accuracy = 0.7842170160295932
<jupyter_text>### HIGGS data (100K subsample)<jupyter_code>data = np.genfromtxt("/mnt/mlhep2018/datasets/HIGGS.csv",
delimiter=",", dtype=np.float32, max_rows=100000)
X, y = data[:, 1:], data[:, 0]
print(X.shape, y.shape)
from sklearn.model_selection import cross_val_score
dt_cv = cross_val_score(DecisionTreeClassifier(),
X, y,
cv=4, n_jobs=4, scoring="roc_auc")
print(dt_cv.mean(), dt_cv.std())<jupyter_output><empty_output><jupyter_text>### Grid search
__Bonus Quest:__ Find optimal parameters for GradientBoostingClassifier using grid search.
This time please use a special validation set (i.e. don't make decisions based on X_test, y_test).
You can implement a loop that searches over max_depth and learning_rate for a __fixed n_estmators=100__.<jupyter_code>
< YOUR CODE HERE >
final_model = <your_code>
plot_decision_surface(final_model,X_test,y_test)<jupyter_output><empty_output><jupyter_text>```
```
```
```
```
```
```
```
```
```
```
```
```
```
### Automatic ML: model selection
While grid search is reasonably good for parameter optimization, it often spends a lot of time exploring hopeless regions. There are other approaches to finding an optimal model that can result in drastically less CPU time.
The first approach is __bayesian optimization__, whose main idea is to model the probability density function of a score (e.g. accuracy) at every point and explore the points that are most likely to give the optimal solution. The "willingness to try this point" is called the acquisition function.
The most popular bayesian optimization methods are __gaussian process optimization (GP)__ and __tree-structured parzen estimators (TPE)__, differing mainly in the way they estimate the distributions.
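As a rough illustration (hedged: hyperopt is not used in this seminar, which relies on modelgym below, and the search space here is made up), a TPE search over decision-tree hyperparameters could look like this:
```python
from hyperopt import fmin, tpe, hp
from sklearn.model_selection import cross_val_score

def objective(params):
    clf = DecisionTreeClassifier(max_depth=int(params['max_depth']),
                                 min_samples_leaf=int(params['min_samples_leaf']))
    # hyperopt minimizes, so return the negative cross-validated accuracy
    return -cross_val_score(clf, X_train, y_train, cv=3).mean()

space = {'max_depth': hp.quniform('max_depth', 2, 20, 1),
         'min_samples_leaf': hp.quniform('min_samples_leaf', 1, 20, 1)}
best = fmin(fn=objective, space=space, algo=tpe.suggest, max_evals=50)
print(best)
```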
This time we're gonna see TPE in action using the [modelgym](https://github.com/yandexdataschool/modelgym) package.
_warning: the package is still in early development, some things may suddenly break down_<jupyter_code>from modelgym.models import CtBClassifier # CatBoost gradient boosting
from modelgym.utils import XYCDataset, ModelSpace
from modelgym.metrics import Accuracy
from modelgym.trainers import TpeTrainer
trainer = TpeTrainer([ModelSpace(CtBClassifier, {'iterations':25})]) #you can add more models here...
trainer.crossval_optimize_params(Accuracy(), XYCDataset(X_train, y_train), cv=3)
trainer.get_best_results()<jupyter_output><empty_output><jupyter_text>An alternative approach here is __stochastic optimization__. If bayesian optimization is like a scalpel that cuts precisely where it should, stochastic optimization is a shotgun that shoots a bunch of pellets and sees which of them are most successful.
We're gonna see this in action with the help of [tpot](https://github.com/EpistasisLab/tpot/) - a library for hyperparameter optimization via genetic programming.<jupyter_code>from tpot import TPOTClassifier
trainer = TPOTClassifier(population_size=50, generations=10, # genetic algorithm params
cv=3, # 3-fold cross-validation
n_jobs=4, # parallel processes
max_eval_time_mins = 0.5, # model should train in under 30 seconds
verbosity=2,
)
trainer.fit(X_train, y_train)
best_model = trainer.fitted_pipeline_
print("TPOT test accuracy =", accuracy_score(y_test, best_model.predict(X_test)))<jupyter_output><empty_output>
|
no_license
|
/Tutorials/mlhep2018/seminar-02-tree-ensembles-1-bagging-xgboost.ipynb
|
eparrish64/NICADD-ML-Tutorials
| 12 |
<jupyter_start><jupyter_text># Time Series Basics## Correlated Stock Prices
You're interested in the performance of a particular stock. You use the [autocorrelation function](http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.autocorr.html) in Pandas to assess how autocorrelated your stock's values are.
Autocorrelation makes explicit the idea of temporal correlation we discussed previously. Suppose we wanted to see how correlated a stock's prices are with the immediately preceding day's stock prices.
| Day | Price | Price with Lag = 1 | Price with Lag = 2 |
|-----|-------|--------------------|--------------------|
| 1 | 25 | NA | NA |
| 2 | 26 | 25 | NA |
| 3 | 28 | 26 | 25 |
| 4 | 24 | 28 | 26 |
| 5 | 23 | 24 | 28 |
Autocorrelation with a lag of 1 will calculate the correlation between column "Price" and column "Price with Lag = 1." Autocorrelation with a lag of $k$ will calculate the correlation between stock price and the stock price of $k$ days before in a similar manner.
I build a loop that iterates through days (we'll assume our stock price is the closing price at every day) 1 to 365 to assess how correlated a stock price is with the stock price from $i$ days ago. (Sample code seen below.)
```
for i in range(1, 366):
print(df[stock_prices].autocorr(lag=i))
```#### 1. Suppose my highest values of autocorrelation are found when $i = 1, 7, 30, 365$. What do each of these suggest about the performance of this particular stock?Answer: These suggest that our stock on day 𝑡 is very highly correlated with:
the immediately preceding day (day 𝑡−1), the stock one week before (day 𝑡−7),
the stock about one month before (day 𝑡−30), and
the stock one year before (day 𝑡−365).Stock prices vary quite rapidly. Looking at almost any plot of stock price over time, we'll see a very "wiggly" function that moves around erratically. Building a model for this can be difficult.
One way to "de-noise" or "smooth" this is to create a [moving average](http://www.investopedia.com/terms/m/movingaverage.asp) of stock prices. Suppose I wanted to create a moving average of stock prices across $k$ days. In this case, I create a new column that takes the current day and $k-1$ previous days (for $k$ total days) and average the stock prices of these days.
For example, I have a column of stock prices and a column associated with a moving average for three days. Then, my row for Day 5 includes the Day 5 stock price and the average of Day 3, Day 4, and Day 5 stock prices.
| Day | Price | Moving Average k = 3 |
|-----|-------|----------------------|
| 1 | 25 | NA |
| 2 | 26 | NA |
| 3 | 28 | 26.33 |
| 4 | 24 | 26 |
| 5 | 23 | 25 |
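In pandas, this kind of moving average can be computed with `rolling` (a small sketch; the column names are just for illustration):
```python
import pandas as pd

df = pd.DataFrame({'price': [25, 26, 28, 24, 23]})
# a k-day moving average: the current day plus the k-1 previous days
df['ma_3'] = df['price'].rolling(window=3).mean()
print(df)  # Day 3 -> (25 + 26 + 28) / 3 = 26.33, matching the table above
```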
#### 2. As the number of periods $k$ increases, how do I expect my plotted curve to change?**Answer:**
As the number of periods $k$ increases, the curve becomes smoother. Averaging over three days gives a slightly smoother moving-average curve, but increasing $k$ from 10 to 50 to 100 days pushes the curve to become smoother and smoother and eventually almost flat.#### 3. Suppose we use our moving average to predict values of the stock price. As $k$ increases, how is the bias of our predictions affected? **Answer:** As $k$ increases, the model becomes less flexible, which induces larger bias in the predictions.#### 4. As $k$ increases, how is the variance of our predictions affected?**Answer:** Increasing $k$ makes the model less flexible, which reduces the variance of the predictions.## Stock price exploration #### Using the `yfinance` package, download stock data from the past three years for a company you are interested in. <jupyter_code>#!pip install yfinance
import yfinance as yf<jupyter_output><empty_output><jupyter_text>#### Examine the data.<jupyter_code>df_tesla = yf.download('TSLA', start="2017-01-01", end='2020-10-20')
df_tesla.head()<jupyter_output><empty_output><jupyter_text>#### We'll be working with the 'Adj Close' column. Rename that column 'price' and make your DataFrame just that column and the datetime index. <jupyter_code>df_tesla = df_tesla[['Adj Close']]
df_tesla
df_tesla.columns = ['price']
df_tesla.columns<jupyter_output><empty_output><jupyter_text>#### Make a column that is the `price` of the previous day.<jupyter_code>df_tesla['price'].shift(1)
df_tesla['prev_price'] = df_tesla['price'].shift(1)
train = df_tesla[:'2019'].dropna()
test = df_tesla['2020':]<jupyter_output><empty_output><jupyter_text>#### Split the DataFrame into training and test sets so that the test set is the most recent year of data (you can use pandas slicing, scikit-learn, or sktime packages to do this).<jupyter_code>import matplotlib.pyplot as plt<jupyter_output><empty_output><jupyter_text>#### Plot the stock price with different colors for the training and test sets.<jupyter_code>fig, ax = plt.subplots(figsize = (14, 6))
ax.plot_date(train.index, train.price, '--')
ax.plot_date(test.index, test.price, '--')<jupyter_output><empty_output><jupyter_text>#### Find the autocorrelation of the training data.<jupyter_code>train['price'].autocorr()<jupyter_output><empty_output><jupyter_text>#### Plot the autocorrelation using statsmodels `plot_acf`.<jupyter_code>from statsmodels.graphics.tsaplots import plot_acf, plot_pacf
plot_acf(train['price']);<jupyter_output><empty_output><jupyter_text>#### Plot the partial autocorrelation values using statsmodels `plot_pacf`.<jupyter_code>plot_pacf(train['price']);<jupyter_output><empty_output><jupyter_text>#### Do any values show high autocorrelation?**Answer:**
Lags 0 and 1 fall outside the confidence band (blue region) and are therefore statistically significant; the lags that fall inside the blue region are not significant.#### Make a baseline model that is just the last value from the training set.<jupyter_code>baseline = train['price'][-1]<jupyter_output><empty_output><jupyter_text>#### Score it on the test set using MAE<jupyter_code>import numpy as np
preds = np.ones(len(test))*baseline
np.mean(abs(test['price'] - preds))<jupyter_output><empty_output><jupyter_text>#### Using scikit-learn's LinearRegression class, make a model to predict the stock price based on the stock price from the day before. This is a very basic model that doesn't consider trend or seasonality.<jupyter_code>from sklearn.linear_model import LinearRegression
lr = LinearRegression()
lr.fit(train[['prev_price']], train.price)
preds = lr.predict(test[['prev_price']])<jupyter_output><empty_output><jupyter_text>#### Predict, plot, and score on MAE.<jupyter_code>mae = np.mean(abs(test['price'] - preds))<jupyter_output><empty_output><jupyter_text>#### How does the model do?<jupyter_code>fig, ax = plt.subplots(figsize = (14, 6))
ax.plot_date(test.index, test.price, '--', label = 'Truth')
ax.plot_date(test.index, preds, '--', label = 'Prediction')
plt.legend()
plt.title(f'Our Regression Model with MAE: {mae: .5f}', loc = 'left');<jupyter_output><empty_output>
|
no_license
|
/Time-series-basics/10.01-lab-time-series-basics.ipynb
|
SolomonLemma/Python_ML_SQL
| 14 |
<jupyter_start><jupyter_text># Regression Analysis: Seasonal Effects with Sklearn Linear Regression
In this notebook, you will build a SKLearn linear regression model to predict Yen futures ("settle") returns with *lagged* CAD/JPY exchange rate returns. <jupyter_code># Currency pair exchange rates for CAD/JPY
cad_jpy_df = pd.read_csv(
Path("cad_jpy.csv"), index_col="Date", infer_datetime_format=True, parse_dates=True
)
cad_jpy_df.head()
# Trim the dataset to begin on January 1st, 1990
cad_jpy_df = cad_jpy_df.loc["1990-01-01":, :]
cad_jpy_df.head()<jupyter_output><empty_output><jupyter_text># Data Preparation### Returns<jupyter_code># Create a series using "Price" percentage returns, drop any nan"s, and check the results:
# (Make sure to multiply the pct_change() results by 100)
# In this case, you may have to replace inf, -inf values with np.nan"s
returns = cad_jpy_df[['Price']].pct_change()*100
returns = returns.replace([np.inf, -np.inf], np.nan).dropna()  # drop both +inf and -inf, per the note above
cad_jpy_df['Return'] = returns
cad_jpy_df.tail()<jupyter_output><empty_output><jupyter_text>### Lagged Returns <jupyter_code># Create a lagged return using the shift function
cad_jpy_df['Lagged_Return'] = cad_jpy_df['Return'].shift()
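# (Note: .shift() with no argument shifts by one period, so Lagged_Return on a
#  given day is the previous day's Return.)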
cad_jpy_df = cad_jpy_df.dropna()
cad_jpy_df.tail()<jupyter_output><empty_output><jupyter_text>### Train Test Split<jupyter_code># Create a train/test split for the data using 2018-2019 for testing and the rest for training
train = cad_jpy_df[:'2017']
test = cad_jpy_df['2018':]
# Create four dataframes:
# X_train (training set using just the independent variables), X_test (test set of just the independent variables)
# Y_train (training set using just the "y" variable, i.e., "Futures Return"), Y_test (test set of just the "y" variable):
X_train = train['Lagged_Return'].to_frame()
X_test = test['Lagged_Return'].to_frame()
y_train = train['Return']
y_test = test['Return']
# Preview the X_train data
X_train.head()<jupyter_output><empty_output><jupyter_text># Linear Regression Model<jupyter_code># Create a Linear Regression model and fit it to the training data
from sklearn.linear_model import LinearRegression
# Fit a SKLearn linear regression using just the training set (X_train, Y_train):
model = LinearRegression()
model.fit(X_train, y_train)<jupyter_output><empty_output><jupyter_text># Make predictions using the Testing Data
**Note:** We want to evaluate the model using data that it has never seen before, in this case: `X_test`.<jupyter_code># Make a prediction of "y" values using just the test dataset
predictions = model.predict(X_test)
# Assemble actual y data (Y_test) with predicted y data (from just above) into two columns in a dataframe:
results = y_test.to_frame()
results['Predicted Return'] = predictions
# Plot the first 20 predictions vs the true values
results[:20].plot(subplots=True)<jupyter_output><empty_output><jupyter_text># Out-of-Sample Performance
Evaluate the model using "out-of-sample" data (`X_test` and `y_test`)<jupyter_code>from sklearn.metrics import mean_squared_error
# Calculate the mean_squared_error (MSE) on actual versus predicted test "y"
# (Hint: use the dataframe from above)
mse = mean_squared_error(
results["Return"],
results["Predicted Return"]
)
# Using that mean-squared-error, calculate the root-mean-squared error (RMSE):
rmse = np.sqrt(mse)
print(f"Out-of-Sample Root Mean Squared Error (RMSE): {rmse}")<jupyter_output>Out-of-Sample Root Mean Squared Error (RMSE): 0.6445805658569028
<jupyter_text># In-Sample Performance
Evaluate the model using in-sample data (X_train and y_train)<jupyter_code># Construct a dataframe using just the "y" training data:
in_sample_results = y_train.to_frame()
# Add a column of "in-sample" predictions to that dataframe:
in_sample_results["In-sample Predictions"] = model.predict(X_train)
# Calculate in-sample mean_squared_error (for comparison to out-of-sample)
in_sample_mse = mean_squared_error(
in_sample_results["Return"],
in_sample_results["In-sample Predictions"]
)
# Calculate in-sample root mean_squared_error (for comparison to out-of-sample)
in_sample_rmse = np.sqrt(in_sample_mse)
print(f"In-sample Root Mean Squared Error (RMSE): {in_sample_rmse}")<jupyter_output>In-sample Root Mean Squared Error (RMSE): 0.841994632894117
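A quick way to put the two errors side by side (a hedged sketch, not part of the original assignment; it reuses `rmse` and `in_sample_rmse` from above):
```python
comparison = pd.DataFrame({'RMSE': [in_sample_rmse, rmse]},
                          index=['In-sample (train)', 'Out-of-sample (test)'])
print(comparison)
```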
|
no_license
|
/regression_analysis.ipynb
|
joycembika/Time-Series-Analysis
| 8 |
<jupyter_start><jupyter_text># Part 1: Personal 4 day Python & Data learning project
Data was always something that surrounded my projects at school and work. However it was not until I was asked "What is your favorite data cleansing technique?" that I really took a long hard look at how I interacted with data and my true competency with how I looked at data and how I worked with data. With this recognition in mind, I thus set off on a 4 day journey through Python and machine learning with data having had no prior experience. This is by no means the end of the learning journey but a form of inertia to get the ball rolling with my interest in data.
The Airbnb data set used was from http://insideairbnb.com/about.html
Python tutorials from NYU's data science bootcamp, datacamp.com, and independent online tutorials.
During these four days, my objectives are as follows:
1. Learn how to apply fundamental python on top of merely knowing the syntax
2. Learning about the different packages you could use in Python to deal with data
3. Effective ways to clean data and how to justify my choices in dropping/including data
4. How to identify relevant and interesting relationships to study
5. How to effectively visualize data with Python visualization packages
6. (If time allows) to attempt creating a preliminary predictive model## Step 1: importing the necessary packages and importing the data<jupyter_code>import sys
import datetime
import pandas as pd
import matplotlib as ml
import matplotlib.pyplot as plt
import numpy as np
import seaborn as sns
import statsmodels.api as sm
%matplotlib inline<jupyter_output>/Applications/anaconda3/lib/python3.6/site-packages/statsmodels/compat/pandas.py:56: FutureWarning: The pandas.core.datetools module is deprecated and will be removed in a future version. Please use the pandas.tseries module instead.
from pandas.core import datetools
<jupyter_text>Here we import the csv files with San Francisco Airbnb listings information that was downloaded from insideairbnb.com<jupyter_code>full_df = pd.read_csv('sf_airbnb2017.csv', low_memory=False)
full_df.head(10)
print(full_df.shape)<jupyter_output>(8933, 96)
<jupyter_text>ImportantI ran the head and tail functions in order to look at the first 5 and last 5 entries of the data set. I had also tried to look at the column names involved in this data set with .columns. While the columns were listed, a deprecation type warning popped up.
It seemed like I had to set a dtype option which would have required me defining a type for each column like {maximum_nights:int}. Alternatively I could have also set low_memory=False. The reason that such a warning message pops up is that when dtypes aren't initially specified, there is a guessing process involved after reading the data, which requires huge amounts of memory.
As this is a relatively small data set of 8933 rows and 96 columns, I will ignore the warning for now until I have cleaned the data.## Step 2: Initial Data Exploration and Cleaning
It is necessary before cleaning the data to understand the data and what is in there. All the column and column types were logged and further analysis conducted.<jupyter_code>print(full_df.columns)
print(full_df.dtypes)<jupyter_output>id int64
listing_url object
scrape_id int64
last_scraped object
name object
summary object
space object
description object
experiences_offered object
neighborhood_overview object
notes object
transit object
access object
interaction object
house_rules object
thumbnail_url object
medium_url object
picture_url object
xl_picture_url object
host_id int64
host_url object
host_name object
host_since [...]<jupyter_text>### Cleaning data labelling and column names
ImportantWe are lucky enough here to get a data set with properly labelled column names. However, there are many instance when the data column names might come in random forms like "1 review for cleanliness". The most ideal form of column heading seems to be when individual words are separated by an underscore and numbering was removed. The below code is an example of how I learned to do this:<jupyter_code>test = "1 review for cleanliness"
#this splits the number from the text, maxsplit=1 means the data gets split once
test.split(maxsplit=1)
#adding [1] only returns the 2nd element of the array with the text we want
test.split(maxsplit=1)[1]
#using a replace method then adds the underscores that we want
test.split(maxsplit=1)[1].replace(' ','_')<jupyter_output><empty_output><jupyter_text>Assuming that all the columns have the same type of messy data structure and we want to apply this above method to all column names, we could use a for loop to do so like this:<jupyter_code>###create a new array to store the new names
#new_column_names = []
###loop over all the column names and append the reformatted names to the array created
#for var in dataset.columns:
# new_column_names.append(var.split(maxsplit=1)[1].replace(' ','_'))
###rename all the column names with the new column names in new_column_names
#dataset.columns = new_column_names<jupyter_output><empty_output><jupyter_text>### Dropping uncessary columns
AnalysisThis seems to be a relatively comprehensive data set with a lot of different column headings, some of which do not have relevant data, while are repetitive in nature.
The initial plan here is to drop the irrelevant columns first before delving into users with incomplete information.
As such, the below columns are subsequently dropped for the following reasons:
1. Url information is unnecessary and not intuitively related to any other metric: [listing_url, thumbnail_url, medium_url, picture_url, xl_picture_url, host_url, host_thumbnail_url, host_picture_url]
2. Scraping information is constant across the board: [scrape_id, last_scraped]
3. Too many NaN, NA, or missing values for the data to be of use: [host_acceptance_rate, license, jurisdiction]
4. Overlapping/redundant information: [has_availability, maximum_nights, country_code, country]
With the above filter in place, a new reduced data set of 45 factors was selected:<jupyter_code>filtered_df = full_df[['id','name','summary','space','access','interaction','house_rules','host_since','host_response_time','host_is_superhost','host_listings_count','host_verifications','host_has_profile_pic','host_identity_verified','neighbourhood','zipcode','latitude','longitude','property_type','room_type','accommodates','bathrooms','bedrooms','beds','bed_type','amenities','price','security_deposit','cleaning_fee','extra_people','minimum_nights','availability_30','availability_60','availability_90','availability_365','number_of_reviews','review_scores_rating','review_scores_accuracy','review_scores_cleanliness','review_scores_checkin','review_scores_communication','review_scores_location','review_scores_value','instant_bookable','cancellation_policy']]
filtered_df.to_csv('filtered_data.csv')
filtered_df.dtypes<jupyter_output><empty_output><jupyter_text>### Cleaning numeric data and converting string to float
Analysis
Looking over the data types, it would seem that most of them are objects where I would have expected a string value. Some online forums suggested that 'object' was how some string items were classified and I tried to perform some string methods on the "object" variables to see whether they would work.<jupyter_code>print(filtered_df['name'][0].split(' '))<jupyter_output>["Lands'", 'End', 'hideaway']
<jupyter_text>Furthermore, there seems to be some classification errors involved with variables that involve price, namely price, security_deposit, cleaning_fee, and extra_people. This is largely due to the presence of '$' dollar signs. They're likely to be classified as an object right now. As they are currently objects, the replace method will be used to remove the dollar sign and number then can be converted to a numeric value.<jupyter_code>numeric_cols = ['price','security_deposit','cleaning_fee','extra_people']
filtered_df[numeric_cols].head(5)
filtered_df[numeric_cols].dtypes<jupyter_output><empty_output><jupyter_text>We observe that there are NaN values present in some of the listings and it seems reasonable to assume here that the appropriate value should be 0. The type of values should also be float and not object.<jupyter_code>for var in filtered_df[numeric_cols]:
filtered_df.replace({var: {'\$': ''}}, regex=True, inplace=True)
filtered_df.replace({var: {'\,': ''}}, regex=True, inplace=True)
filtered_df[var] = filtered_df[var].astype('float64', copy=False)
filtered_df[var] = filtered_df[var].fillna(0)
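    # (Hedged alternative: pandas can do this in one step per column, e.g.
    #  filtered_df[var] = pd.to_numeric(filtered_df[var].str.replace(r'[$,]', '', regex=True)).fillna(0))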
filtered_df[numeric_cols].dtypes<jupyter_output><empty_output><jupyter_text>ImportantProbably one of the most challenging parts of this data cleansing so far has been attempting to clean the numerical data across multiple columns. I had initially tried to clean the data individually by simply using a replace function for '$' but then realized that there was a type error because commas ',' were still preventing the values from being turned into floats.
Another issue was with attempting to apply the fillna method across multiple columns.
It was also a challenge to find the right regex format that would not only keep all numbers but the decimal point within the string as well.
While I don't think I fully figured out the proper syntax for regex despite having used it in Javascript before, I managed to overcome the difficulty of changing values across multiple columns by creating an array numeric_cols with columns of numerical data and looping through it. Looping through each column heading within the array, I was now able to apply all the replacements through regex and filling all the NaN values with 0.
### Applying datetime package to recode a column
I next moved on to cleaning the data for 'host_since' to make the data more useable. The current data only showed the current date the current host joined in string form and the datetime package was applied to retrieve the date in a more useable format.
While 'years since joining' would have been useful, a more comparable metric would be 'years since joining'. A new column was thus created as 'years_being_host' in which the number of years since joining was calculated, with NA values filled with 0. The final result was then converted to type int64 and the 'host_since' column was dropped.<jupyter_code>#using datetime package and subtracting years from 2018 to find how many years as host
filtered_df['years_being_host'] = 2018 - pd.DatetimeIndex(filtered_df['host_since']).year
#filled NA values with 0 -> indicating 0 years being host
filtered_df['years_being_host'] = filtered_df['years_being_host'].fillna(0)
#changed the type to integer to match with nature of data
filtered_df['years_being_host'] = filtered_df['years_being_host'].astype('int64', copy=False)
#dropped 'host_since' as it wasn't being used anymore
filtered_df.drop(['host_since'], axis = 1, inplace=True)
filtered_df['years_being_host'].head()<jupyter_output>/Applications/anaconda3/lib/python3.6/site-packages/ipykernel_launcher.py:2: SettingWithCopyWarning:
A value is trying to be set on a copy of a slice from a DataFrame.
Try using .loc[row_indexer,col_indexer] = value instead
See the caveats in the documentation: http://pandas.pydata.org/pandas-docs/stable/indexing.html#indexing-view-versus-copy
/Applications/anaconda3/lib/python3.6/site-packages/ipykernel_launcher.py:4: SettingWithCopyWarning:
A value is trying to be set on a copy of a slice from a DataFrame.
Try using .loc[row_indexer,col_indexer] = value instead
See the caveats in the documentation: http://pandas.pydata.org/pandas-docs/stable/indexing.html#indexing-view-versus-copy
after removing the cwd from sys.path.
/Applications/anaconda3/lib/python3.6/site-packages/ipykernel_launcher.py:6: SettingWithCopyWarning:
A value is trying to be set on a copy of a slice from a DataFrame.
Try using .loc[row_indexer,col_indexer] = value instead
See the caveats in the documentatio[...]<jupyter_text>### Columns/Rows with too many missing values
After recoding the numeric and date data, the next thing I noticed was that there were a few columns in which there were a lot of missing values, namely all the rating values and descriptions.
ImportantIt was also at this time that I came across 'apply' in which you can write a function and apply it to the dataframe. In this case, in order to find out the number of missing values, a function was needed to count the number of missing values in each column. While I had initially written a function along the lines of defining a counter, a quick google search returned a more 'elegant' python function in 'sum(x.isnull())' which adds to a counter whenever isnull() returns true.
By applying the num_missing function across axis=0, we are counting down the column and thus finding the missing values in each column. The following table was thus produced:<jupyter_code>def num_missing(x):
return sum(x.isnull())
print (filtered_df.apply(num_missing, axis=0))<jupyter_output>id 0
name 4
summary 307
space 2562
access 3457
interaction 3630
house_rules 2917
host_response_time 2744
host_is_superhost 2
host_listings_count 2
host_verifications 0
host_has_profile_pic 2
host_identity_verified 2
neighbourhood 12
zipcode 113
latitude 0
longitude 0
property_type 0
room_type 0
accommodates 0
bathrooms 31
bedrooms 8
beds 15
bed_type 0
amenities 0
price 0
security_deposit 0
cleaning_fee [...]<jupyter_text>AnalysisThere initially seems to be over a quarter of entries that are missing information in the space, access, interaction, and house_rules categories. However, the lack of information on these fields do not affect an overall visualization as of yet until we really delve into predictive models regarding the use of specific words within these descriptions.
What seems to be data that can be filtered out however, are the listings that are lacking in review ratings. A quick glance at some of the data fields indicate that many of the listings that are missing rating information in one field are very likely to be missing rating information across all the other fields as well. As such, we will have to drop the rows that are missing information in ratings at that seems to be an indication of a lack of reviews or stays and the data might not be indicative of overall trends.<jupyter_code>#array created of every variable that is related to review scores
ratings = ['review_scores_rating','review_scores_accuracy','review_scores_cleanliness','review_scores_checkin','review_scores_communication','review_scores_location','review_scores_value']
#rows that are missing any of the above variables are dropped from the data frame
filtered_df.dropna(axis=0, subset=[ratings], inplace=True)<jupyter_output>/Applications/anaconda3/lib/python3.6/site-packages/ipykernel_launcher.py:4: SettingWithCopyWarning:
A value is trying to be set on a copy of a slice from a DataFrame
See the caveats in the documentation: http://pandas.pydata.org/pandas-docs/stable/indexing.html#indexing-view-versus-copy
after removing the cwd from sys.path.
<jupyter_text>We next look at cleaning up some of the other variables that do not have that many missing values. The way such missing entries are dealt with are detailed below:
1. name and summary: these rows are removed as they make up < 3% of the overall data and listings without names or basic summaries are not representative listings
2. host_is_superhost, host_has_profile_pic, host_identity_verified: we will assume that these individuals do not belong to any of these groups and has a value of false
3. host_listings_count, bathrooms, bedrooms, beds: we will take the mode value of 1, assuming that these hosts are similar to most hosts
4. neighborhood and zipcode: we will ignore these missing values as the exact locations are saved under latitude and longitude<jupyter_code>#drops entries without name or summary
filtered_df.dropna(axis=0,subset=[['name','summary']],inplace=True)
#fills NA values across multiple columns with 'f' or 1 as specified above
values = {'host_is_superhost':'f','host_has_profile_pic':'f','host_identity_verified':'f','host_listings_count':1,'bathrooms':1,'bedrooms':1,'beds':1}
filtered_df.fillna(value = values, inplace=True)
print (filtered_df.apply(num_missing, axis=0))<jupyter_output>/Applications/anaconda3/lib/python3.6/site-packages/ipykernel_launcher.py:2: SettingWithCopyWarning:
A value is trying to be set on a copy of a slice from a DataFrame
See the caveats in the documentation: http://pandas.pydata.org/pandas-docs/stable/indexing.html#indexing-view-versus-copy
/Applications/anaconda3/lib/python3.6/site-packages/pandas/core/generic.py:3660: SettingWithCopyWarning:
A value is trying to be set on a copy of a slice from a DataFrame
See the caveats in the documentation: http://pandas.pydata.org/pandas-docs/stable/indexing.html#indexing-view-versus-copy
self._update_inplace(new_data)
<jupyter_text>The number of entries/rows after this first stage of cleaning is 6720, down from 8933 initially. More data cleaning will be done before a model is built later.<jupyter_code>filtered_df.shape<jupyter_output><empty_output><jupyter_text>
# Step 3: Basic Statistics and VisualizationsNow that the data has been cleaned, more exploratory data analysis and visualizations can be conducted. A different statistical analysis will be run on different variables to learn each method while getting a full view of most of the variables.
### Descriptive Statistics and Histogram
The describe method was used to obtain the basic descriptive statistics for overall review ratings. From the scores below, it can be seen that most of the review scores ratings fall within the 97-100 range with a range of around 95. With even the lowest quartile hitting the 93 mark. The minimum however is a 20, which seems to indicate outliers in terms of ratings.
A histogram was created for the review values and they do indeed show outliers below 40.<jupyter_code>filtered_df['review_scores_rating'].describe()
fig = plt.figure()
ax = fig.add_subplot(1,1,1)
ax.hist(filtered_df['review_scores_rating'], bins=10)
plt.title('Frequency of Review Scores')
plt.xlabel('Rating Values')
plt.ylabel('# of Ratings')
plt.show()<jupyter_output><empty_output><jupyter_text>### Correlation, Covariance, and Scatter Plots
I next wanted to see whether there was a relationship between the number of reviews and the availability of a certain place. My initial hypothesis was that the number of reviews would drive traffic towards that particular listing and the number of days of availability would likely be lower.
An initial glance at the covariance and correlation numbers however indicate a positive relationship, with availabilities rising with the number of reviews. Despite this having contradicted my previous hypothesis, the degree of correlation is still very low, 0.076 for 30 day availability and rising to 0.25 for 365 day availability.
Further regression and confidence interval testing could be conducted to test the relationship.<jupyter_code>avails = ['number_of_reviews','availability_30','availability_60','availability_90','availability_365']
filtered_df[avails].cov()
filtered_df[avails].corr()<jupyter_output><empty_output><jupyter_text>I first learned to plot the scatter graph with matplotlab before utilizing the seaborn package for easier visualization of the confidence intervals.
In accordance with the low positive correlation values, there doesnt seem to be a very clear relationship at a first glance of the scatter plot.
The best fit lines created with the seaborn plots do show a stronger positive correlation as we move from 30 day availabilities to 365 day availabilities. **This stronger correlation should not be exaggerated however as there might not be a clear relationship between 30 and 365 day bookings due to seasonality and other factors.**<jupyter_code>fig = plt.figure()
ax = fig.add_subplot(1,1,1)
ax.scatter(filtered_df['number_of_reviews'],filtered_df['availability_30'])
plt.title('Scatter plot of number of reviews and 30 day availability')
plt.xlabel('Number of Reviews')
plt.ylabel('Availability in 30 day period')
plt.show()
sns.regplot(filtered_df['number_of_reviews'],filtered_df['availability_30'],ci=95)
sns.regplot(filtered_df['number_of_reviews'],filtered_df['availability_365'],ci=95)<jupyter_output><empty_output><jupyter_text>Just to consolidate my knowledge of how to identify relationships between variables, I chose an intuitive pair of variables in "accomodates" and "bedrooms" to look at. The correlation as expected was heavily positive at 0.73 and the resulting scatter plot tended towards the top right hand corner, fitting in with the expected relationship between these two variables.<jupyter_code>filtered_df[['accommodates','bedrooms']].corr()
fig = plt.figure()
ax = fig.add_subplot(1,1,1)
ax.scatter(filtered_df['accommodates'],filtered_df['bedrooms'])
plt.show()<jupyter_output><empty_output><jupyter_text>### Multiple Linear Regression
To test the capability of other packages and to understand more about the relationship between multiple variables and 30 day availability, I used statsmodel.api to construct a simple multiple linear regression model.
Important
While this is certainly a preliminary model, it would seem that the factors chosen below are not very predictive of 30 day availability, with an R-squared value of only 0.025. As only purely numerical variables are included, the result would be much more accurate if we had been able to include nominal values as well such as the type of bed or the type of amenities.
To do so we would have to recode the appropriate labels. For "type_of_bed" for example, a different numerical value can be given to each type of bed: 1 for proper bed and 2 for pull out sofa.
For amenities, it would be difficult to account for the different kinds of amenities combinations available (tv and wifi vs. parking, wifi, and satellite tv). One possible solution would be to create a scoring system that rates these different amenities. It would be even more useful if these overall scores could be modified to fit the preferences of the individual customer.<jupyter_code>y = filtered_df['availability_30']
X = filtered_df[['number_of_reviews','price','accommodates','bathrooms','bedrooms','beds','years_being_host','minimum_nights']]
X = sm.add_constant(X)
model11 = sm.OLS(y, X).fit()
model11.summary()<jupyter_output><empty_output><jupyter_text>### Boxplots
It seemed interesting to see where most of the reviews lie and a boxplot seemed to be an intuitive method of displaying these figures side by side.
While a single boxplot was simple and required the same syntax as some of the above graphs, trying to plot all the reviews side by side as subplots was challenging. It was through looping through an array of review column headings and creating a subplot/boxplot for each one that this was eventually created.
The box plots show that most of the reviews tend within the 8-10 range, with almost all of accuracy, checkin, and communication ratings coming in at 10. <jupyter_code>#array of different reviews
reviews = ['review_scores_accuracy','review_scores_cleanliness','review_scores_checkin','review_scores_communication','review_scores_location','review_scores_value']
fig = plt.figure(figsize=(20,5))
#iterate over all reviews, while keeping a counter for the position where we're at to determine subplot position
for i, review in enumerate(reviews):
plt.subplot(1,len(reviews),i+1)
plt.boxplot(filtered_df[review])
plt.xlabel(review)
#format graphs to be more spaced out and display
plt.tight_layout()
plt.show()<jupyter_output><empty_output><jupyter_text>### Histograms
For nominal values such as "neighbourhood", the best way to compare across all the unique values within this variable is through a bar graph.
Through the histogram, we can see that the highest number of listings by far is the Mission district. Meanwhile, Bernal Heights, Richmond District, SoMa, and NOPA are the next densest areas for listings.<jupyter_code>fig = plt.figure(figsize=(30,5))
ax = fig.add_subplot(1,1,1)
ax.hist(filtered_df['neighbourhood'],bins=112)
plt.setp(ax.get_xticklabels(), rotation=50, horizontalalignment='right')
plt.tight_layout()
plt.title('Number of listings by neighbourhood',fontdict = {'fontsize':30})
plt.xlabel('Neighbourhoods',fontdict={'fontsize':20})
plt.ylabel('# of Listings',fontdict={'fontsize':20})
plt.show()<jupyter_output><empty_output><jupyter_text>It is also possible to do a deeper dive into any particular district of type of housing by specifying the data frame from which to read. In the below case, we take a deeper look at the Mission District with regards to the relationship between 'bedrooms' and 'price'.
While there is a positive correlation that could have been predicted, there are a few outliers in the 1 and 2 bedroom categories. These outliers could be easily explainable by them being "luxury" properties that command the premium fee.<jupyter_code>mission = filtered_df[filtered_df['neighbourhood']=='Mission District']
fig = plt.figure()
ax = fig.add_subplot(1,1,1)
ax.scatter(mission['bedrooms'],mission['price'])
plt.title('Relationship between bedrooms and price in Mission District')
plt.xlabel('# of bedrooms')
plt.ylabel('Price')
plt.show()
filtered_df.to_csv('new_filtered_data.csv')<jupyter_output><empty_output>
|
no_license
|
/Part 1 of 4 Day Learning Python project.ipynb
|
finnqiao/data_learning
| 22 |
<jupyter_start><jupyter_text>Here is California Census Data; I'm using various features of an individual to predict which income class they belong in (>50K or <=50K).
Below is some information about the data:
| Column Name | Type | Description |
|-------------|------|-------------|
| age | Continuous | The age of the individual |
| workclass | Categorical | The type of employer the individual has (government, military, private, etc.). |
| fnlwgt | Continuous | The number of people the census takers believe that observation represents (sample weight). This variable will not be used. |
| education | Categorical | The highest level of education achieved for that individual. |
| education_num | Continuous | The highest level of education in numerical form. |
| marital_status | Categorical | Marital status of the individual. |
| occupation | Categorical | The occupation of the individual. |
| relationship | Categorical | Wife, Own-child, Husband, Not-in-family, Other-relative, Unmarried. |
| race | Categorical | White, Asian-Pac-Islander, Amer-Indian-Eskimo, Other, Black. |
| gender | Categorical | Female, Male. |
| capital_gain | Continuous | Capital gains recorded. |
| capital_loss | Continuous | Capital Losses recorded. |
| hours_per_week | Continuous | Hours worked per week. |
| native_country | Categorical | Country of origin of the individual. |
| income | Categorical | ">50K" or "<=50K", meaning whether the person makes more than \$50,000 annually. |
<jupyter_code>import pandas as pd
census = pd.read_csv("census_data.csv")
census.head()
census['income_bracket'].unique()
def label_fix(label):
if label==' <=50K':
return 0
else:
return 1
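# (Hedged aside: the same labeling can be done without a helper function, e.g.
#  census['income_bracket'] = (census['income_bracket'] == ' >50K').astype(int))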
census['income_bracket'] = census['income_bracket'].apply(label_fix)<jupyter_output><empty_output><jupyter_text>### Perform Train Test Split on census data<jupyter_code>from sklearn.model_selection import train_test_split
x_data = census.drop('income_bracket',axis=1)
y_labels = census['income_bracket']
X_train, X_test, y_train, y_test = train_test_split(x_data,y_labels,test_size=0.3,random_state=101)<jupyter_output><empty_output><jupyter_text>### Create the Feature Columns for tf.esitmator<jupyter_code>census.columns<jupyter_output><empty_output><jupyter_text>** Import Tensorflow **<jupyter_code>import tensorflow as tf<jupyter_output><empty_output><jupyter_text>** Create the tf.feature_columns for the categorical values. Use vocabulary lists or just use hash buckets. **<jupyter_code>gender = tf.feature_column.categorical_column_with_vocabulary_list("gender", ["Female", "Male"])
occupation = tf.feature_column.categorical_column_with_hash_bucket("occupation", hash_bucket_size=1000)
marital_status = tf.feature_column.categorical_column_with_hash_bucket("marital_status", hash_bucket_size=1000)
relationship = tf.feature_column.categorical_column_with_hash_bucket("relationship", hash_bucket_size=1000)
education = tf.feature_column.categorical_column_with_hash_bucket("education", hash_bucket_size=1000)
workclass = tf.feature_column.categorical_column_with_hash_bucket("workclass", hash_bucket_size=1000)
native_country = tf.feature_column.categorical_column_with_hash_bucket("native_country", hash_bucket_size=1000)<jupyter_output><empty_output><jupyter_text>** Create the continuous feature_columns for the continuous values using numeric_column **<jupyter_code>age = tf.feature_column.numeric_column("age")
education_num = tf.feature_column.numeric_column("education_num")
capital_gain = tf.feature_column.numeric_column("capital_gain")
capital_loss = tf.feature_column.numeric_column("capital_loss")
hours_per_week = tf.feature_column.numeric_column("hours_per_week")<jupyter_output><empty_output><jupyter_text>** Put all these variables into a single list with the variable name feat_cols **<jupyter_code>feat_cols = [gender,occupation,marital_status,relationship,education,workclass,native_country,
age,education_num,capital_gain,capital_loss,hours_per_week]<jupyter_output><empty_output><jupyter_text>### create the Input Function
** Batch_size is up to you.**<jupyter_code>input_func=tf.estimator.inputs.pandas_input_fn(x=X_train,y=y_train,batch_size=100,num_epochs=None,shuffle=True)<jupyter_output><empty_output><jupyter_text>#### Create your model with tf.estimator
**Create a LinearClassifier**<jupyter_code>model = tf.estimator.LinearClassifier(feature_columns=feat_cols)<jupyter_output>INFO:tensorflow:Using default config.
WARNING:tensorflow:Using temporary folder as model directory: C:\Users\Marcial\AppData\Local\Temp\tmpyyyqnxgv
INFO:tensorflow:Using config: {'_save_checkpoints_steps': None, '_keep_checkpoint_every_n_hours': 10000, '_session_config': None, '_save_checkpoints_secs': 600, '_model_dir': 'C:\\Users\\Marcial\\AppData\\Local\\Temp\\tmpyyyqnxgv', '_keep_checkpoint_max': 5, '_log_step_count_steps': 100, '_save_summary_steps': 100, '_tf_random_seed': 1}
<jupyter_text>** Train your model on the data, for at least 5000 steps. **<jupyter_code>model.train(input_fn=input_func,steps=5000)<jupyter_output>INFO:tensorflow:Create CheckpointSaverHook.
INFO:tensorflow:Saving checkpoints for 1 into C:\Users\Marcial\AppData\Local\Temp\tmpyyyqnxgv\model.ckpt.
INFO:tensorflow:step = 1, loss = 69.3147
INFO:tensorflow:global_step/sec: 96.9254
INFO:tensorflow:step = 101, loss = 111.492 (1.033 sec)
INFO:tensorflow:global_step/sec: 105.482
INFO:tensorflow:step = 201, loss = 86.7846 (0.948 sec)
INFO:tensorflow:global_step/sec: 104.543
INFO:tensorflow:step = 301, loss = 42.916 (0.957 sec)
INFO:tensorflow:global_step/sec: 104.051
INFO:tensorflow:step = 401, loss = 395.305 (0.961 sec)
INFO:tensorflow:global_step/sec: 103.836
INFO:tensorflow:step = 501, loss = 106.833 (0.963 sec)
INFO:tensorflow:global_step/sec: 103.406
INFO:tensorflow:step = 601, loss = 308.781 (0.967 sec)
INFO:tensorflow:global_step/sec: 103.19
INFO:tensorflow:step = 701, loss = 151.164 (0.969 sec)
INFO:tensorflow:global_step/sec: 102.606
INFO:tensorflow:step = 801, loss = 254.651 (0.975 sec)
INFO:tensorflow:global_step/sec: 104.324
IN[...]<jupyter_text>### Evaluation
** Create a prediction input function. Provide shuffle=False. **<jupyter_code>pred_fn = tf.estimator.inputs.pandas_input_fn(x=X_test,batch_size=len(X_test),shuffle=False)<jupyter_output><empty_output><jupyter_text>** Use model.predict() and pass in your input function. This will produce a generator of predictions, which you can then transform into a list, with list() **<jupyter_code>predictions = list(model.predict(input_fn=pred_fn))<jupyter_output>INFO:tensorflow:Restoring parameters from C:\Users\Marcial\AppData\Local\Temp\tmpyyyqnxgv\model.ckpt-5000
<jupyter_text>** Each item in your list will look like this: **<jupyter_code>predictions[0]<jupyter_output><empty_output><jupyter_text>** Create a list of only the class_ids key values from the prediction list of dictionaries, these are the predictions you will use to compare against the real y_test values. **<jupyter_code>final_preds = []
for pred in predictions:
final_preds.append(pred['class_ids'][0])
final_preds[:10]<jupyter_output><empty_output><jupyter_text>** Import classification_report from sklearn.metrics and then see if you can figure out how to use it to easily get a full report of your model's performance on the test data. **<jupyter_code>from sklearn.metrics import classification_report
print(classification_report(y_test,final_preds))<jupyter_output> precision recall f1-score support
0 0.88 0.92 0.90 7436
1 0.70 0.59 0.64 2333
avg / total 0.84 0.84 0.84 9769
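<jupyter_text>The estimator can also report its own evaluation metrics. A minimal added sketch, assuming the same X_test/y_test split used above (the exact metric keys returned depend on the TensorFlow version):<jupyter_code>eval_input_func = tf.estimator.inputs.pandas_input_fn(x=X_test, y=y_test,
                                                      batch_size=len(X_test),
                                                      num_epochs=1, shuffle=False)
results = model.evaluate(input_fn=eval_input_func)
print(results)<jupyter_output><empty_output>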
|
no_license
|
/class_of_income_prediction_california_census_data.ipynb
|
gayathrikumari/California_census
| 15 |
<jupyter_start><jupyter_text>##Rebuild train and test sets from input files
1. upload interactions_train_alt.csv and interactions_test_alt.csv (the files read by the cell below)
2. execute the next cell<jupyter_code>train_set=pd.read_csv('./interactions_train_alt.csv').dropna()
test_matrix=pd.read_csv('./interactions_test_alt.csv').dropna()
full_data = pd.concat([train_set, test_matrix])
full_matrix = full_data.pivot_table(index='u', columns='i', values='rating', dropna=False)
print(f'Shape of fullMatrix User-Movie-Matrix:\t{full_matrix.shape}')
#recode the 0-5 ratings into boolean feedback: rating > 3 means like (1), otherwise dislike (0)
full_matrix = full_matrix.applymap(lambda x : x if np.isnan(x) else int(x>3))
#replace all missing rating by -1 (as rating are from 0 to 5)
#the -1 will be then used in the model loss function as a mask
full_matrix.fillna(-1, inplace=True)
full_matrix<jupyter_output><empty_output><jupyter_text>#Build the model<jupyter_code>def BuildAEModel(n_recipes, activation=None):
inputs = tf.keras.layers.Input((n_recipes,))
#encoded_layer1 = tf.keras.layers.Dense(8192,activation=None, name='Encoder_Layer_1')(inputs)
#encoded_layer2 = tf.keras.layers.Dense(4096,activation=None, name='Encoder_Layer_2')(inputs)
#encoded_layer3 = tf.keras.layers.Dense(2048,activation=None, name='Encoder_Layer_3')(inputs)
embedded = tf.keras.layers.Dense(256,activation=activation, name='embedder')(inputs)
#decoded_layer1 = tf.keras.layers.Dense(2048,activation=None, name='Decoder_Layer_1')(embedded)
#decoded_layer2 = tf.keras.layers.Dense(4096,activation=None, name='Decoder_Layer_2')(decoded_layer1)
#decoded_layer3 = tf.keras.layers.Dense(8192,activation=None, name='Decoder_Layer_3')(decoded_layer2)
outputs = tf.keras.layers.Dense(n_recipes, activation='linear', name = 'Reconstructor')(embedded)
model = tf.keras.Model(inputs=inputs, outputs = [outputs])
return model<jupyter_output><empty_output><jupyter_text>## define a specific loss function
- to compare predicted and actual ratings only for rated recipes, i.e. entries whose value is not -1 (the -1 placeholder marks unrated items)
- For those recipes, the MSE (or MAE) is computed<jupyter_code>def customMaskedMSE(ytrue, ypred):
mask = tf.not_equal(ytrue, -1)
return tf.keras.backend.mean(tf.keras.backend.square(tf.boolean_mask(ytrue - ypred, mask)))
def customMaskedMAE(ytrue, ypred):
mask = tf.not_equal(ytrue, -1)
return tf.keras.backend.mean(tf.keras.backend.abs(tf.boolean_mask(ytrue - ypred, mask)))<jupyter_output><empty_output><jupyter_text>## start the training<jupyter_code>#training of model on full data set
my_model = BuildAEModel(full_matrix.shape[1], 'relu')
my_model.summary()
adam = tf.keras.optimizers.Adam(0.001)
my_model.compile('adam',loss=customMaskedMSE)
hist = my_model.fit(full_matrix.values, full_matrix.values,
epochs=NB_EPOCHS, batch_size=BATCH_SIZE)
plt.plot(hist.history['loss'][4:])
plt.xlabel('Epoch')
plt.ylabel('Loss')
plt.show()
#evaluation on the train set
my_model.evaluate(full_matrix.values, full_matrix.values, verbose=1)<jupyter_output>
6384/1 [===============================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================[...]<jupyter_text># Save model on google drive for local evaluation<jupyter_code>from google.colab import drive
drive.mount('/content/drive')
my_model.save("/content/drive/My Drive/ML/ae_v3.h5",include_optimizer=False)<jupyter_output><empty_output><jupyter_text>##compare predictions with test data for a user
This is for quick analysis, refer to notebook DisplayReco for better analysis<jupyter_code>#getting user train ratings
#id = 24240
id=33
user = full_matrix[full_matrix.index == id].T
user_ratings = user[user[id]!=-1].T
display(user_ratings)
#get predictions from model
user_train = full_matrix[full_matrix.index == id].values
preds = my_model.predict(user_train, verbose=1)
preds = pd.DataFrame(preds, columns = full_matrix.columns)
#user mse
np.square(preds[user_ratings.columns].values - user_ratings.values).mean()
#user prediction for train recipes
preds[user_ratings.columns]
#top 10 recipes
reco = preds[set(preds.columns) - set (user_ratings.columns)].T.sort_values([0],ascending=False)
reco_top10 = reco.head(10)
display(reco_top10)
reco_top10.index
<jupyter_output><empty_output>
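<jupyter_text>A minimal added sketch of reloading the saved file for the local evaluation mentioned above. Because the model was saved with `include_optimizer=False` and uses a custom loss, the loss has to be supplied again before evaluating:<jupyter_code>reloaded = tf.keras.models.load_model("/content/drive/My Drive/ML/ae_v3.h5",
                                      custom_objects={'customMaskedMSE': customMaskedMSE},
                                      compile=False)
reloaded.compile('adam', loss=customMaskedMSE)
reloaded.evaluate(full_matrix.values, full_matrix.values, verbose=0)<jupyter_output><empty_output>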
|
no_license
|
/src/ae/AutoEncoderColabv3.ipynb
|
AlexDuvalinho/Cooking-Recipe-Reco
| 6 |
<jupyter_start><jupyter_text># UPAR Circle HeatmapThis notebook takes the genes that are hits from the Brca NCI-Nature_2016 UPAR pathway and maps them on a large circle heatmap. <jupyter_code>import pandas as pd
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
import scipy.stats
import re
import sys
import statsmodels.stats.multitest
import gseapy as gp
from gseapy.plot import barplot, dotplot
import cptac
import cptac.utils as u
import plot_utils as p
import statsmodels.stats.multitest
from bokeh.palettes import RdBu
from bokeh.models import LinearColorMapper, ColumnDataSource, ColorBar
from bokeh.models.ranges import FactorRange
from bokeh.plotting import figure, show
from bokeh.io import output_notebook, export_png, export_svgs
from bokeh.layouts import row
import math as math
def plotCircleHeatMap ( df, circle_var, color_var, x_axis, y_axis,plot_width= 1000, plot_height = 650, x_axis_lab = "no_label", y_axis_lab = "", show_plot = True, save_png = "plot.png"):
    # circle_var is designed for p-values: point sizes are -ln(p-value), scaled by 3 below
    # (two helper columns, 'size2' and 'size', are added to the dataframe for this)
df["size2"] = df[circle_var].apply(lambda x: -1*(np.log(x)))
df['size'] = (df["size2"])*3
#find values to set color bar min/ max as
maxval = df[color_var].max()
minval = df[color_var].min()
if maxval > abs(minval):
minval = maxval * -1
if maxval < abs(minval):
maxval = minval * -1
colors = list((RdBu[9]))
exp_cmap = LinearColorMapper(palette=colors, low = minval, high = maxval)
p = figure(x_range = FactorRange(), y_range = FactorRange(), plot_width= plot_width,
plot_height=plot_height,
toolbar_location=None, tools="hover")
p.scatter(x_axis,y_axis,source=df, fill_alpha=1, line_width=0, size="size",
fill_color={"field":color_var, "transform":exp_cmap})
p.x_range.factors = sorted(df[x_axis].unique().tolist())
p.y_range.factors = sorted(df[y_axis].unique().tolist(), reverse = True)
p.xaxis.major_label_orientation = math.pi/2
if (x_axis_lab != "no_label" ):
p.xaxis.axis_label = x_axis_lab
if (x_axis_lab != "no_label" ):
p.yaxis.axis_label = y_axis_lab
bar = ColorBar(color_mapper=exp_cmap, location=(0,0))
p.add_layout(bar, "right")
# Create Circle Legend
circle_legend = create_circle_legend(df, circle_var, color_var)
if show_plot:
output_notebook()
show(row(p, circle_legend))
if save_png != "plot.png":
export_png(p, filename= save_png)
'''
@Param df: Dataframe. Same as df passed to plotCircleHeatMap.
@Param lowest_pval: Float. Lowest p-value to include in the legend.
@Param highest_pval: Float. Highest p-value to include in the legend.
Returns: df to be used in creating the circle legend.
'''
def create_circle_legend_df(lowest_pval = 1e-6, highest_pval = .05):
lowest_pval_str = "{:.1e}".format(lowest_pval, '.2f')
med_pval_str = "{:.1e}".format(lowest_pval * float(100), '.2f')
highest_pval_str = "{:.1e}".format(highest_pval, '.2f')
data = {'P_Value': [lowest_pval, (lowest_pval * float(100)), highest_pval],
'y_axis': [lowest_pval_str, med_pval_str, highest_pval_str],
'x_axis': ['', '', ''],
'Correlation': [.5, .5, .5]}
fake_df = pd.DataFrame (data, columns = ['x_axis', 'y_axis', 'P_Value', "Correlation"])
fake_df["size2"] = fake_df['P_Value'].apply(lambda x: -1*(np.log(x)))
fake_df['size'] = (fake_df["size2"])*3
return fake_df
'''
@Param df: Dataframe. Same as df passed to plotCircleHeatMap.
@Param circle_var: Column Label. Same as passed to plotCircleHeatMap.
@Param color_var: Column Label. Same as passed to plotCircleHeatMap.
@Param x_axis: Column Label. Used on the x-axis.
@Param y_axis: Column Label. Used on the y-axis.
@Param lowest_pval: Float. Lowest p-value to include in the legend.
@Param highest_pval: Float. Highest p-value to include in the legend.
Returns: df to be used in creating the circle legend.
'''
def create_circle_legend(df, circle_var, color_var, x_axis = 'x_axis', y_axis = 'y_axis',
lowest_pval = 1e-6, highest_pval = .05, plot_height = 200, plot_width = 120):
# Use the smallest pval
if df[circle_var].min() < lowest_pval:
lowest_pval = df[circle_var].min()
circle_df = create_circle_legend_df(lowest_pval, highest_pval)
circle = figure(x_range = FactorRange(), y_range = FactorRange(), plot_width= plot_width,
plot_height=plot_height, toolbar_location=None, tools="hover")
circle.scatter(x_axis, y_axis, source = circle_df, fill_alpha=1, line_width=0, size="size")
circle.x_range.factors = sorted(circle_df[x_axis].unique().tolist())
circle.y_range.factors = sorted(circle_df[y_axis].unique().tolist(), reverse = True)
circle.xaxis.major_label_orientation = math.pi/2
circle.xaxis.axis_label = 'FDR P-Values'
return circle
<jupyter_output><empty_output><jupyter_text>Load df with all of the genes that are FDR significant. Then get list of just the gene names and use them to run a GSEA. <jupyter_code>prot_FDR = pd.read_csv("../Step3.1_Pearson_dfs_by_cancer/csv_files/Brca_EGFR_all_pearson_FDR.csv")
df_FDR= prot_FDR.drop(['Unnamed: 0'], axis=1)
df_FDR = df_FDR.set_index("Comparison")
df1_transposed = df_FDR.T
df1_transposed
brca_prot = df1_transposed.columns.values.tolist()
brca_genes = []
for gene in brca_prot :
brca_genes.append((re.sub("_proteomics", "", gene)))
len(brca_genes)<jupyter_output><empty_output><jupyter_text>Run GSEA using reactome 2016 set<jupyter_code>brca_enr = gp.enrichr(gene_list = brca_genes, description='Tumor_partition', gene_sets='NCI-Nature_2016',
outdir='test/enrichr_kegg')
brca_enr.res2d.head(2)
#Get append version of the df with all cancer type, fdr sig trans results
df_FDR_append = pd.read_csv("../Step3.2_combining_pearson_dfs/csv_files/pancan_EGFR_pearson_sig_all_prot_append_FDR.csv")
df_FDR_append = df_FDR_append.drop(['Unnamed: 0'], axis=1)
#get just the upa genes
brca_df = brca_enr.res2d
upa = brca_df.iloc[0,9]
upa = upa.split(';')
upa.remove("EGFR")
len(upa)
#filter down df with just upa genes
upa_column_names = []
for gene in upa:
gene += "_proteomics"
upa_column_names.append(gene)
df_FDR_upa = df_FDR_append[df_FDR_append.Comparison.isin(upa_column_names)]
df_FDR_upa = df_FDR_upa.replace(to_replace ='_proteomics', value = '', regex = True)
#Make plot using plot utils
plotCircleHeatMap(df_FDR_upa, "P_value","Correlation","Comparison","Cancer Type",plot_width= 1000, plot_height = 650, x_axis_lab= "Proteomics")
luad_FDR = pd.read_csv("../Step3.1_Pearson_dfs_by_cancer/csv_files/Luad_EGFR_all_pearson_FDR.csv")
df_FDR= luad_FDR.drop(['Unnamed: 0'], axis=1)
df_FDR = df_FDR.set_index("Comparison")
df1_transposed = df_FDR.T
df1_transposed
luad_prot = df1_transposed.columns.values.tolist()
luad_genes = []
for gene in luad_prot :
luad_genes.append((re.sub("_proteomics", "", gene)))
len(luad_genes)
luad_enr = gp.enrichr(gene_list = luad_genes, description='Tumor_partition', gene_sets='Reactome_2016',
outdir='test/enrichr_kegg')
luad_enr.res2d.head(7)
#get just the upa genes
luad_df = luad_enr.res2d
genes = luad_df.iloc[3,9]
genes = genes.split(';')
genes.remove("GRB2")
len(genes)
genes
#filter down df with just upa genes
pathway_names = []
for gene in genes:
gene += "_proteomics"
pathway_names.append(gene)
df_FDR_pathway = df_FDR_append[df_FDR_append.Comparison.isin(pathway_names)]
df_FDR_pathway
#Make plot using plot utils
p.plotCircleHeatMap(df_FDR_pathway, "P_value","Correlation","Comparison","Cancer Type",plot_width= 1000, plot_height = 650)<jupyter_output>/Users/Lindsey/anaconda3/lib/python3.7/site-packages/plot_utils/__init__.py:40: SettingWithCopyWarning:
A value is trying to be set on a copy of a slice from a DataFrame.
Try using .loc[row_indexer,col_indexer] = value instead
See the caveats in the documentation: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy
df["size2"] = df[circle_var].apply(lambda x: -1*(np.log(x)))
/Users/Lindsey/anaconda3/lib/python3.7/site-packages/plot_utils/__init__.py:41: SettingWithCopyWarning:
A value is trying to be set on a copy of a slice from a DataFrame.
Try using .loc[row_indexer,col_indexer] = value instead
See the caveats in the documentation: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy
df['size'] = (df["size2"])*3
|
no_license
|
/EGFR/old_work/Step3.5_Make_Figure3_figures/UPAR_pathway.ipynb
|
PayneLab/WhenMutationsDontMatter
| 3 |
<jupyter_start><jupyter_text>## [Assignment focus]
Use the linear regression models in scikit-learn to train on various datasets. Make sure you understand the **data types** being fed into the model for training, and also understand the meaning of each model parameter.## Assignment
Try using other datasets from sklearn.datasets (wine, boston, ...) to train your own linear regression model.### HINT: Pay attention to the type of the label and check whether the dataset's target is classification or regression, then train with the appropriate model!<jupyter_code>import numpy as np
import matplotlib.pyplot as plt
from sklearn import datasets, linear_model
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error, r2_score, accuracy_score
# Wine logistic regression
wine = datasets.load_wine()
X = wine.data[:, np.newaxis, 6]
print("Data shape: ", X.shape)
# Split into training / test sets
x_train, x_test, y_train, y_test = train_test_split(X, wine.target, test_size=0.1, random_state=1)
# Build the model
logreg = linear_model.LogisticRegression(solver ='lbfgs', multi_class = 'auto' , max_iter = 10000)
# Train the model
logreg.fit(x_train, y_train)
y_pred = logreg.predict(x_test)
# Inspect the fitted model's coefficients
print('Coefficients: ', logreg.coef_)
# Gap between predicted and actual values, measured with MSE
print("Mean squared error: %.2f"
      % mean_squared_error(y_test, y_pred))
# Score the predictions on the test set
acc = accuracy_score(y_test, y_pred)
print("Accuracy: ", acc)
plt.scatter(x_test, y_test, color='black')
plt.plot(x_test, y_pred, color='blue', linewidth=3)
plt.show()
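# (Added sketch) Following the HINT above: inspect the target before choosing a model.
# wine.target holds a small set of integer class labels, so this is a classification task,
# which is why accuracy (not MSE) is the more meaningful metric here.
print("wine target classes:", np.unique(wine.target))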
from sklearn.datasets import load_boston
#boston linear regression
# 為方便視覺化,我們只使用資料集中的 1 個 feature (column)
boston = datasets.load_boston()
X = boston.data[:, np.newaxis, 6]
print("Data shape: ", boston.data.shape)
# 切分訓練集/測試集
x_train, x_test, y_train, y_test = train_test_split(X, boston.target, test_size=0.1, random_state=1)
# 建立一個線性回歸模型
regr = linear_model.LinearRegression()
# 將訓練資料丟進去模型訓練
regr.fit(x_train, y_train)
# 將測試資料丟進模型得到預測結果
y_pred = regr.predict(x_test)
# 可以看回歸模型的參數值
print('Coefficients: ', regr.coef_)
# 預測值與實際值的差距,使用 MSE
print("Mean squared error: %.2f"
% mean_squared_error(y_test, y_pred))
# 畫出回歸模型與實際資料的分佈
plt.scatter(x_test, y_test, color='black')
plt.plot(x_test, y_pred, color='blue', linewidth=3)
plt.show()
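# (Added check) r2_score is imported above but never used; a quick goodness-of-fit
# check on these Boston predictions could be:
print("R2 score: %.2f" % r2_score(y_test, y_pred))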
from sklearn.datasets import load_breast_cancer
#breast_cancer linear regression
# 為方便視覺化,我們只使用資料集中的 1 個 feature (column)
breast_cancer = load_breast_cancer()
X = breast_cancer.data[:, np.newaxis, 6]
print("Data shape: ", breast_cancer.data.shape)
# 切分訓練集/測試集
x_train, x_test, y_train, y_test = train_test_split(X, breast_cancer.target, test_size=0.1, random_state=1)
# 建立一個線性回歸模型
regr = linear_model.LinearRegression()
# 將訓練資料丟進去模型訓練
regr.fit(x_train, y_train)
# 將測試資料丟進模型得到預測結果
y_pred = regr.predict(x_test)
# 可以看回歸模型的參數值
print('Coefficients: ', regr.coef_)
# 預測值與實際值的差距,使用 MSE
print("Mean squared error: %.2f"
% mean_squared_error(y_test, y_pred))
# 畫出回歸模型與實際資料的分佈
plt.scatter(x_test, y_test, color='black')
plt.plot(x_test, y_pred, color='blue', linewidth=3)
plt.show()
#breast_cancer Logistics regression
# 切分訓練集/測試集
x_train, x_test, y_train, y_test = train_test_split(breast_cancer.data, breast_cancer.target, test_size=0.1, random_state=4)
# 建立模型
logreg = linear_model.LogisticRegression(solver ='lbfgs', multi_class = 'auto', max_iter = 10000)
# 訓練模型
logreg.fit(x_train, y_train)
# 預測測試集
y_pred = logreg.predict(x_test)
acc = accuracy_score(y_test, y_pred)
print("Accuracy: ", acc)<jupyter_output>Accuracy: 0.8596491228070176
|
permissive
|
/HomeWork/Day_038_HW.ipynb
|
rugl/3rd-ML100Days
| 1 |
<jupyter_start><jupyter_text># Springboard Regression Case Study - The Red Wine Dataset - Tier 3Welcome to the Springboard Regression case study! Please note: this is ***Tier 3*** of the case study.
This case study was designed for you to **use Python to apply the knowledge you've acquired in reading *The Art of Statistics* (hereinafter *AoS*) by Professor Spiegelhalter**. Specifically, the case study will get you doing regression analysis; a method discussed in Chapter 5 on p.121. It might be useful to have the book open at that page when doing the case study to remind you of what it is we're up to (but bear in mind that other statistical concepts, such as training and testing, will be applied, so you might have to glance at other chapters too).
The aim is to ***use exploratory data analysis (EDA) and regression to predict alcohol levels in wine with a model that's as accurate as possible***.
We'll try a *univariate* analysis (one involving a single explanatory variable) as well as a *multivariate* one (involving multiple explanatory variables), and we'll iterate together towards a decent model by the end of the notebook. The main thing is for you to see how regression analysis looks in Python and jupyter, and to get some practice implementing this analysis.
Throughout this case study, **questions** will be asked in the markdown cells. Try to **answer these yourself in a simple text file** when they come up. Most of the time, the answers will become clear as you progress through the notebook. Some of the answers may require a little research with Google and other basic resources available to every data scientist.
For this notebook, we're going to use the red wine dataset, wineQualityReds.csv. Make sure it's downloaded and sitting in your working directory. This is a very common dataset for practicing regression analysis and is actually freely available on Kaggle, [here](https://www.kaggle.com/piyushgoyal443/red-wine-dataset).
You're pretty familiar with the data science pipeline at this point. This project will have the following structure:
**1. Sourcing and loading**
- Import relevant libraries
- Load the data
- Exploring the data
- Choosing a dependent variable
**2. Cleaning, transforming, and visualizing**
- Visualizing correlations
**3. Modeling**
- Train/Test split
- Making a Linear regression model: your first model
- Making a Linear regression model: your second model: Ordinary Least Squares (OLS)
- Making a Linear regression model: your third model: multiple linear regression
- Making a Linear regression model: your fourth model: avoiding redundancy
**4. Evaluating and concluding**
- Reflection
- Which model was best?
- Other regression algorithms### 1. Sourcing and loading#### 1a. Import relevant libraries <jupyter_code># Import relevant libraries and packages.
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns # For all our visualization needs.
import statsmodels.api as sm # Cross-sectional models and methods
from statsmodels.graphics.api import abline_plot # What does this do? Find out and type here.
from sklearn.metrics import mean_squared_error, r2_score # What does this do? Find out and type here.
from sklearn.model_selection import train_test_split # What does this do? Find out and type here.
from sklearn import linear_model, preprocessing # What does this do? Find out and type here.
import warnings # For handling error messages.
# Don't worry about the following two instructions: they just suppress warnings that could occur later.
warnings.simplefilter(action="ignore", category=FutureWarning)
warnings.filterwarnings(action="ignore", module="scipy", message="^internal gelsd")<jupyter_output><empty_output><jupyter_text>#### 1b. Load the data<jupyter_code># Load the data.
df = pd.read_csv('wineQualityReds.csv')<jupyter_output><empty_output><jupyter_text>#### 1c. Exploring the data<jupyter_code># Check out its appearance.
df.head()
# Another very useful method to call on a recently imported dataset is .info(). Call it here to get a good
# overview of the data
df.info()<jupyter_output><class 'pandas.core.frame.DataFrame'>
RangeIndex: 1599 entries, 0 to 1598
Data columns (total 13 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 Unnamed: 0 1599 non-null int64
1 fixed.acidity 1599 non-null float64
2 volatile.acidity 1599 non-null float64
3 citric.acid 1599 non-null float64
4 residual.sugar 1599 non-null float64
5 chlorides 1599 non-null float64
6 free.sulfur.dioxide 1599 non-null float64
7 total.sulfur.dioxide 1599 non-null float64
8 density 1599 non-null float64
9 pH 1599 non-null float64
10 sulphates 1599 non-null float64
11 alcohol 1599 non-null float64
12 quality 1599 non-null int64
dtypes: float64(11), int64(2)
memory usage: 162.5 KB
<jupyter_text>What can you infer about the nature of these variables, as output by the info() method?
They are all numerical and nonempty.
Which variables might be suitable for regression analysis, and why? For those variables that aren't suitable for regression analysis, is there another type of statistical modeling for which they are suitable?
All but 'Unnamed: 0' seem to be good candidates for regression analysis.<jupyter_code># We should also look more closely at the dimensions of the dataset.
df.shape<jupyter_output><empty_output><jupyter_text>#### 1d. Choosing a dependent variableWe now need to pick a dependent variable for our regression analysis: a variable whose values we will predict.
'Quality' seems to be as good a candidate as any. Let's check it out. One of the quickest and most informative ways to understand a variable is to make a histogram of it. This gives us an idea of both the center and spread of its values. <jupyter_code>plt.hist(df['quality'],bins = len(df.quality.unique()))<jupyter_output><empty_output><jupyter_text>We can see so much about the quality variable just from this simple visualization. Answer yourself: what value do most wines have for quality? What is the minimum quality value below, and the maximum quality value? What is the range? Remind yourself of these summary statistical concepts by looking at p.49 of the *AoS*.
Most quality values are 5 \
Minimum value is 3\
Max value is 8\
The values range from 3 to 8.
But can you think of a problem with making this variable the dependent variable of regression analysis? Remember the example in *AoS* on p.122 of predicting the heights of children from the heights of parents? Take a moment here to think about potential problems before reading on.
The issue is this: quality is a *discrete* variable, in that its values are integers (whole numbers) rather than floating point numbers. Thus, quality is not a *continuous* variable. But this means that it's actually not the best target for regression analysis.
Before we dismiss the quality variable, however, let's verify that it is indeed a discrete variable with some further exploration. <jupyter_code># Get a basic statistical summary of the variable
df.quality.describe()
# What do you notice from this summary?
# Get a list of the values of the quality variable, and the number of occurrences of each.
df.quality.value_counts()<jupyter_output><empty_output><jupyter_text>The outputs of the describe() and value_counts() methods are consistent with our histogram, and since there are just as many values as there are rows in the dataset, we can infer that there are no NAs for the quality variable.
But scroll up again to when we called info() on our wine dataset. We could have seen there, already, that the quality variable had int64 as its type. As a result, we had sufficient information, already, to know that the quality variable was not appropriate for regression analysis. Did you figure this out yourself? If so, kudos to you!
The quality variable would, however, conduce to proper classification analysis. This is because, while the values for the quality variable are numeric, those numeric discrete values represent *categories*; and the prediction of category-placement is most often best done by classification algorithms. You saw the decision tree output by running a classification algorithm on the Titanic dataset on p.168 of Chapter 6 of *AoS*. For now, we'll continue with our regression analysis, and continue our search for a suitable dependent variable.
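As a purely illustrative aside (an added sketch, not part of the regression flow below), a minimal classification model for quality could look like this, using the columns reported by info() above:<jupyter_code>from sklearn.tree import DecisionTreeClassifier
Xq = df.drop(columns=['quality', 'Unnamed: 0'])
yq = df['quality']
Xq_train, Xq_test, yq_train, yq_test = train_test_split(Xq, yq, test_size=0.25, random_state=123)
quality_tree = DecisionTreeClassifier(max_depth=3, random_state=123).fit(Xq_train, yq_train)
print(quality_tree.score(Xq_test, yq_test))<jupyter_output><empty_output><jupyter_text>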
Now, since the rest of the variables of our wine dataset are continuous, we could — in theory — pick any of them. But that does not mean that they are all equally suitable choices. What counts as a suitable dependent variable for regression analysis is determined not just by *intrinsic* features of the dataset (such as data types, number of NAs etc) but by *extrinsic* features, such as, simply, which variables are the most interesting or useful to predict, given our aims and values in the context we're in. Almost always, we can only determine which variables are sensible choices for dependent variables with some **domain knowledge**.
Not all of you might be wine buffs, but one very important and interesting quality in wine is [acidity](https://waterhouse.ucdavis.edu/whats-in-wine/fixed-acidity). As the Waterhouse Lab at the University of California explains, 'acids impart the sourness or tartness that is a fundamental feature in wine taste. Wines lacking in acid are "flat." Chemically the acids influence titrable acidity which affects taste and pH which affects color, stability to oxidation, and consequantly the overall lifespan of a wine.'
If we cannot predict quality, then it seems like **fixed acidity** might be a great option for a dependent variable. Let's go for that.So if we're going for fixed acidity as our dependent variable, what we now want to get is an idea of *which variables are related interestingly to that dependent variable*.
We can call the .corr() method on our wine data to look at all the correlations between our variables. As the [documentation](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.corr.html) shows, the default correlation coefficient is the Pearson correlation coefficient (p.58 and p.396 of the *AoS*); but other coefficients can be plugged in as parameters. Remember, the Pearson correlation coefficient shows us how close to a straight line the data-points fall, and is a number between -1 and 1. <jupyter_code># Call the .corr() method on the wine dataset
df.corr()<jupyter_output><empty_output><jupyter_text>Ok - you might be thinking, but wouldn't it be nice if we visualized these relationships? It's hard to get a picture of the correlations between the variables without anything visual.
Very true, and this brings us to the next section.### 2. Cleaning, Transforming, and Visualizing #### 2a. Visualizing correlations
The heading of this stage of the data science pipeline ('Cleaning, Transforming, and Visualizing') doesn't imply that we have to do all of those operations in *that order*. Sometimes (and this is a case in point) our data is already relatively clean, and the priority is to do some visualization. Normally, however, our data is less sterile, and we have to do some cleaning and transforming first prior to visualizing. Now that we've chosen fixed acidity as our dependent variable for regression analysis, we can begin by plotting the pairwise relationships in the dataset, to check out how our variables relate to one another.<jupyter_code># Make a pairplot of the wine data
sns.pairplot(df)<jupyter_output><empty_output><jupyter_text>If you've never executed your own Seaborn pairplot before, just take a moment to look at the output. They certainly output a lot of information at once. What can you infer from it? What can you *not* justifiably infer from it?
... All done?
Here's a couple things you might have noticed:
- a given cell value represents the correlation that exists between two variables
- on the diagonal, you can see a bunch of histograms. This is because pairplotting the variables with themselves would be pointless, so the pairplot() method instead makes histograms to show the distributions of those variables' values. This allows us to quickly see the shape of each variable's values.
- the plots for the quality variable form horizontal bands, due to the fact that it's a discrete variable. We were certainly right in not pursuing a regression analysis of this variable.
- Notice that some of the nice plots invite a line of best fit, such as alcohol vs density. Others, such as citric acid vs alcohol, are more inscrutable.So we now have called the .corr() method, and the .pairplot() Seaborn method, on our wine data. Both have flaws. Happily, we can get the best of both worlds with a heatmap. <jupyter_code># Make a heatmap of the data plt.heatmap(df)
sns.heatmap(df.corr())<jupyter_output><empty_output><jupyter_text>Take a moment to think about the following questions:
- How does color relate to extent of correlation?
- How might we use the plot to show us interesting relationships worth investigating?
- More precisely, what does the heatmap show us about the fixed acidity variable's relationship to the density variable?
There is a relatively strong correlation between the density and fixed acidity variables. In the next code block, call the scatterplot() method on our sns object. Make the x-axis parameter 'density', the y-axis parameter 'fixed.acidity', and the third parameter specify our wine dataset. <jupyter_code># Plot density against fixed acidity
sns.scatterplot(x='density',y='fixed.acidity',data = df)<jupyter_output><empty_output><jupyter_text>We can see a positive correlation, and quite a steep one. There are some outliers, but as a whole, there is a steep looking line that looks like it ought to be drawn. <jupyter_code># Call the regplot method on your sns object, with parameters: x = 'density', y = 'fixed.acidity'
sns.regplot(x='density',y='fixed.acidity',data = df)<jupyter_output><empty_output><jupyter_text>The line of best fit matches the overall shape of the data, but it's clear that there are some points that deviate from the line, rather than all clustering close. Let's see if we can predict fixed acidity based on density using linear regression. ### 3. Modeling #### 3a. Train/Test Split
While this dataset is super clean, and hence doesn't require much for analysis, we still need to split our dataset into a test set and a training set.
You'll recall from p.158 of *AoS* that such a split is important good practice when evaluating statistical models. On p.158, Professor Spiegelhalter was evaluating a classification tree, but the same applies when we're doing regression. Normally, we train with 75% of the data and test on the remaining 25%.
To be sure, for our first model, we're only going to focus on two variables: fixed acidity as our dependent variable, and density as our sole independent predictor variable.
We'll be using [sklearn](https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.train_test_split.html) here. Don't worry if not all of the syntax makes sense; just follow the rationale for what we're doing. <jupyter_code># Subsetting our data into our dependent and independent variables.
y = df[['fixed.acidity']]
X = df[['density']]
# Split the data. This line uses the sklearn function train_test_split().
# The test_size parameter means we can train with 75% of the data, and test on 25%.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.25, random_state = 123)
# We now want to check the shape of the X train, y_train, X_test and y_test to make sure the proportions are right.
print(X_train.shape, y_train.shape)
print(X_test.shape, y_test.shape)<jupyter_output>(1199, 1) (1199, 1)
(400, 1) (400, 1)
<jupyter_text>#### 3b. Making a Linear Regression model: our first model
Sklearn has a [LinearRegression()](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LinearRegression.html) function built into the linear_model module. We'll be using that to make our regression model. <jupyter_code># Create the model: make a variable called rModel, and use it linear_model.LinearRegression appropriately
rModel = linear_model.LinearRegression(normalize=True)
# We now want to train the model on our training data.
rModel.fit(X_train, y_train)
# Evaluate the model
print(rModel.score(X_train, y_train))<jupyter_output>0.45487824100681673
<jupyter_text>The above score is called the R-squared coefficient, or the "coefficient of determination". It's basically a measure of how successfully our model predicts the variations in the data away from the mean: 1 would mean a perfect model that explains 100% of the variation. At the moment, our model explains only about 45% of the variation from the mean. There's more work to do!<jupyter_code># Use the model to make predictions about our test data
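# (Added, illustrative) R-squared on the held-out test set, to compare with the
# training score printed above -- expect a broadly similar, usually slightly lower, value.
print(rModel.score(X_test, y_test))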
y_pred = rModel.predict(X_test)
# Let's plot the predictions against the actual result. Use scatter()
plt.scatter(y_test,y_pred)<jupyter_output><empty_output><jupyter_text>The above scatterplot represents how well the predictions match the actual results.
Along the x-axis, we have the actual fixed acidity, and along the y-axis we have the predicted value for the fixed acidity.
There is a visible positive correlation, as the model has not been totally unsuccesful, but it's clear that it is not maximally accurate: wines with an actual fixed acidity of just over 10 have been predicted as having acidity levels from about 6.3 to 13.Let's build a similar model using a different package, to see if we get a better result that way.#### 3c. Making a Linear Regression model: our second model: Ordinary Least Squares (OLS)<jupyter_code># Create the test and train sets. Here, we do things slightly differently.
# We make the explanatory variable X as before.
X = df[['density']]
# But here, reassign X the value of adding a constant to it. This is required for Ordinary Least Squares Regression.
# Further explanation of this can be found here:
# https://www.statsmodels.org/devel/generated/statsmodels.regression.linear_model.OLS.html
X = sm.add_constant(X)
# The rest of the preparation is as before.
y = df[['fixed.acidity']]
# Split the data using train_test_split()
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.25, random_state = 123)
# Create the model
rModel2 = sm.OLS(y_train, X_train)
# Fit the model with fit()
rModel2_results = rModel2.fit()
# Evaluate the model with .summary()
rModel2_results.summary()<jupyter_output><empty_output><jupyter_text>One of the great things about Statsmodels (sm) is that you get so much information from the summary() method.
There are lots of values here, whose meanings you can explore at your leisure, but here's one of the most important: the R-squared score is 0.455, the same as what it was with the previous model. This makes perfect sense, right? It's the same value as the score from sklearn, because they've both used the same algorithm on the same data.
Here's a useful link you can check out if you have the time: https://www.theanalysisfactor.com/assessing-the-fit-of-regression-models/<jupyter_code># Let's use our new model to make predictions of the dependent variable y. Use predict(), and plug in X_test as the parameter
y_pred = rModel2_results.predict(X_test)
# Plot the predictions
# Build a scatterplot
plt.scatter(y_test,y_pred)
# Add a line for perfect correlation. Can you see what this line is doing? Use plot()
plt.plot([x for x in range(9,15)],[x for x in range(9,15)], color='red')
# Label it nicely
plt.title("Model 2 predictions vs. the actual values")
plt.xlabel("Actual Fixed Acidity")
plt.ylabel("Predicted Value")
<jupyter_output><empty_output><jupyter_text>The red line shows a theoretically perfect correlation between our actual and predicted values - the line that would exist if every prediction was completely correct. It's clear that while our points have a generally similar direction, they don't match the red line at all; we still have more work to do.
To get a better predictive model, we should use more than one variable.#### 3d. Making a Linear Regression model: our third model: multiple linear regression
Remember, as Professor Spiegelhalter explains on p.132 of *AoS*, including more than one explanatory variable into a linear regression analysis is known as ***multiple linear regression***. <jupyter_code># Create test and train datasets
# This is again very similar, but now we include more columns in the predictors
# Include all columns from data in the explanatory variables X except fixed.acidity and quality (which was an integer)
X = df.drop(["fixed.acidity", "quality"],axis=1)
# Create constants for X, so the model knows its bounds
X = sm.add_constant(X)
y = df[["fixed.acidity"]]
# Split the data
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.25, random_state = 123)
# We can use almost identical code to create the third model, because it is the same algorithm, just different inputs
# Create the model
rModel3 = sm.OLS(y_train, X_train)
# Fit the model
rModel3_results = rModel3.fit()
# Evaluate the model
rModel3_results.summary()<jupyter_output><empty_output><jupyter_text>The R-Squared score shows a big improvement - our first model predicted only around 45% of the variation, but now we are predicting 87%!<jupyter_code># Use our new model to make predictions
y_pred = rModel3_results.predict(X_test)
# Plot the predictions
# Build a scatterplot
plt.scatter(y_test, y_pred)
# Add a line for perfect correlation
plt.plot([x for x in range(9,15)],[x for x in range(9,15)], color='red')
# Label it nicely
plt.title("Model 3 predictions vs. actual")
plt.xlabel("Actual")
plt.ylabel("Predicted")<jupyter_output><empty_output><jupyter_text>We've now got a much closer match between our data and our predictions, and we can see that the shape of the data points is much more similar to the red line. We can check another metric as well - the RMSE (Root Mean Squared Error). The MSE is defined by Professor Spiegelhalter on p.393 of *AoS*, and the RMSE is just the square root of that value. This is a measure of the accuracy of a regression model. Very simply put, it's formed by finding the average difference between predictions and actual values. Check out p. 163 of *AoS* for a reminder of how this works. <jupyter_code># Define a function to check the RMSE. Remember the def keyword needed to make functions?
def rmse(predictions, targets):
return np.sqrt(((predictions - targets) ** 2).mean())
# Get predictions from rModel3
y_pred = rModel3_results.predict(X_test)
# Put the predictions & actual values into a dataframe
matches = pd.DataFrame(y_test)
matches.rename(columns = {'fixed.acidity':'actual'}, inplace=True)
matches["predicted"] = y_pred
rmse(matches["actual"], matches["predicted"])
<jupyter_output><empty_output><jupyter_text>The RMSE tells us how far, on average, our predictions were mistaken. An RMSE of 0 would mean we were making perfect predictions. 0.6 signifies that we are, on average, about 0.6 of a unit of fixed acidity away from the correct answer. That's not bad at all.#### 3e. Making a Linear Regression model: our fourth model: avoiding redundancy We can also see from our early heat map that volatile.acidity and citric.acid are both correlated with pH. We can make a model that ignores those two variables and just uses pH, in an attempt to remove redundancy from our model.<jupyter_code># Create test and train datasets
# Include the remaining six columns as predictors
X = df[["residual.sugar","chlorides","total.sulfur.dioxide","density","pH","sulphates"]]
# Create constants for X, so the model knows its bounds
X = sm.add_constant(X)
y = df[["fixed.acidity"]]
# Split the data
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.25, random_state = 123)
# Create the fourth model
rModel4 = sm.OLS(y_train, X_train)
# Fit the model
rModel4_results = rModel4.fit()
# Evaluate the model
rModel4_results.summary()<jupyter_output><empty_output>
|
no_license
|
/Regression_Case_Wine.ipynb
|
vtou22/Springboard
| 20 |
<jupyter_start><jupyter_text>## Notebook for Data Science Capstone<jupyter_code>import pandas as pd
import numpy as np
print ('Hello Capstone Project Course!')<jupyter_output>Hello Capstone Project Course!
|
no_license
|
/Hello Capstone.ipynb
|
amanassa/Coursera_Capstone
| 1 |
<jupyter_start><jupyter_text># Programming Fundamentals in Data Analysis
## Tomasz Rodak
2017/2018, summer semester
Lecture VI# Classes## Creating classes
* Classes are created with the reserved word `class`.
* The class body sits in an indented block after the colon, just like the body of a function or a control statement.
* It is recommended to write class names in `TitleCase` style.## Example
Here is the simplest possible class in Python. Its name is `NajprostszaKlasa`. The class does nothing (although it can store data).<jupyter_code>class NajprostszaKlasa:
    pass<jupyter_output><empty_output><jupyter_text>## Operations on classes
The only operations you can perform on classes are:
* creating **instances**;
* referring to class attributes.
## Class instances
* Classes are **callable** objects, i.e. they can be called using the same notation as functions.
* Calling a class returns an instance of that class.
* Each call returns a new instance with a different identity.
You can think of a class as an **instance factory**.`a` is an instance of the class `NajprostszaKlasa`<jupyter_code>a = NajprostszaKlasa()<jupyter_output><empty_output><jupyter_text>`b` is another instance.<jupyter_code>b = NajprostszaKlasa()
a is b<jupyter_output><empty_output><jupyter_text>Instancje przedstawiają się nazwą klasy od której pochodzą i adresem, pod którym żyją. Różne adresy wskazują, że są to różne obiekty.<jupyter_code>a
b<jupyter_output><empty_output><jupyter_text>## Przykład: `Punkt`
* Wyobrażamy sobie, że instancje klasy `Punkt` będą punktami płaszczyzny.
* Klasa `Punkt` również nic nie robi -- jedynie nazwa wskazuje na jej przeznaczenie.<jupyter_code>class Punkt:
pass<jupyter_output><empty_output><jupyter_text>Tworzymy punkty `p` i `q`.<jupyter_code>p = Punkt()
q = Punkt()<jupyter_output><empty_output><jupyter_text>## Atrybuty instancji
* Instrukcja przypisania i **notacja z kropką** pozwalają na stowarzyszanie z instancjami nowych nazw.
* Jest to jeden ze sposobów na tworzenie **atrybutów instancji**.
Współrzędne punktów przechowamy **w instancji** jako nazwy `x`, `y`.<jupyter_code>p.x, p.y = 0, 1
q.x, q.y = 345, -123
p.x, p.y, q.x, q.y<jupyter_output><empty_output><jupyter_text>## Modyfikacja atrybutów instancji
Atrybuty można zmieniać. Względem instancji jest to operacja "w miejscu".<jupyter_code>p.x, p.y = 5, 100
p.x, p.y<jupyter_output><empty_output><jupyter_text>`q.x`, `q.y` nie uległy zmianie.<jupyter_code>q.x, q.y<jupyter_output><empty_output><jupyter_text>## Metody
* Funkcje zdefiniowane wewnątrz klasy nazywamy **metodami**.
* Metody są **atrybutami klasy**.## `Punkt`: dodajemy zachowanie
Dodajemy metodę `centruj()`, która będzie ustawiać punkt w środku układu współrzędnych.<jupyter_code>class Punkt:
def centruj(instancja):
instancja.x = 0
instancja.y = 0<jupyter_output><empty_output><jupyter_text>`centruj` jest atrybutem klasy `Punkt`.<jupyter_code>Punkt.centruj<jupyter_output><empty_output><jupyter_text>Próba wywołania skończy sie niepowodzeniem jeśli nie dostarczymy argumentu.<jupyter_code>Punkt.centruj()<jupyter_output><empty_output><jupyter_text>Tworzymy więc instancję<jupyter_code>p = Punkt()<jupyter_output><empty_output><jupyter_text>i wywołujemy na niej metodę<jupyter_code>Punkt.centruj(p)<jupyter_output><empty_output><jupyter_text>Po tej operacji instancja `p` posiada atrybuty `x`, `y`.<jupyter_code>p.x, p.y<jupyter_output><empty_output><jupyter_text>Przesunięcie do nowego położenia...<jupyter_code>p.x, p.y = 123, 543
p.x, p.y<jupyter_output><empty_output><jupyter_text>... i powrót do początku.<jupyter_code>Punkt.centruj(p)
p.x, p.y<jupyter_output><empty_output><jupyter_text>## Wiązanie metody z instancją
* Metoda klasy **wiąże** się z instancją zwracaną przez klasę.
* Metoda związana z instancją jest atrybutem tej instancji.
* Metoda związana z instancją po wywołaniu odwołuje się do metody z klasy.
* Domyślnym pierwszym argumentem metody związanej z daną instancją jest ta właśnie instancja.## `Punkt`: testujemy wiązanie metody `centruj()`
W samej klasie nic nie zmieniamy.<jupyter_code>class Punkt:
def centruj(instancja):
instancja.x = 0
instancja.y = 0<jupyter_output><empty_output><jupyter_text>Tworzymy instancję.<jupyter_code>p = Punkt()<jupyter_output><empty_output><jupyter_text>`p` ma atrybut `centruj` przedstawiający się jako **bound method** instancji.<jupyter_code>p.centruj<jupyter_output><empty_output><jupyter_text>Próba wywołania metody `centruj` związanej z instancją `p` na instancji `p` kończy się błędem.<jupyter_code>p.centruj(p)
p.x, p.y<jupyter_output><empty_output><jupyter_text>Zwróć uwagę na komunikat -- Python zdaje się twierdzić, że podaliśmy dwa argumenty.Oto wyjaśnienie:
* metoda związana z instancją automatycznie wstawia tę właśnie instancję jako pierwszy argument.
<jupyter_code>p.centruj()
p.x, p.y<jupyter_output><empty_output><jupyter_text>Przesunięcie do innego położenia<jupyter_code>p.x, p.y = 123, 456
p.x, p.y<jupyter_output><empty_output><jupyter_text>i powrót do początku<jupyter_code>p.centruj()
p.x, p.y<jupyter_output><empty_output><jupyter_text>## The `self` parameter
* To repeat -- the first argument of a method bound to an instance is that very instance.
* For this reason class methods, with a few exceptions, always have at least one parameter, and the argument accepted by the method's first parameter will be the instance.
* The name of this first parameter is arbitrary, but there is a very strong convention to call it **`self`**.Let's rewrite the `Punkt` class using the orthodox naming.<jupyter_code>class Punkt:
def centruj(self):
self.x = 0
self.y = 0<jupyter_output><empty_output><jupyter_text>## Przykład
Co się stanie, gdy zapomnimy o szczególnej roli pierwszego parametru metody?
Łatwo to sprawdzić.<jupyter_code>class Klasa:
def komunikat():
print('Monty Python')
def sześcian(x):
return x ** 3<jupyter_output><empty_output><jupyter_text>Klasa zawiera dwie metody:
* `komunikat()` -- nic nie zwraca, nie ma parametrów, wyświetla komunikat;
* `sześcian(x)` -- jeden parametr `x`, zwraca jego trzecią potęgę.Jako atrybuty klasy `Klasa` funkcje działają w zwykły sposób.<jupyter_code>Klasa.komunikat()
Klasa.sześcian(5)<jupyter_output><empty_output><jupyter_text>Kłopoty zaczynają się wtedy, gdy próbujemy wywołać powyższe metody po związaniu z instancją -- liczba argumentów przestaje się zgadzać, gdyż interpreter jako pierwszy próbuje podać instancję.<jupyter_code>a = Klasa()
a.komunikat()
a.sześcian(5)<jupyter_output><empty_output><jupyter_text>Jeszcze jedno wywołanie. Tym razem liczba zmiennych się zgadza, ale operacja nie jest możliwa do przeprowadzenia. Nie podaliśmy argumentu `x`, więc interpreter mógł wstawić za niego instancję. Nie powiodła się próba podniesienia instancji do sześcianu -- ta operacja, przynajmniej w tym przypadku, nie ma sensu.<jupyter_code>a.sześcian()<jupyter_output><empty_output><jupyter_text>## Atrybuty klasy vs atrybuty instancji
Gdy interpreter widzi odwołanie do atrybutu instancji, poszukuje tego atrybutu najpierw w instancji, a następnie w klasie.<jupyter_code>class Klasa:
napis = 'ala ma kota'
i1 = Klasa()
i2 = Klasa()<jupyter_output><empty_output><jupyter_text>Przypisanie do atrybutu w instancji tworzy referencję do nowego obiektu i nadpisuje ten atrybut. Nie ma to wpływu na pozostałe instancje ani na klasę.<jupyter_code>i1.napis = 'Monty Python'
Klasa.napis, i1.napis, i2.napis<jupyter_output><empty_output><jupyter_text>Atrybut `i2.napis` nie został w instancji przypisany, więc będzie poszukiwany w klasie. Efekt jest dosyć niespodziewany.<jupyter_code>Klasa.napis = 'Żywot Briana'
Klasa.napis, i1.napis, i2.napis<jupyter_output><empty_output><jupyter_text>## Średnia krocząca
Przypomnijmy implementację średniej kroczącej z poprzednich wykładów. Działa dzięki domknięciom.<jupyter_code>def średnia_krocząca():
suma, liczba = 0, 0
def średnia(wartość):
nonlocal suma, liczba
suma += wartość
liczba += 1
return suma / liczba
return średnia<jupyter_output><empty_output><jupyter_text>Przykładowe wywołanie.<jupyter_code>avg = średnia_krocząca()
avg(5)
avg(10)
avg(15)<jupyter_output><empty_output><jupyter_text>## Średnia krocząca za pomocą klas -- wersja 1.0<jupyter_code>class ŚredniaKrocząca:
liczba = 0
suma = 0
def dodaj_wartość(self, a):
self.liczba += 1
self.suma += a
return self.suma / self.liczba<jupyter_output><empty_output><jupyter_text>Działa podobnie. Różnica jest taka, że teraz odwołujemy się do metody `dodaj_wartość()`.<jupyter_code>avg = ŚredniaKrocząca()
avg.dodaj_wartość(5)
avg.dodaj_wartość(10)
avg.dodaj_wartość(15)<jupyter_output><empty_output><jupyter_text>Wadą tej implementacji jest to, że atrybuty `liczba` i `suma` w instancjach odwołują się do tych samych atrybutów w klasie.<jupyter_code>ŚredniaKrocząca.suma = 10 ** 4
avg = ŚredniaKrocząca()
avg.dodaj_wartość(5)
avg.suma, avg.liczba<jupyter_output><empty_output><jupyter_text>## Średnia krocząca za pomocą klas -- wersja 1.1
* Dodajemy metodę `inicjalizuj()` ustawiającą atrybuty instancji `liczba` i `suma`.
* `inicjalizuj()` należy wywołać przed pierwszym `dodaj_wartość()`. <jupyter_code>class ŚredniaKrocząca:
def inicjalizuj(self):
self.liczba = 0
self.suma = 0
def dodaj_wartość(self, a):
self.liczba += 1
self.suma += a
return self.suma / self.liczba
avg = ŚredniaKrocząca()
avg.inicjalizuj()
avg.liczba, avg.suma
avg.dodaj_wartość(5)
avg.dodaj_wartość(4)
avg.dodaj_wartość(5)<jupyter_output><empty_output><jupyter_text>## The `__init__()` method
* Properties of the `inicjalizuj()` method:
    * it adds the `suma` and `liczba` attributes to the instance;
    * it has to be called before anything else.
Very often attributes need to be added to an instance right after it is created. For this reason Python provides the magic (special) method `__init__()`, which the interpreter runs automatically as soon as an instance has been created.## Moving average with classes -- version 1.2
Zastępujemy `inicjalizuj()` przez `__init__()`.<jupyter_code>class ŚredniaKrocząca:
def __init__(self):
self.liczba = 0
self.suma = 0
def dodaj_wartość(self, a):
self.liczba += 1
self.suma += a
return self.suma / self.liczba<jupyter_output><empty_output><jupyter_text>Atrybuty instancji `liczba` i `suma` są od razu gotowe do użycia.<jupyter_code>avg = ŚredniaKrocząca()
avg.liczba, avg.suma
avg.dodaj_wartość(5)
avg.dodaj_wartość(4)
avg.dodaj_wartość(5)<jupyter_output><empty_output><jupyter_text>## Naming conventions in classes
- An attribute name that starts with a single underscore denotes an internal (private) member.
- Python does not block access to internal members.
- Using internal members, and especially modifying them, is considered bad practice.
- Introducing internal members allows you to encapsulate data and build a clean interface (a short sketch follows).
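A minimal added sketch of the single-underscore convention (illustrative names only):<jupyter_code>class Counter:
    def __init__(self):
        self._state = 0         # internal attribute -- not part of the public interface

    def increment(self):        # public interface
        self._state += 1
        return self._state

c = Counter()
c.increment()   # intended usage
c._state        # works, but is considered bad practice<jupyter_output><empty_output><jupyter_text>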
### Double underscore
Attribute names that begin with a double underscore and end with at most one underscore are decorated (name-mangled) with the class name.<jupyter_code>class A:
__atr = 'Bardzo prywatny atrybut.'
inst = A()
inst.__atr # This attribute does not exist!
inst._A__atr # __atr defined in class A.<jupyter_output><empty_output><jupyter_text>* Attributes that both start and end with a double underscore have a special meaning in Python.
* One of them -- `__init__()` -- we have already met.
* Here are a few others (a short sketch follows the list):
* `__str__()`
* `__add__()`
* `__eq__()`
    * `__call__()`
    * `...`
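A small added sketch of two of them, `__str__()` and `__eq__()`:<jupyter_code>class Vector:
    def __init__(self, x, y):
        self.x, self.y = x, y

    def __str__(self):                       # used by str() and print()
        return 'Vector({}, {})'.format(self.x, self.y)

    def __eq__(self, other):                 # used by the == operator
        return self.x == other.x and self.y == other.y

print(Vector(1, 2))
Vector(1, 2) == Vector(1, 2)<jupyter_output><empty_output><jupyter_text>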
## Moving average with classes -- version 1.3
We encapsulate (hide) the `liczba` and `suma` attributes.<jupyter_code>class ŚredniaKrocząca:
def __init__(self):
self._liczba = 0
self._suma = 0
def dodaj_wartość(self, a):
self._liczba += 1
self._suma += a
return self._suma / self._liczba<jupyter_output><empty_output><jupyter_text>Teraz po wywołaniu<jupyter_code>avg = ŚredniaKrocząca()<jupyter_output><empty_output><jupyter_text>jedynym bezpośrednio dostępnym atrybutem `avg` jest `dodaj_wartość()`. Atrybut ten składa się na cały interfejs naszej aplikacji.## Średnia krocząca za pomocą klas -- wersja 2.0
Modyfikacja interfejsu -- oddzielamy dodawanie wartości od liczenia średniej.<jupyter_code>class ŚredniaKrocząca:
def __init__(self):
self._liczba = 0
self._suma = 0
def dodaj_wartość(self, a):
self._liczba += 1
self._suma += a
def średnia(self):
return self._suma / self._liczba
avg = ŚredniaKrocząca()
avg.dodaj_wartość(4)
avg.dodaj_wartość(6)
avg.dodaj_wartość(11)
avg.średnia()<jupyter_output><empty_output><jupyter_text>## Średnia krocząca za pomocą klas -- wersja 2.1
Usuwamy błąd z wersji 2.0 -- nie można zwrócić średniej z pustego zbioru wartości.<jupyter_code>class ŚredniaKrocząca:
def __init__(self):
self._liczba = 0
self._suma = 0
def dodaj_wartość(self, a):
self._liczba += 1
self._suma += a
def średnia(self):
if self._liczba == 0:
raise ArithmeticError('Nie podałeś żadnych wartości.')
return self._suma / self._liczba
avg = ŚredniaKrocząca()
avg.średnia()
avg.dodaj_wartość(4)
avg.dodaj_wartość(6)
avg.dodaj_wartość(11)
avg.średnia()<jupyter_output><empty_output><jupyter_text>## Średnia krocząca za pomocą klas -- wersja 2.2
Dopuszczamy dodawanie dowolnie wielu wartości na raz.<jupyter_code>class ŚredniaKrocząca:
def __init__(self):
self._liczba = 0
self._suma = 0
def dodaj_wartość(self, *wartości):
self._liczba += len(wartości)
self._suma += sum(wartości)
def średnia(self):
if self._liczba == 0:
raise ArithmeticError('Nie podałeś żadnych wartości.')
return self._suma / self._liczba
avg = ŚredniaKrocząca()
avg.dodaj_wartość(4, -4, 10, 10)
avg.średnia()<jupyter_output><empty_output><jupyter_text>## Średnia krocząca za pomocą klas -- wersja 2.3
Dopuszczamy dodawanie wartości już podczas tworzenia instancji.<jupyter_code>class ŚredniaKrocząca:
def __init__(self, *wartości):
self._liczba = len(wartości)
self._suma = sum(wartości)
def dodaj_wartość(self, *wartości):
self._liczba += len(wartości)
self._suma += sum(wartości)
def średnia(self):
if self._liczba == 0:
raise ArithmeticError('Nie podałeś żadnych wartości.')
return self._suma / self._liczba
avg = ŚredniaKrocząca(4, -4, 10, 10)
avg.średnia()
avg.dodaj_wartość(10, 24)
avg.średnia()<jupyter_output><empty_output>
|
no_license
|
/Docker/jupyter/notebooks/wyklad_VI.ipynb
|
djkormo/ContainersSamples
| 47 |
<jupyter_start><jupyter_text># Covid-19 Exploratory Data Analysis
Preprocessed Dataset Link: https://github.com/jimcrews/Covid-19-Preprocessed-Dataset
<jupyter_code># plotly imports
import plotly as py
import plotly.express as px
import plotly.graph_objects as go
import plotly.figure_factory as ff
from plotly.subplots import make_subplots
import plotly.io as pio
#py.offline.init_notebook_mode()
# folium imports
import folium
# standard pandas imports
import pandas as pd
import numpy as np
import matplotlib as mpl
import matplotlib.pyplot as plt
%matplotlib inline
import math
import random
from datetime import timedelta
import warnings
warnings.filterwarnings('ignore')
# color pallette we will be using
cnf = '#393e46'
dth = '#ff2e63'
rec = '#21bf73'
act = '#fe9801'<jupyter_output><empty_output><jupyter_text>## Data Preparation<jupyter_code>df = pd.read_csv('Raw_Data/covid_19_data_cleaned.csv', parse_dates=['Date'])
country_daywise = pd.read_csv('Raw_Data/covid_19_country_daywise.csv', parse_dates=['Date'])
countrywise = pd.read_csv('Raw_Data/covid_19_countrywise.csv')
daywise = pd.read_csv('Raw_Data/covid_19_daywise.csv', parse_dates=['Date'])
country_daywise.head()
df['Province/State'] = df['Province/State'].fillna("")
df
# group by day, across all countries, get the totals
confirmed = df.groupby('Date').sum()['Confirmed'].reset_index()
recovered = df.groupby('Date').sum()['Recovered'].reset_index()
deaths = df.groupby('Date').sum()['Deaths'].reset_index()
deaths
# are there any nulls ?
df.isnull().sum()
# column info
df.info()
# query by country
df.query('Country == "Australia"')<jupyter_output><empty_output><jupyter_text>## Worldwide Total Confirmed, Recovered, and Deaths<jupyter_code>confirmed.tail()
recovered.tail()
deaths.tail()
# matplotlib
import matplotlib.ticker as tkr
fig, ax = plt.subplots()
def func(x, pos): # formatter function takes tick label and tick position
s = '{:0,d}'.format(int(x))
return s
y_format = tkr.FuncFormatter(func) # make formatter
ax.yaxis.set_major_formatter(y_format) # set formatter to needed axis
ax.xaxis_date() # interpret the x-axis values as dates
fig.autofmt_xdate() # make space for and rotate the x-axis tick labels
plt.plot(confirmed['Date'], confirmed['Confirmed'])
plt.plot(recovered['Date'], recovered['Recovered'])
plt.plot(deaths['Date'], deaths['Deaths'])
plt.grid()
plt.xlabel('Date')
plt.ylabel('Number of Cases')
plt.legend(['Confirmed', 'Recovered', 'Deaths'], loc='upper left')
plt.show()
confirmed['Confirmed']
# plotly
fig = go.Figure()
fig.add_trace(go.Scatter(x = confirmed['Date'], y= confirmed['Confirmed'], mode='lines+markers', name='Confirmed', line= dict(color="Orange")))
fig.add_trace(go.Scatter(x = recovered['Date'], y= recovered['Recovered'], mode='lines+markers', name='Recovered', line= dict(color="Green")))
fig.add_trace(go.Scatter(x = deaths['Date'], y= deaths['Deaths'], mode='lines+markers', name='Deaths', line= dict(color="Red")))
fig.update_layout(title="Worldwide Covid-19 Cases", yaxis = dict(title = "Number of Cases"))
fig.show()<jupyter_output><empty_output>
|
no_license
|
/Covid-19_EDA.ipynb
|
jimcrews/my-py-notebooks
| 3 |
<jupyter_start><jupyter_text>## Model Training<jupyter_code>knn = KNeighborsClassifier(n_neighbors=2)
knn.fit(X_train, Y_train)
train_score = knn.score(X_train, Y_train)
test_score = knn.score(X_test, Y_test)
print("train score: {}; test score: {}".format(train_score, test_score))
from sklearn.model_selection import ShuffleSplit
from common.utils import plot_learning_curve
knn = KNeighborsClassifier(n_neighbors=2)
cv = ShuffleSplit(n_splits=10, test_size=0.2, random_state=0)
plt.figure(figsize=(10, 6), dpi=200)
plot_learning_curve(plt, knn, "Learn Curve for KNN Diabetes",
                        X, Y, ylim=(0.0, 1.01), cv=cv);<jupyter_output><empty_output><jupyter_text>## Data Visualization<jupyter_code>from sklearn.feature_selection import SelectKBest
selector = SelectKBest(k=2)
X_new = selector.fit_transform(X, Y)
X_new[0:5]
results = []
for name, model in models:
kfold = KFold(n_splits=10)
cv_result = cross_val_score(model, X_new, Y, cv=kfold)
results.append((name, cv_result))
for i in range(len(results)):
print("name: {}; cross val score: {}".format(
results[i][0],results[i][1].mean()))
# plot the data
plt.figure(figsize=(10, 6), dpi=200)
plt.ylabel("BMI")
plt.xlabel("Glucose")
plt.scatter(X_new[Y==0][:, 0], X_new[Y==0][:, 1], c='r', s=20, marker='o'); # plot the class-0 samples
plt.scatter(X_new[Y==1][:, 0], X_new[Y==1][:, 1], c='g', s=20, marker='^'); # plot the class-1 samples<jupyter_output><empty_output>
|
no_license
|
/SCIKIT-LEARN/ch04.03.ipynb
|
RufusTang/Study
| 2 |
<jupyter_start><jupyter_text>## [Example] Deriving political party information from US presidential election data and visualizing it
### * Checking information about the loaded dataset
- type(df) : data type
- df.shape : number of rows and columns
- df.columns : check the column names
- df.dtypes : data type of each column
- df.info() : similar to df.dtypes
- df.info
- df.head()
- df.tail()
### Comparing Python and pandas data types
    int       int64
    float     float64
    string    object(****)
### We first need to count how many winners each political party (Political Party) has
- use value_counts()
- draw a pie chart (a minimal sketch follows the next code cell)
### [Exercise] Derive another insight from the dataset and visualize it
<jupyter_code>import pandas as pd
df = pd.read_excel('http://qrc.depaul.edu/excel_files/presidents.xlsx')
### [Note] Downloading the data provided on the web
# import urllib.request as req
# local = './data/president/presidents.xlsx' # the 'president' folder must exist under the 'data' folder
# url = 'http://qrc.depaul.edu/excel_files/presidents.xlsx'
# req.urlretrieve(url,local)
# print('download complete')
<jupyter_output><empty_output>
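<jupyter_text>A rough sketch of the party count and pie chart described above. The 'Political Party' column name is an assumption about the spreadsheet header and may need adjusting.<jupyter_code># Count how many elected presidents each party has (column name assumed)
party_counts = df['Political Party'].value_counts()
print(party_counts)
# Visualize the counts as a pie chart
party_counts.plot.pie(autopct='%1.1f%%', figsize=(6, 6))<jupyter_output><empty_output>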
|
no_license
|
/10.python/dAnalysis/02_pandas_class/Ex01_xample2_과제.ipynb
|
kklck/00.siat
| 1 |
<jupyter_start><jupyter_text>## Get recipe names into a vector<jupyter_code>stopwords = nltk.corpus.stopwords.words('english')
stemmer = SnowballStemmer("english")
def tokenize_and_stem(title, is_ingredient = False):
stemmer = SnowballStemmer("english")
stemmed_titles = []
new_title=[]
for word in nltk.word_tokenize(title):
new_title.append(stemmer.stem(word))
stemmed_titles.extend(new_title)
return " ".join([i for i in stemmed_titles])
def clean_up_ingredient(ingredient_line):
ingredient_line = re.sub("\[u\'", "", ingredient_line)
ingredient_line = re.sub("\']", "", ingredient_line)
return find_measurement_words(ingredient_line)
def get_ingredientslist_for_recipeid(i):
return [clean_up_ingredient(item.ingredient)\
for item in names.filter(Recipe.id == i).all()[0].ingredients]
def make_ingredient_list_string(ingredientdictlist):
return " ".join((str(j["ingredient"] + "INGREDIENT") for j in ingredientdictlist))
tokenized_ingredients = [tokenize_and_stem(make_ingredient_list_string(get_ingredientslist_for_recipeid(i))) for i in (recipes["id"])]
tokenized_names = [tokenize_and_stem(i) for i in (recipes["name"])]
tokenized_name_ing = zip(tokenized_names, tokenized_ingredients)
tokenized_name_ing = map(lambda a: " ".join(a), tokenized_name_ing)
from sklearn.feature_extraction.text import CountVectorizer
tokenized_names = [tokenize_and_stem(i) for i in (recipes["name"])]
vectorizer = CountVectorizer(ngram_range=(1,3), min_df=0.0003)
a = vectorizer.fit_transform(tokenized_name_ing)
from sklearn.feature_selection import f_regression, mutual_info_regression
def chol_to_percentile(vector, operate_on):
v = [i for i in vector if i != 0]
pctvector = []
pct75 = np.percentile(v, 75)
pct50 = np.percentile(v, 50)
pct20 = np.percentile(v, 20)
print(pct75)
print(pct50)
print(pct20)
for index, i in enumerate(operate_on):
if i > pct75:
pctvector.append("vhigh")
elif i > pct50:
pctvector.append("high")
elif i > pct20:
pctvector.append("med")
else:
pctvector.append("low")
return pctvector
cholcat = chol_to_percentile(recipes["cholesterol"], recipes["cholesterol"])
# produces an array with mutual information (weights?) between individual words/n-grams in recipe names and cholesterol information
mi = mutual_info_regression(a, recipes["cholesterol"])
mi /= np.max(mi)
# Returns indices of columns with MI value greater than a threshold (0.02 here). Can be used to drop uninformative words
informative_words = np.array([(index) for index, i in enumerate(mi) if i>0.02])
# return word vector array with uninformative words removed
culled_array = a.toarray()[:,informative_words]
import cPickle as pkl
ground_truth_x =[]
ground_truth_y =[]
for item in pkl.load(open("groundtruth.pkl", "rb")):
ground_truth_x.append(item["item_name"])
ground_truth_y.append(item["nf_cholesterol"])
for item in pkl.load(open("groundtruth2.pkl", "rb")):
ground_truth_x.append(item["item_name"])
ground_truth_y.append(item["nf_cholesterol"])
for item in pkl.load(open("groundtruth3.pkl", "rb")):
ground_truth_x.append(item["item_name"])
ground_truth_y.append(item["nf_cholesterol"])
tokenized_names_ground_truth = [tokenize_and_stem(i) for i in (ground_truth_x)]
# Puts words from new ground truth set into matrix from training set
ground_truth_vectorized = vectorizer.transform(tokenized_names_ground_truth)
ground_truth_vectorized_culled_array = ground_truth_vectorized.toarray()[:,informative_words]
for index, i in enumerate(ground_truth_y):
if i == None:
ground_truth_y[index] = 0
mask = [index for (index, item) in enumerate(ground_truth_y) if item !=0]
masked_ground_truth = []
for i,j in enumerate(ground_truth_y):
if i in mask:
masked_ground_truth.append(j)
masked_ground_truth_vectnames = []
for i,j in enumerate(ground_truth_vectorized_culled_array):
if i in mask:
masked_ground_truth_vectnames.append(j)
<jupyter_output><empty_output><jupyter_text>### Linear SVM<jupyter_code>import matplotlib.pyplot as plt
import numpy as np
from sklearn import svm, datasets
X_train = culled_array[200:]
#y_train = recipes["cholesterol"]
y_train = cholcat[200:]
X=X_train
y=y_train
X_test = culled_array[:200]
y_test = cholcat[:200]
h = .02 # step size in the mesh
# we create an instance of SVM and fit out data. We do not scale our
# data since we want to plot the support vectors
C = 1 # SVM regularization parameter
lin_svc = svm.LinearSVC(C=C).fit(X_train, y_train)
%matplotlib inline
from sklearn.metrics import confusion_matrix
import seaborn as sns
cm = confusion_matrix(chol_to_percentile(recipes["cholesterol"], masked_ground_truth), lin_svc.predict(masked_ground_truth_vectnames)
)
plt.figure(figsize = (10,7))
sns.heatmap(cm, annot=True)
lin_svc.score(masked_ground_truth_vectnames, chol_to_percentile(recipes["cholesterol"], masked_ground_truth))
%matplotlib inline
from sklearn.metrics import confusion_matrix
import seaborn as sns
cm = confusion_matrix(y_test, lin_svc.predict(X_test)
)
plt.figure(figsize = (10,7))
sns.heatmap(cm, annot=True)<jupyter_output>/home/andylane/anaconda2/envs/insight/lib/python2.7/site-packages/seaborn/matrix.py:143: DeprecationWarning: elementwise == comparison failed; this will raise an error in the future.
if xticklabels == []:
/home/andylane/anaconda2/envs/insight/lib/python2.7/site-packages/seaborn/matrix.py:151: DeprecationWarning: elementwise == comparison failed; this will raise an error in the future.
if yticklabels == []:
<jupyter_text>### Polynomial SVM<jupyter_code>import matplotlib.pyplot as plt
import numpy as np
from sklearn import svm, datasets
X_train = culled_array[200:]
#y_train = recipes["cholesterol"]
y_train = cholcat[200:]
X=X_train
y=y_train
X_test = culled_array[:200]
y_test = cholcat[:200]
h = .02 # step size in the mesh
# we create an instance of SVM and fit out data. We do not scale our
# data since we want to plot the support vectors
C = 1 # SVM regularization parameter
poly_svc = svm.SVC(kernel='poly', degree=2, C=C).fit(X, y)
#lin_svc = svm.LinearSVC(C=C).fit(X_train, y_train)
%matplotlib inline
from sklearn.metrics import confusion_matrix
import seaborn as sns
cm = confusion_matrix(y_test, poly_svc.predict(X_test))
plt.figure(figsize = (10,7))
sns.heatmap(cm, annot=True)
poly_svc.score(masked_ground_truth_vectnames, chol_to_percentile(recipes["cholesterol"], masked_ground_truth))<jupyter_output>124.0
81.0
47.0
<jupyter_text>#### Well, that looks even worse. Try an RBF kernel.<jupyter_code>import matplotlib.pyplot as plt
import numpy as np
from sklearn import svm, datasets
X_train = culled_array[200:]
#y_train = recipes["cholesterol"]
y_train = cholcat[200:]
X=X_train
y=y_train
X_test = culled_array[:200]
y_test = cholcat[:200]
h = .02 # step size in the mesh
# we create an instance of SVM and fit out data. We do not scale our
# data since we want to plot the support vectors
C = 1 # SVM regularization parameter
rbf_svc = svm.SVC(kernel='rbf', gamma=1, C=C).fit(X, y)
%matplotlib inline
from sklearn.metrics import confusion_matrix
import seaborn as sns
cm = confusion_matrix(y_test, rbf_svc.predict(X_test))
plt.figure(figsize = (10,7))
sns.heatmap(cm, annot=True)
%matplotlib inline
from sklearn.metrics import confusion_matrix
import seaborn as sns
cm = confusion_matrix(chol_to_percentile(recipes["cholesterol"], masked_ground_truth), rbf_svc.predict(masked_ground_truth_vectnames))
plt.figure(figsize = (10,7))
sns.heatmap(cm, annot=True)
poly_svc.score(masked_ground_truth_vectnames, chol_to_percentile(recipes["cholesterol"], masked_ground_truth))<jupyter_output>124.0
81.0
47.0
|
no_license
|
/sklearn_intermediate_models/naive_bayes_ingredients.ipynb
|
andrewblane/menusights
| 4 |
<jupyter_start><jupyter_text># Box Office Revenue Prediction## Imports<jupyter_code>import pandas as pd
import numpy as np
import re # regex
import ast
from datetime import datetime
from datetime import date
def warn(*args, **kwargs):
pass
import warnings
warnings.warn = warn
from sklearn import metrics
from sklearn import preprocessing
from sklearn.pipeline import Pipeline
from sklearn.metrics import f1_score
from sklearn.preprocessing import StandardScaler
from sklearn.impute import SimpleImputer
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import MultinomialNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import learning_curve, RandomizedSearchCV, train_test_split<jupyter_output><empty_output><jupyter_text>## Data Sources
1. IMDB:
* No. of ratings (star)
* Avg. star rating (out of 10)
* No. of user ratings (text reviews)
* No. of critic ratings (meta critic)
* Countries (truncate to just first country) - √
* Language (truncate to just first language) - √
* No. of languages (to be created from previous column) - √
* Production house (truncate to just first production house) - √
* Duration (convert to minutes/numeric from string) - √
* Genre (need to explode to one-hot columns for 23 genres) - √
* MPAA rating
* TV- ratings - reclassify - √
* Not Rated --> PG-13/Unrated --> PG - many from non-English fall under these categories - √
* **Gross:** 'Y' variable (need to remove movies with alphabet-only gross, convert INR (other currencies?) to USD and 3 INR 3-digit movies):
* Adjust for inflation, based on year of release - √
* Release date:
* convert first to datetime and use as a feature (holiday - US - release/not) - √
2. Popularity Scores:
* Average popularity score per movie
3. Sentiment Scores:
* AFINN score - based on IMDB user reviews pre-release
* AFINN score - based on IMDB user reviews post-release
4. YouTube:
* View count
* Like count
* Dislike count
* Comment count## Reading Excel File(s)
#### !!!!!! IMPORTANT- manually removed the unnamed serial number column from the popularity score excel sheet I'm reading below!!!!!
#### Check that it is removed in your excel sheet also<jupyter_code># read popularity scores xlsx with IMDB_ID as the indexing column (2nd column from the left, or 1st index)
df_pop = pd.read_excel("Final_data_sheets_updated_popularity_scores.xlsx", index_col = 0)
df = pd.read_excel("Final_data_sheets_with_Features.xlsx", index_col = 0)
df_pop = df_pop['Average_popularity_score_per_movie']
df_pop.head(2)
# inspect
df.head(3)
list(df)
df = df.drop(['main_cast_list', 'main_cast_links','dir_list','creator_list', 'meta_critic_score','story_line', 'others', 'release_date'], axis = 1)
df.head(3)
# join df and df_pop
df = df.join(df_pop)
list(df)
df.shape<jupyter_output><empty_output><jupyter_text>## Data Transformations### 1. Check for NAs/NANs<jupyter_code>df.describe()
df.shape<jupyter_output><empty_output><jupyter_text>#### Columns with NAs
MPAA rating, num_user_ratings, num_critic_ratings, language, production_house<jupyter_code>df.isnull().any()<jupyter_output><empty_output><jupyter_text>#### How many NAs total?
Could be multiple NAs for a given row.<jupyter_code>df.isnull().sum().sum()<jupyter_output><empty_output><jupyter_text>#### How many NAs per column?
Can it be manually fixed by finding the true value? Say, for duration of a couple of movies.<jupyter_code>df.isnull().sum(axis = 0)
df[df['duration'].isnull()]<jupyter_output><empty_output><jupyter_text>This anyway has other columns as NaNs (info isn't available on IMDB anymore?), so might as well drop the row.#### If dropping all NAs?<jupyter_code># Lose 442 (45 rows still have NA in num_user_ratings, num_critic_ratings)
df = df.dropna(subset=['Language', 'Production_House','motion_picture_rating'])
df.shape<jupyter_output><empty_output><jupyter_text>#### num_user_ratings and num_critic_ratings still have NAs - we will impute these later (pipelining) using median/mean<jupyter_code>df.isnull().any()
# number of IMDB user reviews written
df['num_user_ratings'].isnull().sum()
# metacritic rating
df['num_critic_ratings'].isnull().sum()<jupyter_output><empty_output><jupyter_text>### 2. Add number of languages<jupyter_code># inspect unique values
df.Language.unique()
languages_split = df.Language.str.split(pat="|")
df['num_languages'] = languages_split.str.len()
# inspect
df.head(3)
# check NaN
df['num_languages'].isnull().values.any()<jupyter_output><empty_output><jupyter_text>### 3. Truncate 'Languages' and 'Country'<jupyter_code># inspect unique values
df.Country.unique()
languages_split = df.Language.str.split(pat="|").apply(lambda x: x[0])
df['Language'] = languages_split
countries_split = df.Country.str.split(pat="|").apply(lambda x: x[0])
df['Country'] = countries_split
df.head(3)
# check both columns for NaN
print(df['Language'].isnull().values.any())
print(df['Country'].isnull().values.any())
# check how many unique values of each
print(len(df.Country.unique()))
print(len(df.Language.unique()))<jupyter_output><empty_output><jupyter_text>### 4. Truncate 'Production_House'<jupyter_code># inspect unique values - 3697 of them, can't see all
df.Production_House.unique()
production_house_split = df.Production_House.str.split(pat=", ").apply(lambda x: x[0])
df['Production_House'] = production_house_split
# still could too unqiue of a column - 2277 unique values!
len(df.Production_House.unique())<jupyter_output><empty_output><jupyter_text>### 5. Check MPAA column and regroup
https://simple.m.wikipedia.org/wiki/Motion_Picture_Association_of_America_film_rating_system
#### **Reclassification:**
* TV-Y, TV-7, TV-G --> G
* TV-PG --> PG
* TV-14 --> PG-13
* TV-MA --> R
* Not Rated (923!) --> PG-13
* Unrated (143) --> PG (mostly documentaries)<jupyter_code># check NaN
df['motion_picture_rating'].isnull().values.any()
# what are the unique ratings, and how many in each category?
df.groupby('motion_picture_rating').size()
df.loc[df['motion_picture_rating'].isin(["TV-G", "TV-Y7", "TV-Y"]), 'motion_picture_rating'] = "G"
df.loc[df['motion_picture_rating'].isin(["TV-PG"]), 'motion_picture_rating'] = "PG"
df.loc[df['motion_picture_rating'].isin(["TV-14", "M"]), 'motion_picture_rating'] = "PG-13"
df.loc[df['motion_picture_rating'].isin(["TV-MA"]), 'motion_picture_rating'] = "R"
df.loc[df['motion_picture_rating'].isin(["Unrated"]), 'motion_picture_rating'] = "PG" # mostly documentaries
df.loc[df['motion_picture_rating'].isin(["Not Rated"]), 'motion_picture_rating'] = "PG-13"
df.groupby('motion_picture_rating').size()
# check NaN after transforming
df['motion_picture_rating'].isnull().values.any()<jupyter_output><empty_output><jupyter_text>### 6. Convert 'duration' column to time in minutes (integer)<jupyter_code>def check_time(time):
if len(time) == 1:
if "h" in time[0]:
new_time = 60*int(re.sub("\D", "", time[0]))
else:
new_time = int(re.sub("\D", "", time[0]))
else:
new_time = 60*int(re.sub("\D", "", time[0])) + int(re.sub("\D", "", time[1]))
return new_time
test1 = df.duration.str.split(" ")
test2 = test1.apply(lambda x: check_time(x))
df['duration'] = test2
# check unique values
print(df.duration.unique())
## check for NaNs after transforming
df['duration'].isnull().values.any()
list(df)<jupyter_output><empty_output><jupyter_text>### 7. Expanding 'genre' to one-hot columns<jupyter_code>type(df['genre'].iloc[0]) # Need to convert string representation of list to an actual Python list to accumulate as et later
# check unique genre lists
unique_genre_lists = df['genre'].unique()
print(unique_genre_lists)
def convert_to_list(x):
if "[" in x:
x = re.sub("[\[\]]", "", x)
x = x.split(", ")
else:
x = x.split(" ") # split by non-existent delimiter
return x
# get all unique genres available
genre_lists = df.genre.apply(lambda x: convert_to_list(x))
df.genre = genre_lists
# temp = genre_lists.tolist()
# flattened = [y for x in temp for y in x]
# print(set(flattened))
type(df['genre'].iloc[0])
# add 23 new one-hot columns
from sklearn.preprocessing import MultiLabelBinarizer
mlb = MultiLabelBinarizer()
df = df.join(pd.DataFrame(mlb.fit_transform(df.pop('genre')),
columns=mlb.classes_,
index=df.index))
df.head(3)
list(df)
# check for NaNs after transforming in all genre columns
df.shape<jupyter_output><empty_output><jupyter_text>### 9. Cleaning up 'Gross'
Standardize currency, hard-code 3-digit movies, convert string to int/float<jupyter_code>type(df['Gross'].iloc[0])
pd.set_option('display.max_row', 4000)
# look for values which have alphabetic characters in them => not in USD and has to be converted
df[df.Gross.str.contains(pat = "[a-zA-Z]")]
# remove extra whitespaces, commas:
df['Gross'] = df.Gross.apply(lambda x: re.sub("[,\s]", "", x))
df[df.Gross.str.contains(pat = "[a-zA-Z]")]
# add 7 trailing zeros for these 4:
# 6980546 INR 206 Bharat Ane Nenu
# 3142764 INR 130 Race Gurram
# 6734984 INR 157 Duvvada Jagannadham
# 6522546 INR 124 Spyder
gross_truncated = ["INR206", "INR130","INR157","INR124"]
df['Gross'] = df.Gross.apply(lambda x: x + "0000000" if x in gross_truncated else x)
# 1753865 Giochi d'estate - Gross not available on IMDB or elsewhere, drop the row
df.drop(1753865, inplace = True)
# for American Satan - VND 74 cumulative worldwide gross - change to USD $226,232
# https://www.the-numbers.com/movie/American-Satan#tab=international
df.at[5451690, 'Gross'] = "226232"
# Raazi - incorrectly entered as 2070 crores gross on IMDB, is actually ~207 crores
df.at[7098658, 'Gross'] = "2070000000"
from currency_converter import CurrencyConverter
c = CurrencyConverter()
def convert_currency(x):
if re.search('[a-zA-Z£]', x) == None:
return float(x)
split_gross = re.split('(\d+)',x)
# GBP
if(split_gross[0] == "£"):
return (c.convert(float(split_gross[1]), 'GBP','USD'))
# NPR isn't supported CurrencyConverter - hard code
if(split_gross[0] == "NPR"):
return (float(split_gross[1])*0.0090)
return (c.convert(float(split_gross[1]), split_gross[0],'USD'))
df['Gross'] = df['Gross'].apply(lambda x : convert_currency(x))
list(df)
# check NaN
df['Gross'].isnull().values.any()
df['Gross'].describe()
type(df['Release date '].iloc[0])
#### Adjust gross for inflation based on the year - need inputs from Karthik for this:
def adjust_for_inflation(gross, release_date):
if release_date.year == 2010:
return(gross*1.152)
elif release_date.year == 2011:
return(gross*1.124)
elif release_date.year == 2012:
return(gross*1.101)
elif release_date.year == 2013:
return(gross*1.087)
elif release_date.year == 2014:
return(gross*1.086)
elif release_date.year == 2015:
return(gross*1.068)
elif release_date.year == 2016:
return(gross*1.053)
elif release_date.year == 2017:
return(gross*1.032)
# 2018 => just return x itself
return gross
df['Gross'] = df.apply(lambda x: adjust_for_inflation(x['Gross'], x['Release date ']), axis=1)
df['Gross'].describe()<jupyter_output><empty_output><jupyter_text>### If dividing into equal revenue ranges (instead of quintiles)<jupyter_code># def find_revenue_range(x):
# if 0 <= x <= 588111000:
# return 0
# elif 588111001 <= x <= 1176222000:
# return 1
# elif 1176222000 <= x <= 1764333003:
# return 2
# elif 1764333003 <= x <= 2352444004:
# return 3
# else:
# return 4
# df['gross_equal_range'] = df['Gross'].apply(lambda x: find_revenue_range(x))
# df[df['gross_equal_range'] == 3]<jupyter_output><empty_output><jupyter_text>## Categorize movies by gross revenue quintile
Split movies into 5 groups by revenue, and add (one-hot?) columns for classification.<jupyter_code>list(df.columns.values)
df.head(5)<jupyter_output><empty_output><jupyter_text>#### Divide into quintiles based on gross revenue
This divides into 5 balanced classes.
*** Dividing into 5 based on manually selected ranges results in a very high accuracy ~97%, because it is highly imbalanced - even easiest prediction of majority class can result in this accuracy. **<jupyter_code>ret_value = pd.qcut(df['Gross'], 5, labels=["very low", "low", "medium", "high", "very high"], retbins = True)<jupyter_output><empty_output><jupyter_text>#### Check bucket values<jupyter_code>df['gross_category'] = ret_value[0]
ret_value[1]
# very low ends at 4.869325e+04,
#low ends at 3.891683e+05, medium ends at 5.904366e+06,
# high ends at 6.748606e+07, very high ends at 2.208863e+09
df.groupby('gross_category').size()
df_sorted = df.sort_values(['Gross'])<jupyter_output><empty_output><jupyter_text>#### This prints the whole dataframe (all ~3k rows)! <jupyter_code>df_sorted<jupyter_output><empty_output><jupyter_text>## Basic Classification Model - Logistic Regression<jupyter_code>df_cleaned = df.copy()
df_cleaned.head(5)<jupyter_output><empty_output><jupyter_text>### Dealing with categorical features
Inspect non-numeric columns:
* Country object -- 61 unique - categorize as top 5 vs. others
* Language object -- 77 unique - categorize as top 5 vs. others
* Production_House object -- ~2000+ unique - categorize as top 5 vs. others
* motion_picture_rating object -- only 5 groups
* Name object -- drop, too unique, unless using to derive a text-based feature
* release_date object -- drop, can be used to extract weekend/not later <jupyter_code>df_cleaned.dtypes<jupyter_output><empty_output><jupyter_text>#### Check production house split<jupyter_code>df_cleaned['Production_House'].dtypes
df_cleaned['Production_House'].head(5)
# could do top 5 vs others
df_cleaned.groupby('Production_House').size().sort_values(ascending = False).head(20)
top_production = list(df_cleaned.groupby('Production_House').size().sort_values(ascending = False).head(5).index)
df_cleaned['Production_House'] = df_cleaned.Production_House.apply(lambda x: x if x in top_production
else "Other")
df_cleaned.groupby('Production_House').size().sort_values(ascending = False)<jupyter_output><empty_output><jupyter_text>#### Check language split<jupyter_code>df_cleaned.groupby('Language').size().sort_values(ascending = False).head(20) # could do English, French, Hindi, Spanish, Mandarin vs. others
top_language = list(df_cleaned.groupby('Language').size().sort_values(ascending = False).head(5).index)
df_cleaned['Language'] = df_cleaned.Language.apply(lambda x: x if x in top_language
else "Other")
df_cleaned.groupby('Language').size().sort_values(ascending = False)<jupyter_output><empty_output><jupyter_text>#### Check country split<jupyter_code>df_cleaned.groupby('Country').size().sort_values(ascending = False).head(20) # could do USA, UK, France, India, Canada, China vs. others
top_countries = list(df_cleaned.groupby('Country').size().sort_values(ascending = False).head(5).index)
df_cleaned['Country'] = df_cleaned.Country.apply(lambda x: x if x in top_countries
else "Other")
df_cleaned.groupby('Country').size().sort_values(ascending = False)<jupyter_output><empty_output><jupyter_text>### One-Hot Encoding
For categorical features, and the gross_category label.<jupyter_code>X = df_cleaned.drop(['gross_category', 'Gross', 'release_date', 'Name', 'Release date '], axis=1)
# drop is NOT in-place by default, doesn't affect original DF
y = df_cleaned['gross_category'].copy()
X.dtypes
X.shape
#categorical_cols = ["motion_picture_rating", "Country", "Language", "Production_House"]
X_dummies = pd.get_dummies(X)
X_dummies.shape # 40 -4 + 18 + 5 = 59
X_dummies.head(5)
le = preprocessing.LabelEncoder()
le.fit(y)
list(le.classes_)
y_encoded = le.transform(y) <jupyter_output><empty_output><jupyter_text>### Split data - train, test<jupyter_code>X_train, X_test, y_train, y_test = train_test_split(X_dummies, y_encoded, random_state=1)
X_train.shape
X_test.shape
imputer = SimpleImputer()
scaler = StandardScaler()
lr = LogisticRegression(multi_class = "multinomial", solver = 'newton-cg', max_iter = 5000)
pipe = Pipeline([('imputer', imputer),
('scaler', scaler),
('lr', lr)])
pipe.fit(X_train, y_train)
pipe.named_steps.keys()
# for any continuous parameters, specify a distribution instead of a list of options
param_grid = {}
param_grid['imputer__strategy'] = ["mean", "median"]
param_grid['scaler__with_mean'] = [True, False]
param_grid['scaler__with_std'] = [True, False]
param_grid['lr__C'] = [1, 0.75, 0.5] # smaller specifies stronger regularization
param_grid
# additional parameters are n_iter (number of searches) and random_state
rand = RandomizedSearchCV(pipe, param_grid, cv=5, scoring='accuracy', n_iter=5, random_state=1)
# time the randomized search
%time rand.fit(X_train, y_train)
print(rand.best_score_) # hold-out set
print(rand.best_params_)
# print the best model found by RandomizedSearchCV
print(rand.best_estimator_)
# predictions on train and test data with best estimator
y_trainpred0 = rand.predict(X_train)
y_pred0 = rand.predict(X_test)
print(metrics.accuracy_score(y_test, y_pred0))
print(metrics.f1_score(y_test, y_pred0, average='macro'))
# train set
print(metrics.accuracy_score(y_train, y_trainpred0))
print(metrics.f1_score(y_train, y_trainpred0, average='macro'))
# interpretation
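# A possible way to label the coefficients with class and feature names
# (le.classes_ and X_dummies.columns are defined earlier in this notebook)
pd.DataFrame(lr.coef_, index=le.classes_, columns=X_dummies.columns)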
lr.coef_
lr.coef_.shape<jupyter_output><empty_output><jupyter_text>### k-NN Classification<jupyter_code>knn = KNeighborsClassifier()
pipe_knn = Pipeline([('imputer', imputer),
('scaler', scaler),
('knn', knn)])
# pipeline steps are automatically assigned names by make_pipeline
param_grid = {}
param_grid['imputer__strategy'] = ["mean", "median"]
param_grid['scaler__with_mean'] = [True, False]
param_grid['scaler__with_std'] = [True, False]
param_grid['knn__n_neighbors'] = [50, 100, 150, 200, 250]
param_grid['knn__weights'] = ['uniform', 'distance']
param_grid['knn__algorithm'] = ['auto', 'ball_tree', 'kd_tree', 'brute']
param_grid
rand_knn = RandomizedSearchCV(pipe_knn, param_grid, cv=5, scoring='accuracy', n_iter=5, random_state=1)
# time the randomized search
%time rand_knn.fit(X_train, y_train)
print(rand_knn.best_score_) # hold-out set
print(rand_knn.best_params_)
# print the best model found by RandomizedSearchCV
print(rand_knn.best_estimator_)
# predictions on train and test data with best estimator
y_trainpred_knn = rand_knn.predict(X_train)
y_pred_knn = rand_knn.predict(X_test)
print(metrics.accuracy_score(y_test, y_pred_knn))
print(metrics.f1_score(y_test, y_pred_knn, average='macro'))
# train set
print(metrics.accuracy_score(y_train, y_trainpred_knn))
print(metrics.f1_score(y_train, y_trainpred_knn, average='macro'))<jupyter_output><empty_output><jupyter_text>### Random Forest Classification<jupyter_code>rf = RandomForestClassifier(random_state=0)
pipe_rf = Pipeline([('imputer', imputer),
('scaler', scaler),
('rf', rf)])
param_grid = {}
param_grid['imputer__strategy'] = ["mean", "median"]
param_grid['scaler__with_mean'] = [True, False]
param_grid['scaler__with_std'] = [True, False]
param_grid['rf__n_estimators'] = [50, 100, 150, 200, 300, 500, 600] # how many trees to use in the forest
param_grid['rf__max_depth'] = [3, 5, 7, 9] # max depth
param_grid['rf__criterion'] = ['gini', 'entropy']
param_grid['rf__max_features'] = ['auto', 'log2'] # like mtry
param_grid['rf__oob_score'] = [True, False]
param_grid
rand_rf = RandomizedSearchCV(pipe_rf, param_grid, cv=5, scoring='accuracy', n_iter=5, random_state=1)
# time the randomized search
%time rand_rf.fit(X_train, y_train)
print(rand_rf.best_score_) # hold-out set
print(rand_rf.best_params_)
# print the best model found by RandomizedSearchCV
print(rand_rf.best_estimator_)
# predictions on train and test data with best estimator
y_trainpred_rf = rand_rf.predict(X_train)
y_pred_rf = rand_rf.predict(X_test)
print(metrics.accuracy_score(y_test, y_pred_rf))
print(metrics.f1_score(y_test, y_pred_rf, average='macro'))
# train set
print(metrics.accuracy_score(y_train, y_trainpred_rf))
print(metrics.f1_score(y_train, y_trainpred_rf, average='macro'))
rand_rf.best_estimator_.named_steps['rf'].feature_importances_ # read importances from the forest fitted inside the best pipeline<jupyter_output><empty_output><jupyter_text>### Naive Bayes Classification<jupyter_code>nb = MultinomialNB()
pipe_nb = Pipeline([('imputer', imputer),
                 ('scaler', scaler),
                 ('nb', nb)])
param_grid = {}
param_grid['imputer__strategy'] = ["mean", "median"]
# MultinomialNB needs non-negative inputs, so leave mean-centering off
param_grid['scaler__with_mean'] = [False]
param_grid['scaler__with_std'] = [True, False]
param_grid['nb__alpha'] = [0.01, 0.1, 0.5, 1.0] # additive smoothing (replaces the copied random-forest grid)
param_grid['nb__fit_prior'] = [True, False]
param_grid
rand_nb = RandomizedSearchCV(pipe_nb, param_grid, cv=5, scoring='accuracy', n_iter=5, random_state=1)
# time the randomized search
%time rand_nb.fit(X_train, y_train)
print(rand_nb.best_score_) # hold-out set
print(rand_nb.best_params_)
# print the best model found by RandomizedSearchCV
print(rand_nb.best_estimator_)
# predictions on train and test data with best estimator
y_trainpred_nb = rand_nb.predict(X_train)
y_pred_nb = rand_nb.predict(X_test)
print(metrics.accuracy_score(y_test, y_pred_nb))
print(metrics.f1_score(y_test, y_pred_nb, average='macro'))
# train set
print(metrics.accuracy_score(y_train, y_trainpred_nb))
print(metrics.f1_score(y_train, y_trainpred_nb, average='macro'))<jupyter_output><empty_output><jupyter_text>### MLP Classification<jupyter_code>from sklearn.neural_network import MLPClassifier
mlp = MLPClassifier(max_iter=500, random_state=0) # a simple multi-layer perceptron baseline
pipe_mlp = Pipeline([('imputer', imputer),
                 ('scaler', scaler),
                 ('mlp', mlp)])
param_grid = {}
param_grid['imputer__strategy'] = ["mean", "median"]
param_grid['scaler__with_mean'] = [True, False]
param_grid['scaler__with_std'] = [True, False]
param_grid['mlp__hidden_layer_sizes'] = [(50,), (100,), (100, 50)] # network width/depth
param_grid['mlp__alpha'] = [0.0001, 0.001, 0.01] # L2 penalty
param_grid['mlp__activation'] = ['relu', 'tanh']
param_grid
rand_mlp = RandomizedSearchCV(pipe_mlp, param_grid, cv=5, scoring='accuracy', n_iter=5, random_state=1)
# time the randomized search
%time rand_mlp.fit(X_train, y_train)
print(rand_mlp.best_score_) # hold-out set
print(rand_mlp.best_params_)
# print the best model found by RandomizedSearchCV
print(rand_mlp.best_estimator_)
# predictions on train and test data with best estimator
y_trainpred_mlp = rand_mlp.predict(X_train)
y_pred_mlp = rand_mlp.predict(X_test)
print(metrics.accuracy_score(y_test, y_pred_mlp))
print(metrics.f1_score(y_test, y_pred_mlp, average='macro'))
# train set
print(metrics.accuracy_score(y_train, y_trainpred_mlp))
print(metrics.f1_score(y_train, y_trainpred_mlp, average='macro'))<jupyter_output><empty_output><jupyter_text>## Further evaluation of best-performing model<jupyter_code># classification report
from sklearn.metrics import classification_report
print(classification_report(y_test, y_pred_rf, target_names=["very low", "low", "medium", "high", "very high"]))
# confusion matrix
# from https://scikit-learn.org/stable/auto_examples/model_selection/plot_confusion_matrix.html#sphx-glr-auto-examples-model-selection-plot-confusion-matrix-py
import numpy as np
import matplotlib.pyplot as plt
from sklearn.metrics import confusion_matrix
from sklearn.utils.multiclass import unique_labels
def plot_confusion_matrix(y_true, y_pred, classes,
normalize=False,
title=None,
cmap=plt.cm.Blues):
"""
This function prints and plots the confusion matrix.
Normalization can be applied by setting `normalize=True`.
"""
if not title:
if normalize:
title = 'Normalized confusion matrix'
else:
title = 'Confusion matrix, without normalization'
# Compute confusion matrix
cm = confusion_matrix(y_true, y_pred)
# Only use the labels that appear in the data
classes = ["very low", "low", "medium", "high", "very high"]
fig, ax = plt.subplots()
im = ax.imshow(cm, interpolation='nearest', cmap=cmap)
ax.figure.colorbar(im, ax=ax)
# We want to show all ticks...
ax.set(xticks=np.arange(cm.shape[1]),
yticks=np.arange(cm.shape[0]),
# ... and label them with the respective list entries
xticklabels=classes, yticklabels=classes,
title=title,
ylabel='True label',
xlabel='Predicted label')
# Rotate the tick labels and set their alignment.
plt.setp(ax.get_xticklabels(), rotation=45, ha="right",
rotation_mode="anchor")
# Loop over data dimensions and create text annotations.
fmt = '.2f' if normalize else 'd'
thresh = cm.max() / 2.
for i in range(cm.shape[0]):
for j in range(cm.shape[1]):
ax.text(j, i, format(cm[i, j], fmt),
ha="center", va="center",
color="white" if cm[i, j] > thresh else "black")
fig.tight_layout()
return ax
np.set_printoptions(precision=2)
class_names = ["very low", "low", "medium", "high", "very high"]
# Plot non-normalized confusion matrix
plot_confusion_matrix(y_test, y_pred_rf, classes=class_names,
title='Confusion matrix, without normalization')
# Plot normalized confusion matrix
plot_confusion_matrix(y_test, y_pred_rf, classes=class_names, normalize=True,
title='Normalized confusion matrix')
plt.show()<jupyter_output><empty_output>
|
no_license
|
/BT5153 Project.ipynb
|
AdithyaSelvaganapathy/BT5153_Box_Office_Revenue
| 32 |
<jupyter_start><jupyter_text>**February 2019**
We decided to look at how many public employees in the municipalities and regions are approaching retirement age. The statistics come from SKL and can be found [here for the municipal data](https://skl.se/ekonomijuridikstatistik/statistik/personalstatistik/personalenidiagramochsiffror/tabellerkommunalpersonal2017.11575.html) and [here for the region data](https://skl.se/ekonomijuridikstatistik/statistik/personalstatistik/personalenidiagramochsiffror/tabellerlandstingsanstalldpersonal2017.11576.html). To get the historical data I contacted Clara Arrhenius, the statistician responsible for information at SKL, who sent the files.
---
**The results of this notebook were used [in this article](https://www.dagenssamhalle.se/nyhet/var-fjarde-anstalld-lamnar-kommunerna-26613), which was published in issue no. 10 of the paper on 14 March 2019**
---### Cleaning**Notes on data cleaning**
OK, dreadful government data in Excel form once again :)
Three important details:
1. For some reason the age groups 50-54 and 55-59 are missing from the county-level data for the years 2008-2010
2. Gotland only appears in the county-level data from 2017 onwards, so I drop it
3. Only percentage shares are available for the age groups, so the counts have to be back-calculated; rounding may therefore be off by the odd person.<jupyter_code>files = [x for x in os.listdir('data') if 'Tabell' in x]
LP_files = [x for x in files if "LP" in x]
files = [x for x in files if not "LP" in x]
dfr = pd.DataFrame()
for file in LP_files:
år = file[-9:-5]
tmp = pd.read_excel(f'data/{file}',header=4)
tmp.columns = ['reg', 'antal', 'andel_kvinnor', 'Unnamed: 3', 'kvinnor', 'män', 'Unnamed: 6', '-29', '30-39', '40-49', '50-54', '55-59', '60-']
tmp['år'] = år
tmp = tmp.drop(['Unnamed: 3','Unnamed: 6'],axis=1).dropna()
# print(år,tmp.shape)
dfr = pd.concat([dfr,tmp])
dfr = dfr[['reg','år','antal','55-59','60-']]
dfr.loc[dfr['reg']=='Summa','reg'] = 'Totalt'<jupyter_output><empty_output><jupyter_text>Bort med Gotland 2017:<jupyter_code>dfr = dfr[dfr['reg']!='Gotland *']
dfr['reg'] = dfr['reg'].str.strip()
dfr = dfr.set_index('reg')
dfk = pd.DataFrame()
for file in files:
if file[0] == '~':
continue
print(file)
tmp = pd.read_excel(f'data/{file}').dropna()
tmp.columns = ['reg', 'antal', '-29', '30-39', '40-49', '50-54', '55-59', '60-', ' ', 'medelålder_totalt', 'medelålder_kvinnor', 'medelålder_män']
tmp['år'] = file[-9:-5]
dfk = pd.concat([dfk,tmp])
dfk = dfk[dfk['reg']!='Kommun']
dfk.loc[dfk['reg'].str.contains('Malung'), 'reg'] = 'Malung-Sälen'
dfk = dfk[['reg','år','antal','55-59','60-']]
bort = ['Kommunalförbund ',
'Riket',
'Kommungrupper',
'Storstäder',
'Förortskommuner till storstäderna',
'Större städer',
'Förortskommuner till större städer',
'Pendlingskommuner',
'Turism- och besöksnäringskommuner',
'Varuproducerande kommuner',
'Glesbygdkommuner',
'Kommuner i tätbefolkad region',
'Kommuner i glesbefolkad region',
'Pendlingskommun nära storstad',
'Större stad',
'Pendlingskommun nära större stad',
'Lågpendlingskommun nära större stad',
'Mindre stad/tätort',
'Pendlingskommun nära mindre tätort',
'Landsbygdskommun',
'Landsbygdskommun med besöksnäring',
'Förortskommuner ',
'Större städer ',
'Storstäder ',
'Pendlingskommuner ',
'Glesbygdskommuner ',
'Varuproducerande kommuner ',
'Övriga kommuner, över 25 000 inv. ',
'Övriga kommuner, 12 500-25 000 inv. ',
'Övriga kommuner, mindre än 12 500 inv. ',
'Alla kommuner']
dfk = dfk[~dfk['reg'].isin(bort)]
dfk['reg'] = dfk['reg'].str.strip()
dfk = dfk[~dfk['reg'].str.contains(' län')]
dfk = dfk.set_index('reg')<jupyter_output><empty_output><jupyter_text>Kontroll av antalet rader och kolumner per år:<jupyter_code>for year in dfk.år.unique():
print(f'{year}: ', dfk[dfk['år']==year].shape)<jupyter_output>2016: (290, 4)
2017: (290, 4)
2010: (290, 4)
2011: (290, 4)
2008: (290, 4)
2012: (290, 4)
2013: (290, 4)
2009: (290, 4)
2014: (290, 4)
2015: (290, 4)
<jupyter_text>Gotland ej med i datatAll kommundata nu i samma format....PUST<jupyter_code>for year in dfr.år.unique():
print(f'{year}: ', dfr[dfr['år']==year].shape)<jupyter_output>2012: (21, 4)
2013: (21, 4)
2014: (21, 4)
2015: (21, 4)
2016: (21, 4)
2017: (21, 4)
2011: (21, 4)
<jupyter_text>### Beräkning<jupyter_code>for col in dfk.columns:
dfk[col] = pd.to_numeric(dfk[col])
for col in dfr.columns:
dfr[col] = pd.to_numeric(dfr[col])
def calc(df):
df = df.reset_index()[df.reset_index()['reg']!='Totalt'].set_index('reg')
df['antal_55_59'] = (df['antal'] * (df['55-59']/100)).astype('int')
df['antal_60-'] = (df['antal'] * (df['60-']/100)).astype('int')
df['antal_55-'] = df['antal_55_59'] + df['antal_60-']
df['andel_55-'] = df['55-59']+df['60-']
return df[['år','antal','antal_55_59','antal_60-','antal_55-','andel_55-']]
df = calc(dfr).reset_index()
län = df[df['reg']!='Totalt'][['år','reg','antal_55-','antal']].groupby('år').sum()
län['andel_gamla'] = ((län['antal_55-']/län['antal'])*100).round(2)
df = calc(dfk).reset_index()
kom = df[['år','reg','antal_55-','antal']].groupby('år').sum()
kom['andel_gamla'] = ((kom['antal_55-']/kom['antal'])*100).round(2)
# The municipalities' figures per year:
kom
kom.andel_gamla.plot(figsize=(9,7))
plt.ylim(bottom=20,top=30)<jupyter_output><empty_output><jupyter_text>So, a slight decline in the share of older employees in Sweden's municipalities over the last 10 years!<jupyter_code># Difference between 2017 and 2008 in the number of employees aged 55+ in the municipalities
kom.loc[kom.index==2017,'antal_55-'].values[0] - kom.loc[kom.index==2008,'antal_55-'].values[0]
# Total number of employees aged 55+ in municipalities and county councils in 2017
kom.loc[kom.index==2017,'antal_55-'].values[0]+län.loc[län.index==2017,'antal_55-'].values[0]
# Difference in percentage points in the share of older employees in the municipalities, 2017 vs 2008
(kom.loc[kom.index==2017,'andel_gamla'].values[0] - kom.loc[kom.index==2008,'andel_gamla'].values[0]).round(1)
# Share of employees aged 55+ nationally in 2017
(((kom.loc[kom.index==2017,'antal_55-'].values[0]+län.loc[län.index==2017,'antal_55-'].values[0])\
/(kom.loc[kom.index==2017,'antal'].values[0]+län.loc[län.index==2017,'antal'].values[0]))*100).round(1)
# Share of employees aged 55+ in the municipalities in 2017
((kom.loc[kom.index==2017,'antal_55-'].values[0]/kom.loc[kom.index==2017,'antal'].values[0])*100).round(1)
def results(df,year=2017):
df = df.reset_index()[df.reset_index()['reg']!='Totalt'].set_index('reg')
start = df.år.min()
tmp = calc(df).reset_index()[['reg','år', 'andel_55-']]
resultat = tmp.pivot_table(index='reg',columns='år')
resultat.columns = resultat.columns.droplevel()
resultat['diff'] = resultat[year]-resultat[start]
resultat = resultat.reset_index()[['reg',year,'diff']]\
.sort_values(year,ascending=False).reset_index(drop=True)
tmp = calc(df).reset_index()[['reg','år','antal_55-']]
tmp = tmp.loc[tmp['år']==year,['reg','antal_55-']]
resultat = resultat.merge(tmp,on='reg',how='left')
resultat = resultat[['reg','antal_55-',year,'diff']]
resultat.columns = ['Region/kommun','Antal anställda 55 år+',
'Andel anställda 55 år+',f"Förändring {start}-2017, %-enheter"]
return resultat
results(dfk).head()
if not os.path.isdir('res'):
os.makedirs('res')
# Save to result files
results(dfk).to_excel('res/kommuner.xlsx',index=False)
results(dfr).to_excel('res/regioner.xlsx',index=False)
df = results(dfk)
# Number of municipalities where the share of employees aged 55+ has increased since 2008
df[df['Förändring 2008-2017, %-enheter']>0].shape
# Number of municipalities with a larger share of employees aged 55+ in 2017 than the average for Sweden's municipalities
df[df['Andel anställda 55 år+']>(25.9)].shape
dåliga = df[df['Förändring 2008-2017, %-enheter']>0]['Region/kommun'].tolist()
df = calc(dfk[dfk.index.isin(dåliga)]).reset_index()
df1 = df[['reg','år','antal','antal_55-']].groupby(['år']).sum()
tmp = df[df['reg']!='Stockholm'][['reg','år','antal','antal_55-']].groupby(['år']).sum()
sthlm = df[df['reg']=='Stockholm'][['reg','år','antal','antal_55-']].groupby(['år']).sum()
df1['andel']=((df1['antal_55-']/df1['antal'])*100).round(1)
tmp['andel']=((tmp['antal_55-']/tmp['antal'])*100).round(1)
sthlm['andel']=((sthlm['antal_55-']/sthlm['antal'])*100).round(1)
fig, ax = plt.subplots(figsize=(9,7))
df1.rename(columns={'andel':'Dålig trend-kommuner'})['Dålig trend-kommuner'].plot(ax=ax)
kom.rename(columns={'andel_gamla':'Samtliga kommuner'})['Samtliga kommuner'].plot(ax=ax)
sthlm.rename(columns={'andel':'Stockholm'})['Stockholm'].plot(ax=ax)
plt.ylim(bottom=25,top=31)
plt.grid(True,axis='y')
fig.legend(loc=(0.1,0.11))
plt.savefig('plot.png')<jupyter_output><empty_output><jupyter_text>The trend curve above shows that the share of older staff in the municipal workforce rose in 2009 and peaked in 2012, but has fallen since then. In other words, the overall picture is not getting worse. But individual municipalities go against this overall trend: in 66 municipalities the share of older employees is increasing. In Övertorneå, for example, the share of employees aged 55+ rose by 10 percentage points over the last ten years, and that was from an already high level. We therefore chose to focus on the challenge facing these municipalities.<jupyter_code>kom.reset_index().to_excel('res/riket.xlsx',index=False)<jupyter_output><empty_output>
|
no_license
|
/notebook.ipynb
|
johan-ekman/ds_off_anstallda_55plus
| 7 |
<jupyter_start><jupyter_text>This header imports all of the modules and libraries you will need in this exercise:<jupyter_code>""" This file contains code for use with "Think Stats", by Allen B. Downey,
available from greenteapress.com
Copyright 2014 Allen B. Downey
License: GNU GPLv3 http://www.gnu.org/licenses/gpl.html
"""
from __future__ import print_function
import numpy as np
import sys
import nsfg
import thinkstats2<jupyter_output>//anaconda/lib/python2.7/site-packages/matplotlib/font_manager.py:273: UserWarning: Matplotlib is building the font cache using fc-list. This may take a moment.
warnings.warn('Matplotlib is building the font cache using fc-list. This may take a moment.')
<jupyter_text>Write a function ReadFemResp that reads the NSFG respondent data 2002FemResp and returns a df<jupyter_code>def ReadFemResp(dct_file='data/2002FemResp.dct',
dat_file='data/2002FemResp.dat.gz'):
"""Reads the NSFG 2002FemResp data.
dct_file: string file name
dat_file: string file name
returns: DataFrame
"""
dct = thinkstats2.ReadStataDct(dct_file)
df = dct.ReadFixedWidth(dat_file, compression='gzip')
return df
resp = ReadFemResp()
resp.head()<jupyter_output><empty_output><jupyter_text>The variable *pregnum* is a recode that indicates how many times each respondent has been pregnant. Print the value counts for this variable and compare them to the published results in the [codebook](http://www.icpsr.umich.edu/nsfg6/Controller?displayPage=labelDetails&fileCode=FEM§ion=R&subSec=7869&srtLabel=606835)<jupyter_code>assert(len(resp) == 7643)
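# The markdown above asks us to print the value counts for comparison with the codebook
print(resp.pregnum.value_counts().sort_index())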
pregnum_counts = resp.pregnum.value_counts()
assert(pregnum_counts[0] == 2610)
assert(pregnum_counts[1] == 1267)
assert(pregnum_counts[2] == 1432)
assert(pregnum_counts[3] == 1110)
assert(pregnum_counts[4] == 611)
assert(pregnum_counts[5] == 305)
assert(pregnum_counts[6] == 150)
assert(sum(pregnum_counts[7:]) == 158)<jupyter_output><empty_output><jupyter_text>Cross-validate the respondent and pregnancy files by comparing *pregnum* for each respondent with the number of records in the pregnancy file. You can use nsfg.MakePregMap to make a dictionary that maps from each caseid to a list of indices into the pregnancy DataFrame.<jupyter_code>def ValidatePregnum(resp):
# read the pregnancy DataFrame
preg = nsfg.ReadFemPreg()
# make the map from caseid to list of pregnancy indices
preg_map = nsfg.MakePregMap(preg)
# iterate through the respondent pregnum series
for index, pregnum in resp.pregnum.iteritems():
caseid = resp.caseid[index]
indices = preg_map[caseid]
# check that pregnum from the respondent file equals
# the number of records in the pregnancy file
if len(indices) != pregnum:
print(caseid, len(indices), pregnum)
return False
return True
assert(ValidatePregnum(resp) == True)
# Or:
ValidatePregnum(resp)<jupyter_output><empty_output>
|
no_license
|
/chap01ex02soln.ipynb
|
HechmiMell/thinkStats2-1
| 4 |
<jupyter_start><jupyter_text>## Reddit comments
### Generate PCA based "toxicity" score
In this notebook, I analyze a set of features associated with a corpus of comments sampled from the Reddit subreddit r/politics.
The goal is to create a standardized score based on comment features that represents whether the comment was positively or negatively received by other users. A highly negative comment score could be deemed "toxic".
The comments were downloaded from a target Reddit using PRAW and a custom script. Each comment has the following associated features:
- comment ID#
- subreddit name
- post ID#
- parent ID#
- comment timestamp
- comment age since post time (secs)
- comment age since now (secs)
- user ID#
- user name
- user account created date
- user account comment karma
- user account link karma
- #replies to the comment
- contoversial flag state
- comment vote score
- comment text (converted to ascii)
### Setup notebook and load reddit comment data<jupyter_code># remove warnings
import warnings
warnings.filterwarnings('ignore')
# ---
%matplotlib inline
from matplotlib import pyplot as plt
import matplotlib
matplotlib.style.use('ggplot')
import pandas as pd
pd.options.display.max_columns = 100
import numpy as np
import datetime
import time
import csv
# subdir containing collected comment data as csv files
srcdir = './data_collected/'
# select a file containing collected comments
csvfilename = 'comment_sample_politics190220_002433'
df = pd.read_csv(srcdir+csvfilename+'.csv')
# remove any deleted or removed comments
df = df[(df.text!='[deleted]') & (df.text!='[removed]')]
print(df.shape)<jupyter_output>(83282, 16)
<jupyter_text>### Create a few features for visualization, select analysis features
For a start, I'm going to look at several features that might be correlated with user approval or disapproval of a comment.
- user account comment karma
- user account link karma
- #replies to the comment
- #days difference between post date and comment date
- comment vote score
#### Note that comment vote score is going to be the most useful feature, since it's a direct measurement of reader approval/disapproval. Therefore, I'll be comparing the other features to it: if they correlate with vote score, then they will be included in the PCA score calculation.
<jupyter_code>import seaborn as sns
# get sign of vote score
df['score_sign'] = (df.score<0).map({True:'negative',False:'positive'})
# convert seconds time difference between post date and comment date into days
df['u_days'] = ((df.time-df.u_created)/86400)
# specify which feature columns to analyze
cols2compare = ['u_comment_karma', 'u_link_karma', 'num_replies',
'u_days', 'score']
<jupyter_output><empty_output><jupyter_text>### Feature distributions
Except for u_days, all of these features have extremely peaky distributions. I'd like to try and normalize them a bit before throwing them into the PCA.<jupyter_code>plt.figure(figsize=(15, 8))
for plotnum, colname in zip(range(len(cols2compare)), cols2compare):
std3 = 3*df[colname].std()
plt.tight_layout()
plt.subplot(2,3,plotnum+1)
plt.hist(df[colname], range=(-std3,std3))
plt.title(colname)
plt.tight_layout(rect=[0, 0, 1, 0.90]);
plt.suptitle('Original feature histograms, linear scale', fontsize=20);
<jupyter_output><empty_output><jupyter_text>### Feature distributions on log y scale
Things look a little better on log y scale, so I'll log transform the features.<jupyter_code>plt.figure(figsize=(15, 8))
for plotnum, colname in zip(range(len(cols2compare)), cols2compare):
std3 = 3*df[colname].std()
plt.tight_layout()
plt.subplot(2,3,plotnum+1)
plt.hist(df[colname], range=(-std3,std3), log=True)
plt.title(colname)
plt.tight_layout(rect=[0, 0, 1, 0.90]);
plt.suptitle('Original feature histograms, log y scale', fontsize=20);
<jupyter_output><empty_output><jupyter_text>## Pairplot features
I'm using vote score as a primary feature, and comparing the other features against it. If a given feature is correlated with score (and thus might be useful for labelling comments toxic vs not), then it should have a non-zero regression slope.
Note that I'm log transforming the features and I've separately plotted samples by whether their score is less than 1 (blue) or greater than 0 (red).
Considering vote score vs the other features:
- u_comment_karma: slight positive regression slope, maybe useful.
- u_link_karma: no slope - not useful.
- num_replies: definitely a slope, but slopes have different signs for negative vs positive vote score samples.
- u_days: no slope - not useful.
So, based on these results, I'm going to drop u_link_karma and u_days.
num_replies vs vote score is odd - I want to do something with that to make the regression slope for pos and neg score samples more linear.
<jupyter_code># set pairplot format
pairplot_kws = {'line_kws':{'color':'green','alpha': 0.2}, 'scatter_kws': {"s": 3, 'alpha': 0.05}}
# create a new df for all the log transformed features
df_log = df.copy()
# comment karma can be negative, so compute logs differently
df_log.u_comment_karma[df_log.u_comment_karma>0] = np.log(df_log.u_comment_karma[df_log.u_comment_karma>0])
df_log.u_comment_karma[df_log.u_comment_karma<0] = -np.log(df_log.u_comment_karma[df_log.u_comment_karma<0].abs())
df_log.u_link_karma[df_log.u_link_karma>0] = np.log(df_log.u_link_karma[df_log.u_link_karma>0])
df_log.num_replies[df_log.num_replies>0] = np.log(df_log.num_replies[df_log.num_replies>0])
# comment score can be negative, so compute logs differently
df_log.score[df_log.score>0] = np.log(df_log.score[df_log.score>0])
df_log.score[df_log.score<0] = -np.log(df_log.score[df_log.score<0].abs())
# get rid of comments with 0 scores
df_log = df_log[df_log.score!=0]
# plt.figure(figsize=(10, 10));
pp = sns.pairplot(df_log, vars=cols2compare, hue='score_sign', kind='reg', plot_kws=pairplot_kws);
pp.fig.tight_layout(rect=[0, 0, .90, 0.92]);
pp.fig.suptitle('Comment features, log transformed\n\n', fontsize=24);
<jupyter_output><empty_output><jupyter_text>### Correlation matrix
I'll take a look at how these features correlate with each other. Since num_replies is being nonlinear at 0, I'll split the samples again between pos and neg vote scores.
Looking at correlations between vote score and other features:
- u_comment_karma: r=0.25 = keep
- u_link_karma: r=0.05 = drop.
- num_replies: r=-0.48(neg score), 0.63(pos score) = keep but need to fix
- u_days: r=0.09 = drop.
<jupyter_code># Generate a mask for the upper triangle
# mask = np.zeros_like(corr, dtype=np.bool)
mask = np.zeros([len(cols2compare),len(cols2compare)], dtype=np.bool)
mask[np.triu_indices_from(mask)] = True
# Generate a custom diverging colormap
cmap = sns.diverging_palette(220, 10, as_cmap=True)
plt.figure(figsize=(20, 10))
plt.subplot(1,2,1)
# Draw the heatmap with the mask and correct aspect ratio
sns.heatmap(df_log[df_log.score<1][cols2compare].corr(), mask=mask, cmap=cmap, vmax=.9, center=0, annot=True,
square=True, linewidths=.5, cbar_kws={"shrink": .5})
plt.title('Feature correlations, vote score < 0 ', fontsize=24);
plt.subplot(1,2,2)
# Draw the heatmap with the mask and correct aspect ratio
sns.heatmap(df_log[df_log.score>=1][cols2compare].corr(), mask=mask, cmap=cmap, vmax=.9, center=0, annot=True,
square=True, linewidths=.5, cbar_kws={"shrink": .5})
plt.title('Feature correlations, vote score > 1', fontsize=24);
<jupyter_output><empty_output><jupyter_text>### Fixing num_replies
It occurs to me that the behavior of num_replies vs vote score (an indication of approval/disapproval) makes sense psychologically: people tend to reply to comments that they either really like, or really hate. In fact this is the phenomenon that internet trolls exploit to get the reaction they seek from other users: post something nasty and outrage others into replying. So, really, I can view comment replies as likely either positive or negative, and I can make that assessment using the vote score: if a comment has a negative vote score, then replies can be assumed to be negative.
Result: setting num_replies for comments with scores < 0 to be negative makes the vote score vs num replies regression linear.
<jupyter_code>cols2compare = ['score', 'num_replies_fixed', 'u_comment_karma' ]
# make numreplies negative if vote score is negative
df_log['num_replies_fixed'] = df_log.num_replies
df_log.num_replies_fixed[df_log.score<0] = -df_log.num_replies_fixed[df_log.score<0]
pp = sns.pairplot(df_log, vars=cols2compare, hue='score_sign', kind='reg', plot_kws=pairplot_kws);
pp.fig.tight_layout(rect=[0, 0, .90, 0.92]);
pp.fig.suptitle('Selected comment features, num_replies fixed', fontsize=20);
<jupyter_output><empty_output><jupyter_text>### Correlation matrix for selected features and fixed num_replies
All of the features have positive correlations now across the range of values. <jupyter_code># Generate a mask for the upper triangle
mask = np.zeros([len(cols2compare),len(cols2compare)], dtype=bool)
mask[np.triu_indices_from(mask)] = True
# Generate a custom diverging colormap
cmap = sns.diverging_palette(220, 10, as_cmap=True)
plt.figure(figsize=(20, 10))
plt.subplot(1,2,1)
# Draw the heatmap with the mask and correct aspect ratio
sns.heatmap(df_log[cols2compare].corr(), mask=mask, cmap=cmap, vmax=.9, center=0, annot=True,
square=True, linewidths=.5, cbar_kws={"shrink": .5})
plt.title('Selected feature correlations, all samples ');
<jupyter_output><empty_output><jupyter_text>## PCA analysis
Since I'm only using three features, I'll keep just the first two PCA components.<jupyter_code>from sklearn.decomposition import PCA
cols2compare = ['score', 'num_replies_fixed', 'u_comment_karma' ]
pcascores = PCA(n_components=2).fit_transform(df_log[cols2compare])
df_log['pca_score1'] = pcascores[:,0]
df_log['pca_score2'] = pcascores[:,1]
<jupyter_output><empty_output><jupyter_text>### Compare PCA 1 & 2 with vote score
Looks like PCA 2 is more closely correlated with vote score, but not PCA 1. I'll use PCA 2 then, since I want the new score to be aligned with vote score.<jupyter_code>sp = sns.lmplot(data=df_log, x='score', y='pca_score1',
line_kws={'color':'green','alpha': 0.2}, scatter_kws={"s": 3, 'alpha': 0.05});
plt.tight_layout();
plt.xlabel('Log vote score')
plt.ylabel('PCA 1 score')
plt.title('Vote score vs PCA 1 score');
sp = sns.lmplot(data=df_log, x='score', y='pca_score2',
line_kws={'color':'green','alpha': 0.2}, scatter_kws={"s": 3, 'alpha': 0.05});
plt.tight_layout();
plt.xlabel('Log vote score')
plt.ylabel('PCA 2 score')
plt.title('Vote score vs PCA 2 score');
<jupyter_output><empty_output><jupyter_text>### Pairplot of selected variables vs PCA 1 and PCA 2
PCA 2 appears to be mostly influenced by vote score (good!), but also is nicely correlated with number of replies. Comment karma seems to be the strongest influence on PCA 1, so it doesn't contribute to the PCA 2 score.<jupyter_code>
cols2compare = ['u_comment_karma', 'num_replies_fixed', 'score',
'pca_score1', 'pca_score2']
pp = sns.pairplot(df_log, vars=cols2compare, hue='score_sign',
kind='reg', plot_kws=pairplot_kws);
pp.fig.tight_layout(rect=[0, 0, .90, 0.92]);
pp.fig.suptitle('Selected comment features, num_replies fixed',
fontsize=24);
<jupyter_output><empty_output><jupyter_text>### Correlate selected features against PCA 1 and PCA 2<jupyter_code># Generate a mask for the upper triangle
# mask = np.zeros_like(corr, dtype=np.bool)
mask = np.zeros([len(cols2compare),len(cols2compare)], dtype=bool)
mask[np.triu_indices_from(mask)] = True
# Generate a custom diverging colormap
cmap = sns.diverging_palette(220, 10, as_cmap=True)
plt.figure(figsize=(20, 10))
plt.subplot(1,2,1)
# Draw the heatmap with the mask and correct aspect ratio
sns.heatmap(df_log[cols2compare].corr(), mask=mask, cmap=cmap, vmax=.9,
center=0, annot=True,
square=True, linewidths=.5, cbar_kws={"shrink": .5})
plt.title('Selected feature correlations, all samples ');
<jupyter_output><empty_output><jupyter_text>## Create a new datafile with PCA score
- Create a new dataframe with text and PCA 2 score as our label
- standardize the score and range it between +/-5
- Split pos and neg scores and standardize each separately.
- Generate before vs after histograms
- Save the dataframe to CSV<jupyter_code>from sklearn.preprocessing import StandardScaler
# create output dataframe
df_labeled = df.copy() # copy original unmodified dataframe
# using PCA 2 as score
df_labeled['pca_score'] = df_log.pca_score2
# plot the original PCA score distribution
plt.figure(figsize=(10, 5))
df_labeled.pca_score.hist(bins=40)
plt.title('original PCA score distribution')
# range and adjust scores using stds for pos and neg values
stdneg = df_labeled.pca_score[df_labeled.pca_score<0].std()
stdpos = df_labeled.pca_score[df_labeled.pca_score>0].std()
print('PCA score negative std=%1.2f, positive = %1.3f'%(stdneg,stdpos))
# standardize the range, treat pos and neg scores separately
df_labeled.pca_score[df_labeled.pca_score<0] = df_labeled.pca_score[df_labeled.pca_score<0]/stdneg
df_labeled.pca_score[df_labeled.pca_score>0] = df_labeled.pca_score[df_labeled.pca_score>0]/stdpos
# threshold scores so that we have a range of +/-5
df_labeled.pca_score[df_labeled.pca_score<-5] = -5
df_labeled.pca_score[df_labeled.pca_score>5] = 5
plt.figure(figsize=(10, 5))
df_labeled.pca_score.hist(bins=40)
plt.title('Standardized and ranged PCA score distribution')
df_labeled.to_csv(csvfilename+'_labeled.csv')<jupyter_output>PCA score negative std=1.04, positive = 1.229
|
no_license
|
/capstone_1/reddit_analyze_PCA_score_v1.ipynb
|
johnmburt/springboard
| 13 |
<jupyter_start><jupyter_text>**D1DAE: Statistical Analysis for Data Science (2021.1)**
IFSP Campinas
Profs: Ricardo Sovat, Samuel Martins
This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.
# 1. Probability Distributions
## 1.2. Binomial Distribution
### Exercise 1
An admission test for the Data Science specialization has **10 questions**, each with **3 possible choices**.
**Each question scores equally**. Suppose that a candidate has not prepared for the test and decides to guess all the answers.
Assume the test has a **maximum score of 10** and a **cut-off score of 5** for being approved for the next stage.
Provide the _probability_ that this candidate will **get 5 questions right**, and the _probability_ that she will **advance to the next stage of the test**.
#### Is this a Binomial experiment?
#### 1. How many trials (n)? (Fixed number of identical trials)
#### 2. Are the trials independent?
Yes. One option chosen for a given question does not influence the chosen answer for the other questions.
#### 3. Are only two outcomes possible per trial?
Yes. The candidate has two possibilities: **hit** or **miss** the question.
#### 4. What is the probability of success (p) and failure (q)?
Therefore, it is a **Binomial experiment**.
#### What is the total number of events that you want to get successes (x)?
#### Q1. What is the _probability_ that the candidate will get 5 questions right?
##### Solution 1
##### Solution 2
#### Q2. How likely is the candidate to pass the test? (What is the _probability_ for that?)
##### Solution 1
##### Solution 2
##### Solution 3
https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.binom.html
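A rough sketch of how both probabilities could be computed with `scipy.stats.binom`, using n = 10, p = 1/3 and the cut-off of 5 from the exercise:<jupyter_code>from scipy.stats import binom
n, p = 10, 1/3
# Q1: probability of getting exactly 5 questions right
print(binom.pmf(5, n, p))   # ~0.1366
# Q2: probability of passing, i.e. getting 5 or more questions right
print(binom.sf(4, n, p))    # P(X >= 5) = 1 - P(X <= 4), ~0.2131<jupyter_output><empty_output><jupyter_text>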
### Exercise 2
In the last World Chess Championship, **the proportion of female participants was 60%.**
**There are 30 teams in this year's championship, each with 12 members.**
Based on this information, **how many teams should be formed by 8 women?**
Let's first calculate the probability that a team has 8 women.
#### 1. How many trials (n)? (Fixed number of identical trials)
#### 2. Are the trials independent?
Yes. The gender of each member is independent.
#### 3. Are only two outcomes possible per trial?
Yes: woman (success) and others (failure).
#### 4. What is the probability of success (p) and failure (q)?
#### What is the total number of events that you want to get successes (x)?
#### Q: How many teams (out of 30) should be formed by 8 women?
##### Solution
#### mean = n * p
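A minimal sketch of this calculation, using n = 12 members per team, p = 0.6 and x = 8 from the exercise:<jupyter_code>from scipy.stats import binom
n, p, x = 12, 0.6, 8
p_8_women = binom.pmf(x, n, p)   # probability that a single team has exactly 8 women (~0.213)
expected_teams = 30 * p_8_women  # mean = number of teams * p_8_women, ~6.4 teams
print(p_8_women, expected_teams)<jupyter_output><empty_output><jupyter_text>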
## 1.3. Poisson Distribution
### Exercise 1
A restaurant receives **20 orders per hour**. What is the chance that, at a given hour chosen at random, the restaurant will receive **15 orders**?
#### What is the mean number of occurrences per hour? (𝜆)
#### What is the desired number of occurrences within the period of time? (x)
##### Solution 1
##### Solution 2
https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.poisson.html
### Exercise 2
Vehicles pass through a junction on a busy road at an average rate of 300 per hour.
#### **Q1**: Find the probability that none passes in a given minute.
##### The average number of cars per minute (𝜆)
##### What is the desired number of occurrences within the period of time? (x)
#### **Q2**: What is the expected number (average number) passing in two minutes?
#### **Q3**: Find the probability that this expected number actually passes through in a given two-minute period.
Given that the average number of vehicles passing through this busy road in **two minutes** is **10**, what is the probability that **exactly 10 vehicles** pass through in a given two-minute period?
##### The average number of cars per two minutes (𝜆)
##### What is the desired number of occurrences within the period of time? (x)
### Exercise 3
Suppose the **average number of lions** seen on a **1-day safari** is **5**. What is the probability that tourists will see **fewer than four lions** on the next 1-day safari?
#### What is the mean number of lions seen on a 1-day safari? (𝜆)
#### What is the desired number of occurrences within the period of time? (x)
x = 0, 1, 2, or 3
##### Solution 1
##### Solution 2
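A rough sketch of how the three Poisson exercises above could be computed with `scipy.stats.poisson` (the rates simply restate the exercises):<jupyter_code>from scipy.stats import poisson
# Exercise 1: restaurant with lambda = 20 orders/hour, P(X = 15)
print(poisson.pmf(15, mu=20))
# Exercise 2: vehicles, 300 per hour -> lambda = 5 per minute
print(poisson.pmf(0, mu=5))     # Q1: P(no vehicle in a given minute)
print(5 * 2)                    # Q2: expected number in two minutes = 10
print(poisson.pmf(10, mu=10))   # Q3: P(exactly 10 in a two-minute period)
# Exercise 3: lions with lambda = 5 per day, P(X < 4) = P(X <= 3)
print(poisson.cdf(3, mu=5))<jupyter_output><empty_output><jupyter_text>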
## 1.4. Normal Distribution
### Exercise 1
When studying the height of the inhabitants of Pompeia, it was found that the **distribution is approximately normal**, with a **mean** of 1.70 m and a **standard deviation** of 0.1.
#### Q1: What is the probability that a person selected at random is less than 1.8 m tall? P(X < 1.8)
##### **P(X < 1.8) = P(Z < 1.000)**, since z = (1.8 - 1.70) / 0.1 = 1.0
##### Solution 1 - Using the z-score table
https://www.math.arizona.edu/~rsims/ma464/standardnormaltable.pdf
Checking the z-score table, **P(Z < 1.000) = 0.84134**
##### Solution 2 - Using scipy
#### Q2: What is the probability that a person selected at random is between 1.6 m and 1.8 m tall? P(1.6 <= X <= 1.8)
P(1.6 <= X <= 1.8) = P(X < 1.8) - P(X < 1.6)
**P(1.6 <= X <= 1.8) = P(Z < 1.0) - P(Z < -1.0)**, using z-scores (1.8 - 1.7)/0.1 = 1.0 and (1.6 - 1.7)/0.1 = -1.0
##### Solution 1 - Using the z-score table
https://www.math.arizona.edu/~rsims/ma464/standardnormaltable.pdf##### Solution 2 - Using scipy#### Q3: Probability of a person, selected by chance, is over 1.9m tall? P (X >= 1.9)
P(X >= 1.9) = 1 - P(X < 1.9)
**P(X >= 1.9) = P(Z >= 2.0) = 1 - P(Z < 2.0)**, since z = (1.9 - 1.7)/0.1 = 2.0
##### Solution 1 - Using the z-score table
https://www.math.arizona.edu/~rsims/ma464/standardnormaltable.pdf##### Solution 2 - Using scipy
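A rough sketch of the scipy-based solutions for the three questions, using the stated mean of 1.70 and standard deviation of 0.1:<jupyter_code>from scipy.stats import norm
mean, std = 1.70, 0.1
print(norm.cdf(1.8, mean, std))                              # Q1: P(X < 1.8), ~0.8413
print(norm.cdf(1.8, mean, std) - norm.cdf(1.6, mean, std))   # Q2: P(1.6 <= X <= 1.8), ~0.6827
print(norm.sf(1.9, mean, std))                               # Q3: P(X >= 1.9) = 1 - P(X < 1.9), ~0.0228<jupyter_output><empty_output><jupyter_text>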
# 2. Central Limit Theorem<jupyter_code># dataset with data about stroke patients
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
df = pd.read_csv('./datasets/healthcare-dataset-stroke-data.csv')
df.head()
population = df['avg_glucose_level']
population.head()
population.shape
population_mean = population.mean()
plt.figure(figsize=(10,6))
ax = sns.histplot(population) # the distribution is not normal
ax.axvline(x=population_mean, color='red')
ax.annotate(f'Population Mean\n{population_mean:.2f}', xy=(population_mean + 5, 350))<jupyter_output><empty_output><jupyter_text>
#### The data distribution of a sample does not necessarily follow the **normal distribution**<jupyter_code>sample_100 = population.sample(100)
plt.figure(figsize=(10,6))
sns.histplot(sample_100)<jupyter_output><empty_output><jupyter_text>#### As the sample size increases, the **sampling distribution of the mean** approaches a **normal distribution**, with the **sampling distribution’s mean** equal to **the population mean**<jupyter_code># Dictionary where each key corresponds to a sample size
# For each sample size, there is a DataFrame with 1000 samples associated with it
samples = {}
for n in [5, 10, 30, 100, 1000]:
df_sample_size = pd.DataFrame()
for i in range(1000):
sample = population.sample(n)
sample.reset_index(drop=True, inplace=True) # requires this "trick" to work
df_sample_size[f'Sample #{i}'] = sample
samples[n] = df_sample_size
samples.keys()
samples[5]
samples[100]
# mean of each one of the 1000 samples with sample size of 100
samples[100].mean()
sample_sizes = sorted(samples.keys())
fig, axs = plt.subplots(1, 5, figsize=(24, 3))
for i, n in enumerate(sample_sizes):
sampling_distribution = samples[n].mean()
mean_of_sampling_distribution = sampling_distribution.mean()
ax = sns.histplot(sampling_distribution, ax=axs[i])
axs[i].axvline(x=mean_of_sampling_distribution, color='red')
ax.annotate(f'Mean\n{mean_of_sampling_distribution:.2f}', xy=(mean_of_sampling_distribution + 1, 120))
ax.spines['top'].set_visible(False)
ax.set_ylim([0, 130])<jupyter_output><empty_output>
|
no_license
|
/data_distributions/.ipynb_checkpoints/data_distributions-checkpoint.ipynb
|
LeticiaPaivaSigalei/IFSP-CMP-D1AED-2021.1
| 3 |
<jupyter_start><jupyter_text>
This template will lay out all possible sections that could be used for a research report and manual. All research reports and manuals should strive to comply with this template, but not all sections will be relevant to all teams. In order to use this template, copy this file from the AguaClara team resources repository to your team's repository, and rename it for your team in a format similar to "[Team Name] [Semester]". An example would be "Filter and Treatment Train Flow Control Spring 2017." For guidance on writing Markdown documents, refer to the [AguaClara Interactive Tutorial](https://github.com/AguaClara/aguaclara_tutorial) and the [AguaClara Tutorial training pages](https://aguaclara.github.io/aguaclara_tutorial/). Once you are ready to write your report, please delete this description and everything above.
# Team Name, Semester Year
#### Authors
#### Date
## Abstract
Briefly summarize your previous work, goals and objectives, what you have accomplished, and future work. (100 words max)
## Introduction
Explain how the completion of your challenge will affect AguaClara and the mission of providing safe drinking water (or sustainable wastewater treatment!). If this is a continuing team, how will your contribution build upon previous research? What needs to be further discovered or defined? If this is a new team, what prompted the inclusion of this team?
## Literature Review and Previous Work
Discuss what is already known about your research area based on both external work and that of past AguaClara Teams. Connect your objectives with what is already known and explain what additional contribution you intend to make. Make sure to add APA formatted in-text citations. If you mention the author(s) in your sentence, you can simply give the year of publication. [(Logan et al. 1987)](http://www.jstor.org/stable/pdf/25043431.pdf?acceptTC=true)
## Methods
Explain the techniques you have used to acquire additional data and insights. Reserve fine detail for the Manual at the end of the report, but use this section to give an overview with enough detail for the reader to understand your Results and Analysis. Be especially careful to detail the conditions your experiments were conducted under, as this information is especially important for interpreting your results.
Below, some example sections are given. Sectioning the report is meant to keep similar information together. Continue making sections as necessary, or delete sections if you do not need them. Feel free to add subsubsections to further delineate the information. For example, under the Experimental Apparatus section below, the EStaRS team might consider having sections such as "Filter Design" and "Filter Fabrication".## Experimental Apparatus
Describe your apparatus and justify every decision you made and every parameter you chose in its design. Explain your apparatus setup using enough detail such that future teams can recreate it.
Make sure to include:
* Design calculations
* Use [LaTeX](https://www.overleaf.com/learn/latex/Mathematical_expressions) to format mathematical equations. LaTeX equations can be written in-line ($x=\frac{-b\pm\sqrt{b^2-4ac}}{2a}$) or as equation elements:
\[x=\frac{-b\pm\sqrt{b^2-4ac}}{2a}\]
* Schematic drawing
* Clearly labeled components, flow paths, sensors, and reactor geometry.
Figure 1. Above is an example schematic drawing of the Summer 2018 High Rate Sedimentation apparatus. It is labeled with components, flow paths, sensors, and reactor geometry.
* Labeled images
Figure 2. Above is an example image of the Summer 2018 High Rate Sedimentation apparatus. All photos should be clearly labeled and captioned.
| Label | Component | Label | Component |
|:-----:|:--------------------- |:-----:|:--------------------- |
| 1 | Water Pump | 8 | Effluent Turbidimeter |
| 2 | Clay Pump | 9 | Pressure Attenuator |
| 3 | Coagulant Pump | 10 | Coagulant Stock |
| 4 | Waste Pump | 11 | Clay Stock |
| 5 | Flocculator | 12 | Sedimentation Tank |
| 6 | Pressure Sensor | 13 | Waste Tube |
| 7 | Influent Turbidimeter | | |## Procedure
Discuss your experimental procedure. How did you run your experiment? What were you testing? What were the values of relevant parameters?## Results and Analysis
Present an observation (results), then explain what happened (analysis). Each paragraph should focus on one aspect of your results and interpret that result. In other words, there should not be two distinct paragraphs, but one paragraph containing one result and the interpretation and analysis of this result. Here are some guiding questions for results and analysis:
When describing your results, present your data, using the guidelines below:
* What happened? What did you find?
* Show your experimental data in a professional way (see [Figure Requirements](#figure-requirements)).
After describing a particular result, within a paragraph, go on to connect your work to fundamental physics, chemistry, statics, fluid mechanics, or whatever field is appropriate. Analyze your results and compare with theoretical expectations; or, if you have not yet done the experiments, describe your expectations based on established knowledge.
* Why did you get those results/data?
* Did these results line up with expectations?
* What went wrong?
* If the data do not support your hypothesis, is there another hypothesis that describes your new data?
Include implications of your results. How will your results influence the design of AguaClara plants? If possible provide clear recommendations for design changes that should be adopted.
### Figure requirements
The [Data Analysis.md](https://github.com/AguaClara/team_resources/blob/master/Example%20Code/Data%20Analysis.md) document in the Example Code folder of the [Team Resources](https://github.com/AguaClara/team_resources) repository has examples for graphing data in Python. In these examples, many of the requirement below have already been met, so we recommend you understand and use them.
Basics
- Create the graph using Python and the [Matplotlib](https://matplotlib.org/api/_as_gen/matplotlib.pyplot.html) package.
- Place a caption with a brief description below the graph. Add this caption using the wiki formatting, not in your graphing software.
- Insert the graph in your report after the first paragraph with a reference to it.
Appearance
- Use a white background.
- Use the same font in the graphs as you use in the text of the report.
- Scale the size of the graph so it is large enough to see the data and read the text without having to follow a link to see a larger image. Avoid using hyperlinks on images, because an export to Microsoft Word will not include the image.
Axes
- Data with x and y values should be presented using an xy scatter plot.
- Label all axes and include units.
- Axis scale labels should be in the margin of the graph, not inside the graph border.
- If the x-axis is time, then make time zero the beginning of the test.
- Eliminate ranges of the x and y axes that are not used or meaningful.
Data Representation
- Use data symbols, not a line, to show discrete data points.
- However, if there is so much data that the symbols overlap, it is better to connect the data points with a line and not show the data symbols.
Multiple Plots
- When presenting multiple plots on a single graph make sure that it is easy to distinguish the plots using the legend.
Curve Fitting
- If curve fitting is used, explain why and include the equation elsewhere in the report.
- The model or theoretical curve should be a smooth curve without data points.
Below is a graph created by example code in the [Data Analysis.md](https://github.com/AguaClara/team_resources/blob/master/Example%20Code/Data%20Analysis.md) document referenced in the beginning of this section. **Note that the code is included and linked at the bottom of this report. Make sure any code you use for data analysis is provided this way too.**
Figure 3. Descriptive captions are very important for figures. Rather than including a title above your figure, write a caption below. The code used to create this graph can be found in the [Python Code](#python-code) section of this report.
## Conclusions
Explain what you have learned and how that influences your next steps. Why does what you discovered matter to AguaClara?
Make sure that you have defended your conclusions with facts and results.## Future Work
Describe your plan of action for the next several weeks of research. Detail the next steps for this team. How can AguaClara use what you discovered for future projects? Your suggestions for challenges for future teams are most welcome. Should research in this area continue?## Bibliography
Logan, B. E., Hermanowicz, S. W., & Parker, A. S. (1987). A Fundamental Model for Trickling Filter Process Design. Journal (Water Pollution Control Federation), 59(12), 1029–1042.# Manual
The goal of this section is to provide all of the guidance that would be necessary for a future team to pick up your work where you left off. Please try to be thorough and put yourselves in the shoes of a newcomer to the project. Below are some recommended sections, but the manual will likely take a slightly different form for each team.## Fabrication Details
Include any information related to the fabrication of equipment, technologies, or experimental apparatuses, such as
* materials, with dimensions
* fabrication methods and the purpose of each step
* complications and constraints in construction
* revisions made to a previous design, with a reference to the report where the design is described
* appropriate safety precautions## Special Components
If your subteam uses a particular part that is unique and you could foresee a future subteam needing to order it or learn more about it, please include basic information such as its vendor, its catalog/item number, and a link to any documentation.## Experimental Methods### Set-up
Step 1.
* Put tasks in a sequential order.
* It is okay to have sub-lists.
- Like this.### Experiment
Step 1.### Cleaning Procedure
Step 1.## Experimental Checklist
Another potential section could include a list of things that you need to check before running an experiment.
## ProCoDA Method File
**Upload your ProCoDA method file to your GitHub repository and provide a [link]() to it in your report.**
Use the rest of this section to explain your method file. Your explanation can be broken up into several components as shown below:
### States
Here, you should describe the function of each state in your method file, both in terms of its overall purpose and also in terms of the details that make it distinct from other states. For example:
- **OFF**: Resting state of ProCoDA. All sensors, relays, and pumps are turned off.
### Set Points
Here, you should list the set points used in your method file and explain their use as well as how each was calculated.
## Python Code
In this section, provide any code that was a significant part of your research, including in making theoretical calculations, determining parameters for your apparatus and your experiment, or analyzing your data.
If your code is repetitive (e.g. similar code for creating different graphs) or too long, you may instead provide a [link to a file](https://github.com/AguaClara/team_resources/blob/master/Example%20Code/Data%20Analysis.md) with your code that has been *uploaded to your repository*.### Code for Figure 3
Provide a brief description of your code, for example:
Below is the code used to create the influent and effluent turbidity graph in Figure 3, using the `aguaclara.research.procoda_parser` module to read the turbidity data from a ProCoDA data file.
**Note: your code should be able to run as is in this report. That means any files you reference should have the correct directories/paths, so that anybody who downloads your repository can run the code in your report and find the same results.** For example, the code below references a ProCoDA data file in the Data folder of this repository. If that file is not present or moved to a different folder, this code will result in an error.
<jupyter_code>!pip install aguaclara
import aguaclara.research.procoda_parser as pp
import matplotlib.pyplot as plt
import numpy as np
# read time, influent turbidity, and effluent turbidity columns from
# ProCoDA data file from 6-14-2018 as NumPy arrays
time, influent_turbidity, effluent_turbidity = pp.get_data_by_time(
path="Data", columns=[0, 3, 4], start_date="6-14-2018",
start_time="15:40", end_time="23:30")
elapsed_time = (np.array(time)-time[0])*24
# set up multiple subplots
fig, ax1 = plt.subplots()
# make the first subplot (Effluent Turbidity vs Time)
ax1.set_xlabel("Time (hours)")
ax1.set_ylabel("Effluent Turbidity (NTU)")
line1, = ax1.plot(elapsed_time, effluent_turbidity, color="blue")
# set the x-axis of the second subplot equal to the first's
ax2 = ax1.twinx()
# make the second subplot (Influent Turbidity vs Time)
ax2.set_ylabel("Influent Turbidity (NTU)")
# adjust the y-axis
ax2.set_ylim(60,120)
line2, = ax2.plot(elapsed_time, influent_turbidity, color="green")
plt.legend((line1, line2), ("Effluent", "Influent"))<jupyter_output><empty_output><jupyter_text>### Code for Experimental Design
Make sure variables in your code, as well as anything that might not be easily understood by a newcomer, is well commented. Also include the outputs of your code, if applicable and not given earlier in the report.
<jupyter_code>import aguaclara.research.peristaltic_pump as pp
import aguaclara.research.stock_qc as stock_qc
from aguaclara.core.units import unit_registry as u
# volume per revolution flowing from the pump for PACl (coagulant) stock
vol_per_rev_PACl = pp.vol_per_rev_3_stop("yellow-blue")
# revolutions per minute of PACl stock pump
rpm_PACl = 3 * u.rev/u.min
# flow rate from the PACl stock pump
Q_PACl = pp.flow_rate(vol_per_rev_PACl, rpm_PACl)
# desired system flow rate
Q_sys = 2 * u.mL/u.s
# desired system concentration of PACl
C_sys = 1.4 * u.mg/u.L
# a variable representing the reactor and its parameters
reactor = stock_qc.Variable_C_Stock(Q_sys, C_sys, Q_PACl)
# required concentration of PACl stock
C_stock_PACl = reactor.C_stock()
# concentration of purchased PACl super stock in lab
C_super_PACl = 70.28 * u.g/u.L
# dilution factor of PACl super stock necessary to achieve PACl stock
# concentration (in L of super stock per L water)
dilution = reactor.dilution_factor(C_super_PACl)
mL_per_L = dilution * 1000
print("A reactor with a system flow rate of", Q_sys, ",")
print("a system PACl concentration of", C_sys, ",")
print("and a PACl stock flow rate of", Q_PACl)
print("will require a dilution factor of", round(mL_per_L.magnitude, 3),
"milliliters of PACl super")
print("stock per liter of water for the PACl stock.")<jupyter_output><empty_output>
|
no_license
|
/RamPump2019Summer.ipynb
|
ChingPangggg/team_resources
| 2 |
<jupyter_start><jupyter_text># Data Mining 15.0621 Homework Assignment 1
## Questions 2.11, 3.3, 3.4 (a & b)
### By: Jonathan Johannemann<jupyter_code>import pandas as pd,matplotlib.pyplot as plt,datetime as dt,warnings,numpy as np
warnings.filterwarnings('ignore')
%matplotlib inline<jupyter_output><empty_output><jupyter_text>## Question 2.11The objective of this question is to determine if some of the variables that are provided in the ToyotaCorolla dataset are correlated. After reading in the data, we can see that there are variables that are highly corrolated. To give an idea of the most correlated variables, I provided an arbitrary threshold of 0.8 for the absolute value of the correlation coefficient.<jupyter_code>df211 = pd.read_csv('C:\Users\Jonathan\Documents\JMP_Data\ToyotaCorolla.csv',sep=',',header=0)<jupyter_output><empty_output><jupyter_text>#### Question 2.11 (a) <jupyter_code>corr_mat = df211.corr()
corr_mat.values[[np.arange(len(corr_mat))]*2]=0
corr_mat = corr_mat[np.abs(corr_mat)>0.8].fillna(0).reset_index()
for col in corr_mat.columns.values[1:]:
if len(corr_mat[corr_mat[col]!=0][col].index)>0:
print "="*40
print "Looking at: ",col
for i in corr_mat[corr_mat[col]!=0][col].index:
print corr_mat['index'][i]," : ",corr_mat[corr_mat[col]!=0][col][i]
print "="*40
nonlinear_corr_mat = df211.corr('spearman')
nonlinear_corr_mat.values[[np.arange(len(nonlinear_corr_mat))]*2]=0
nonlinear_corr_mat = nonlinear_corr_mat[np.abs(nonlinear_corr_mat)>0.8].fillna(0).reset_index()
for col in nonlinear_corr_mat.columns.values[1:]:
if len(nonlinear_corr_mat[nonlinear_corr_mat[col]!=0][col].index)>0:
print "="*40
print "Looking at: ",col
for i in nonlinear_corr_mat[nonlinear_corr_mat[col]!=0][col].index:
print nonlinear_corr_mat['index'][i]," : ",nonlinear_corr_mat[nonlinear_corr_mat[col]!=0][col][i]
print "="*40
df211.plot(kind='scatter',x='Age_08_04',y='Mfg_Year')
df211.plot(kind='scatter',x='Age_08_04',y='Price')
df211.plot(kind='scatter',x='\xef\xbb\xbfId',y='Age_08_04') #don't be fooled, that's just Id for the x value
df211.plot(kind='scatter',x='Price',y='\xef\xbb\xbfId')<jupyter_output><empty_output><jupyter_text>I checked for both nonlinear and linear correlations between the variables and output. As we can see, there are multiple relationships that are highly correlated. In particular, radio and radio cassette are pretty obvious. We also notice that Price, Age, and Mfg_Year are correlated which seems fairly normal; newer cars are worth more than older cars.#### Question 2.11 (b)<jupyter_code>#make training, validation, and test data
def split(train,cross_validation,test):
total = train+cross_validation+test
random_num = np.random.randint(total)
label = None
if random_num<train:
label = 'TRAIN'
elif random_num<(train+cross_validation):
label = 'CROSS VALIDATION'
else:
label = 'TEST'
return label
df211['Partition'] = df211.Validation.apply(lambda x: split(50,30,20))
print df211.Partition.head()
print df211.Partition.tail()<jupyter_output>0 CROSS VALIDATION
1 TRAIN
2 TRAIN
3 CROSS VALIDATION
4 CROSS VALIDATION
Name: Partition, dtype: object
1431 TRAIN
1432 TRAIN
1433 TRAIN
1434 CROSS VALIDATION
1435 TRAIN
Name: Partition, dtype: object
<jupyter_text>For part b, we needed to partition the data using certain percentages. As you can see above, I've partitioned the data randomly with 50% of the data being allocated for training, 30% for validation, and 20% for testing.
* Training:
The training data is what we build our model on. This is the data that we tune the model on and try different variations on to initially get the best training output without, ideally, overfitting our trained observations. From here, we move on to the cross validation partition to evaluate the predictive ability of our model and then we may come back to this training partition or the model itself to determine what will ultimately improve the predictive ability of the model.
* Cross Validation:
Cross validation is used as an intermediary between training and testing. We generally seek to train our model first and then, once we're comfortable with how we've tuned the model on the training data, the cross validation acts as freebie in terms of model evaluation on "out of sample" data. However, cross validation data should not be included in the actual training of the model. The cross validation data lets us compare our model against some held out data and allows us to tune our model still on these new observations.
* Testing:
Testing is a means as a final say in terms of how well the model performs. In general, we do not want to look at the testing data until we are absolutely finished tuning our model. After evaluating the testing data set, as honest data scientists, we should not go back and try to tune our model to fit the new observations that have been provided before us. In a sense, testing is used to see how a model would perform in reality and the purpose of the testing partition allows us to get a completely unbiased, unseen sample to evaluate the predictive ability of our model.## Question 3.3<jupyter_code>df33 = pd.read_csv('C:\Users\Jonathan\Documents\JMP_Data\LaptopSalesJanuary2008.csv',sep=',',header=0)
df33 = df33[df33['Retail Price'].notnull()]<jupyter_output><empty_output><jupyter_text>Removing null values for retail price.### Question 3.3 (a)
#### Create a bar chart, showing the average retail price by store (Store Postcode). Adjust the y-axis scaling to magnify differences. Which store has the highest average? Which has the lowest?<jupyter_code>q33a = df33[['Store Postcode','Retail Price']]
q33a = q33a.groupby(['Store Postcode']).mean()
q33a.plot(kind='bar',ylim=[475,500])
q33a = q33a.reset_index()
print "Max:\n ",q33a[q33a['Retail Price']==q33a['Retail Price'].max()]
store_number_max = q33a[q33a['Retail Price']==q33a['Retail Price'].max()]['Store Postcode'] #let's just keep this for later
print "Min:\n ",q33a[q33a['Retail Price']==q33a['Retail Price'].min()]
store_number_min = q33a[q33a['Retail Price']==q33a['Retail Price'].min()]['Store Postcode']<jupyter_output>Max:
Store Postcode Retail Price
4 N17 6QA 494.634146
Min:
Store Postcode Retail Price
15 W4 3PH 481.006289
<jupyter_text>So if we look at the output provided above, we can see that N17 6QA has the highest average retail price among its competitors. On the other hand, W4 3PH has the lowest average retail price among its competitors.### Question 3.3 (b)
#### To better compare retail prices across stores, create side-by-side boxplots of retail price by store. Now compare the prices in the two stores above. Does there seem to be a difference between their price distributions?<jupyter_code>df33[['Store Postcode','Retail Price']].boxplot(by='Store Postcode',rot=90)
df33[['Store Postcode','Retail Price']].groupby('Store Postcode').describe().unstack()<jupyter_output><empty_output><jupyter_text>So something that you might be wondering based on the boxplot that was provided in a cell above is: why does S1P 3AU look so much more different than the others? Well, as shown in the table above, we can see that this store only has 4 transactions in the data which explains for the comparatively abnormal figure.<jupyter_code>q33b = df33[['Store Postcode','Retail Price']]
max_store = q33b.ix[q33b['Store Postcode'] == store_number_max.min()]
min_store = q33b.ix[q33b['Store Postcode'] == store_number_min.min()]
q33b_new = max_store.append(min_store)
evaluate_max_min = q33b_new.boxplot(by='Store Postcode',rot=90)<jupyter_output><empty_output><jupyter_text>If we look at the price distributions of the two stores (the max and the min for average retail price), we can see that the distribution of the "max average" store's mean and the concentration of most of the observations are higher than the "min average" store. The "min average" store's distribution also appears to be a bit wider than the "max average" store's.## Question 3.4The file LaptopSales.txt is a comma-separated fiel with nearly 300,000 rows. ENBIS (the European Network for Business and Industrial Statistics) provided these data as part of a contest organized in the fall of 2009.
Scenario: Imagine that you are a new analyst for a company called Acell (a company sellign laptops). You have been provided with data about products and sales. You need to help the company with their business goal of planning a product strategy and pricing policies that will maximize Acell's projected revenues in 2009...Check to ensure that the data and modeling types in the data table are correct for each of the variables, and answer the following questions...<jupyter_code>df34 = pd.read_csv('C:\Users\Jonathan\Documents\JMP_Data\LaptopSales.csv',sep=',',header=0)
df34 = df34[df34['Retail Price'].notnull()]<jupyter_output><empty_output><jupyter_text>### Question 3.4 (a)#### Question 3.4 (a) i. At what price are the laptops actually selling?<jupyter_code>df34['Retail Price'].hist(bins=40)
print "The mean is: ",df34['Retail Price'].mean()
print "The median is: ",df34['Retail Price'].median()
print "The mode is: ",df34['Retail Price'].mode()[0]
print "Lower 95%: ", (df34['Retail Price'].mean()-2*df34['Retail Price'].std())
print "Upper 95%: ", (df34['Retail Price'].mean()+2*df34['Retail Price'].std())<jupyter_output>The mean is: 508.125935755
The median is: 500.0
The mode is: 510.0
Lower 95%: 298.902357666
Upper 95%: 717.349513843
<jupyter_text>We can see from the mean, median, and mode that laptops are generally sold in the region of 500-510 dollars. If we wished to broadened the price range, we could take a quick look and give a rough estimate that most of the laptops are sold in 400-600 dollar range.#### Question 3.4 (a) ii. Does price change with time?<jupyter_code>month_dict = {'Jan':1,'Feb':2,'Mar':3,'Apr':4,'May':5,'Jun':6,
'Jul':7,'Aug':8,'Sep':9,'Oct':10,'Nov':11,'Dec':12}
df34['Month'] = df34.Date.apply(lambda x: str(x)[2:5])
df34aii = df34[df34.Month!='n']
df34aii.Month = df34aii.Month.apply(lambda x: month_dict[x])
df34aii[['Month','Retail Price']].groupby('Month').mean().reset_index().plot(x='Month',y='Retail Price',kind='bar',ylim = [400,600],rot=90)<jupyter_output><empty_output><jupyter_text>Based on a monthly view of the retail prices overtime, it looks like prices are fairly low during March, April, and May but increase substantially during the summer months of June, July, and August. Perhaps a reason for this is because weather changes might influence buyer mood/decisions or perhaps students are purchasing laptops during this time before they go back to school.#### Question 3.4(a) iii. Are prices consistent with retail outlets?<jupyter_code>df34[['Store Postcode','Retail Price']].boxplot(by='Store Postcode',rot=90)<jupyter_output><empty_output><jupyter_text>Looking at each distribution we can see that there is some variation between retail price distributions. Some stores have lower average and the range for sale concentrations vary between store. However, these distributions don't differ too dramatically except for the case of S1P 3AU which may be due to the lack of data.#### Question 3.4(a) iv. How does price change with configuration?<jupyter_code>q34biv = df34[['Configuration','Retail Price']]
q34biv = q34biv.groupby(['Configuration']).mean()
q34biv.plot(rot=90) #price increases with configuration<jupyter_output><empty_output><jupyter_text>We can see that configuration does result in an impact on retail prices. Later configurations generate higher prices.### Question 3.4 (b)#### Question 3.4(b) i. Where are the stores and customers located?<jupyter_code>plt.figure()
plt.plot(df34['customer X'],df34['customer Y'],'b*',df34['store X'],df34['store Y'],'r.')
plt.title('Map of Store and Customer Locations')
plt.show()<jupyter_output><empty_output><jupyter_text>Above the customer locations and store locations are plotted. We can see that for the most part, the customers aren't terribly far from the stores except for a few outliers given the scale provided.#### Question 3.4(b) ii. Which stores are selling the most?<jupyter_code>q34bi = df34[['Store Postcode','Customer Postcode']]
q34bi['Store Postcode'].value_counts().plot(kind='bar')
df34[['Store Postcode','Euclidean Distance']].boxplot(by='Store Postcode',rot=90)
print "The below bar chart provides the average distance traveled per store."
df34[['Store Postcode','Euclidean Distance']].groupby('Store Postcode').mean().plot(kind='bar',rot=90)<jupyter_output>The below bar chart provides the average distance traveled per store.
<jupyter_text>Save S1P 3AU which has no coordinate data, we can see that generally people travel between 3000 and 5000 units to a store.#### Question 3.4(b) iv. Try an alternative way of looking at how far customers traveled. Do this by creating a new data column that computes the distance between customer and store.<jupyter_code>df34['Manhattan Distance'] = np.abs(df34['store X'] - df34['customer X']) + np.abs(df34['store Y'] - df34['customer Y'])
df34[['Store Postcode','Manhattan Distance']].boxplot(by='Store Postcode',rot=90)
print "The below bar chart provides the average Manhattan distance traveled per store."
df34[['Store Postcode','Manhattan Distance']].groupby('Store Postcode').mean().plot(kind='bar',rot=90)<jupyter_output>The below bar chart provides the average Manhattan distance traveled per store.
|
no_license
|
/15.0621 Data Mining Assignment 1.ipynb
|
JonathanJohann/Independent_Research
| 17 |
<jupyter_start><jupyter_text># Grouping Data in Pandas DataFrames## 1. Import Libraries and Dependencies<jupyter_code># Import necessary libraries and dependencies
import pandas as pd
%matplotlib inline<jupyter_output><empty_output><jupyter_text>## 3. Read the CSV into a Pandas DataFrame<jupyter_code># Use the file path to read the CSV into a DataFrame and display a few rows
people_df = pd.read_csv('people_cleansed.csv')
people_df.head()<jupyter_output><empty_output><jupyter_text>## 4. Group DataFrame by `Occupation` and perform `count` aggregation<jupyter_code># Group by `Occupation` and perform count
people_df.groupby('Occupation').count()<jupyter_output><empty_output><jupyter_text>## 5. Group DataFrame by `Occupation` and Calculate Average Salary and Age<jupyter_code># Calculate average Salary and Age for each Occupation
people_df.groupby('Occupation').mean()<jupyter_output><empty_output><jupyter_text>## 6. Group By `Occupation` and `Gender` Columns, then Calculate Average Salary and Age<jupyter_code># Group by Occupation and Gender columns
people_df.groupby(['Occupation','Gender']).mean()<jupyter_output><empty_output>
|
no_license
|
/4.2/06-groupby-01/Solved/groupby-01.ipynb
|
jcarlosusa/FinTech
| 5 |
<jupyter_start><jupyter_text>## The goal of this project is to create a chatbot based on movie reviews, so that users can ask it questions and hold a free-form conversation
- dataset link: https://www.kaggle.com/Cornell-University/movie-dialog-corpus?select=movie_lines.tsv
- references
>- https://shanebarker.com/blog/deep-learning-chatbot/
> -https://towardsdatascience.com/how-to-create-a-chatbot-with-python-deep-learning-in-less-than-an-hour-56a063bdfc44<jupyter_code>!pip3 install gensim
!pip3 install tensorflow
!pip3 install keras
import pandas as pd
import re
import gensim
from keras.layers import Dense, Activation, Dropout
from keras.optimizers import SGD
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from keras.models import Sequential
from scipy.spatial import distance<jupyter_output><empty_output><jupyter_text>### Opening movie reviews<jupyter_code>messages = pd.read_csv('./chatdata/movie_lines.tsv', header = None, delimiter="\t", quoting=3, encoding='ISO-8859-2')
messages.columns = ['msg_line', 'user_id', 'movie_id', 'user_name', 'msg']
messages.head(10)<jupyter_output><empty_output><jupyter_text>### Data exploration<jupyter_code>#setting parameters for data visualization
np.set_printoptions(threshold=None, precision=2)
pd.set_option('display.max_columns', 500)
pd.set_option('display.max_rows', 500)
pd.set_option('precision', 2)<jupyter_output><empty_output><jupyter_text>#### Data Analysis based on this article
- https://neptune.ai/blog/exploratory-data-analysis-natural-language-processing-tools?utm_source=medium&utm_medium=crosspost&utm_campaign=blog-exploratory-data-analysis-natural-language-processing-tools<jupyter_code>!pip3 install \
pandas matplotlib numpy \
nltk seaborn sklearn gensim pyldavis \
wordcloud textblob spacy textstat
data = messages['msg_line']
len(data)
data.describe()<jupyter_output><empty_output><jupyter_text>### Number of characters of each message<jupyter_code>#histogram to display the number of character of each message
data.str.len().hist()<jupyter_output><empty_output><jupyter_text>The number of characters are between 0 and 500### Removing non-alphabetical messages<jupyter_code>#there are some non-alphabetical messages that need to be found
data.loc[data.isna()].size
#the above statment does the same of this
#msg.loc[msg.isnull()].size
#filling the nan messages with a string
#messages = messages.fillna('.')<jupyter_output><empty_output><jupyter_text>### Number of words for each message<jupyter_code>#number of words for each message
data.str.split(' ').\
    map(lambda x: len(x) if isinstance(x, list) else 0).\
hist()<jupyter_output><empty_output><jupyter_text>The number of words are between 0 and 100### Average word length<jupyter_code>#checking the average word length
data.str.split(' ').\
apply(lambda x : [len(i) for i in x]). \
map(lambda x: np.mean(x)).hist()<jupyter_output><empty_output><jupyter_text>The length of words goes from 0 to 15The word 'I' has the biggest occurrence. There are a lot of messages like dashes that can be removed## Value types<jupyter_code>#checking the average word length
data_set = [type(item) for item in data]
data_set = set(data_set)
data_set<jupyter_output><empty_output><jupyter_text>## Print float values<jupyter_code>[it for it in data if isinstance(it, float)]<jupyter_output><empty_output><jupyter_text>## Print values with more than 7 characters<jupyter_code>data_clean = [m.split('L') for m in data]
data_clean = set([ l[0] for l in data_clean])
data_clean<jupyter_output><empty_output>
|
non_permissive
|
/.ipynb_checkpoints/data_exploration_movie_lines_msg_line-checkpoint.ipynb
|
douglasdcm/chatbot_for_movies
| 11 |
<jupyter_start><jupyter_text># Mean-variance portfolio## Importing the packages<jupyter_code>import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.regression.rolling import RollingOLS
import matplotlib.pyplot as plt
import matplotlib
from sklearn.linear_model import LinearRegression
matplotlib.style.use('ggplot')<jupyter_output><empty_output><jupyter_text>## Importing the data#### Bonds<jupyter_code>bonds = pd.read_csv('bofa_ice_sovereign_bonds_returns.csv')
bonds = bonds.set_index('Unnamed: 0')
bonds.index = pd.to_datetime(bonds.index)
bonds = bonds.sort_index()
bonds = bonds.ffill().fillna(0).pct_change().fillna(0)
bonds = bonds.loc[bonds.index.isin(pd.date_range('2016-09-01', '2022-01-01'))]
names = pd.read_csv('../Data/country_data/ice_bofa_sovereign_indices_summary.csv')
new_columns = {}
for i in range(len(names)):
new_columns[(names['Index BBG Ticker']+' Index').loc[i]] = names['Full Index Name'].apply(lambda x: x.split(' ')[4]).loc[i]<jupyter_output><empty_output><jupyter_text>#### Equities<jupyter_code>equities = pd.read_csv('../Data/country_data/ETF_adj_close.csv')
equities = equities.rename(columns={'Unnamed: 0': 'trdate'})
equities = equities.set_index('trdate')
equities.index = pd.to_datetime(equities.index)
equities = equities.sort_index()
equities = equities.ffill().fillna(0).pct_change().fillna(0)
equities = equities.replace(to_replace=float('inf'), value=0)
equities = equities.loc[equities.index.isin(pd.date_range('2016-09-01', '2022-01-01'))]<jupyter_output><empty_output><jupyter_text>## Building the functions<jupyter_code>def strategy(returns: pd.DataFrame, V_0=1, window=120, horizon=7, benchmark=False, GAMMA=1):
V = V_0
portfolio_value = [V]
returns_value = [0]
pnl = []
# for each week we perform the optimisation
for i in range(0, len(returns)-window-horizon, horizon):
if benchmark:
            tradable = list(set(returns.columns) - set(list(filter(lambda x: all(returns[x].iloc[i:i+window] == 0.), returns.columns)) + ['GDZM Index']))
w = np.ones(len(tradable))/len(tradable)
temp = (returns.iloc[i+window:i+window+horizon][tradable] @ w)
V *= (1+temp.sum())
else:
            tradable = list(set(returns.columns) - set(list(filter(lambda x: all(returns[x].iloc[i:i+window] == 0.), returns.columns)) + ['GDZM Index']))
try:
                # minimum-variance-style weights: w proportional to inv(correlation) @ 1, estimated on the trailing window
                inv_corr = np.linalg.inv(returns.iloc[i:i+window][tradable].corr())
num = np.dot(inv_corr, np.ones(len(inv_corr)))
den = np.dot(np.ones(len(inv_corr)), inv_corr)@np.ones(len(inv_corr))
num = list(map(lambda x: max(min(10, x), -10), num))
w = np.array(num)/sum(num) if sum(num) != 0 else np.ones(len(tradable))/len(tradable)
except:
w = np.ones(len(tradable))/len(tradable)
temp = (returns.iloc[i+window:i+window+horizon][tradable] @ w)
V *= (1+temp.sum())
portfolio_value.append(V)
returns_value.append(portfolio_value[-1]/portfolio_value[-2]-1)
return V, portfolio_value, returns_value
def plot_performance(value, rtns, title=None):
fig, axs = plt.subplots(2, figsize=(12,6))
fig.suptitle(title if title else 'Plot of portfolio value and returns over time')
axs[0].plot(value)
axs[1].plot(rtns)
plt.show()<jupyter_output><empty_output><jupyter_text>## Applications to our data<jupyter_code>X = bonds.copy()
X = X.replace(to_replace=float('inf'), value=0)
X = X.resample('W').sum()
perf, value, rtns = strategy(X, window=5, horizon=1, benchmark=False)
plot_performance(value, rtns, title="Plot of portfolio value and returns over time - Bonds")
perf_bench, value_bench, rtns_bench = strategy(X, window=5, horizon=1, benchmark=True)
plot_performance(value_bench, rtns_bench, title="Plot of benchmark value and returns over time - Bonds")
X = equities.copy()
X.index = pd.to_datetime(X.index)
X = X.resample('W').sum()
X = X.replace(float('inf'), 0)
perf, value, rtns = strategy(X, window=30, horizon=1, benchmark=False)
plot_performance(value, rtns, title="Plot of portfolio value and returns over time - Equities")
perf_bench, value_bench, rtns_bench = strategy(X, horizon=1, benchmark=True)
plot_performance(value_bench, rtns_bench, title="Plot of benchmark value and returns over time - Equities")<jupyter_output><empty_output><jupyter_text>## MVP on the clusters<jupyter_code>equities_clusters = [['Pakistan', 'Saudi Arabia', 'Qatar', 'Egypt', 'United Arab Emirates'],
['Argentina', 'Greece', 'South Africa', 'Brazil', 'Colombia'],
['Turkey'],
['Thailand', 'Taiwan', 'Russia', 'China', 'Peru', 'Mexico', 'Malaysia', 'Korea', 'India', 'Poland', 'Philippines']]
bond_clusters = [['Bulgaria', 'China', 'Hungary', 'South Korea', 'Malaysia', 'Pakistan', 'Poland', 'Tunisia', 'Ukraine', 'Venezuela'],
['Bolivia'],
['Brazil', 'Colombia', 'Indonesia', 'Mexico', 'Panama', 'Peru', 'Philippines', 'Russia', 'Turkey', 'Uruguay', 'Vietnam', 'South Africa'],
['Chile', 'Costa Rica', 'Dominican Republic', 'Egypt', 'Guatemala', 'Israel', 'Jamaica', 'Lebanon', 'Qatar', 'El Salvador', 'Trinidad & Tobago']
]
for i in range(4):
X = equities.loc[equities.index.isin(pd.date_range('2016-09-01', '2022-01-01')), equities_clusters[i]].copy()
perf, value, rtns = strategy(X, window=30, horizon=1, benchmark=False)
plot_performance(value, rtns, title="Equity performance for cluster "+str(i+1))<jupyter_output><empty_output>
|
no_license
|
/Experiments/Loic stuff.ipynb
|
wingwingz/AFP
| 6 |
<jupyter_start><jupyter_text># Testing Purge Functions in AWS
## Purging files older than a threshold
import arrow
import os
filesPath = r"C:\scratch\removeThem"
criticalTime = arrow.now().shift(hours=+5).shift(days=-7)
for item in Path(filesPath).glob('*'):
if item.is_file():
itemTime = arrow.get(item.stat().st_mtime)
if itemTime < criticalTime:
print('Removing...' + str(item.absolute()))
os.remove(str(item.absolute()))
#pass
path = os.path.abspath('Caroe.mp4')
ext = path.split('.')[-1]
print(path)
print(ext)
<jupyter_output><empty_output>
|
permissive
|
/PurgeTests.ipynb
|
harrisonford/myPoCs
| 1 |
<jupyter_start><jupyter_text>## I. Data collection and annotation
### Process one of the Wiktionary dumps and extract all synonym sets for any language. Please do not choose English, Ukrainian, or Russian.
Swedish was chosen.
I used the latest dump - `svwiktionary-latest-pages-articles.xml`
The extraction script is `parsing.py`.
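Purely for illustration — this is not the actual `parsing.py` — a rough sketch of one way such an extraction could look; the "synonym" line marker and the export namespace below are assumptions about the dump format, not taken from the real script:<jupyter_code># Hypothetical sketch, NOT the author's parsing.py
import re
import xml.etree.ElementTree as ET
def extract_synsets(dump_path, ns='{http://www.mediawiki.org/xml/export-0.10/}'):
    # stream the XML dump page by page to keep memory usage low
    synsets = {}
    for _, elem in ET.iterparse(dump_path):
        if elem.tag == ns + 'page':
            title = elem.findtext(ns + 'title')
            text = elem.findtext('.//' + ns + 'text') or ''
            syns = []
            for line in text.splitlines():
                if 'synonym' in line.lower():            # assumed marker for a synonyms line
                    syns += re.findall(r'\[\[([^\]|#]+)', line)  # collect wiki-linked words
            if title and syns:
                synsets[title] = sorted(set(syns))
            elem.clear()
    return synsets<jupyter_output><empty_output><jupyter_text>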
The script's output:<jupyter_code>import pandas as pd
!ls
unpickled_df = pd.read_pickle("./syns.pkl")
unpickled_df<jupyter_output><empty_output><jupyter_text>### Determine the level of agreement between annotators (inter-annotator agreement) in the NUCLE Error Corpus.<jupyter_code>from itertools import combinations
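# The pairwise agreement computed below is essentially a Dice-style overlap:
#   agreement(A, B) = 2 * |matching annotations| / (|A| + |B|)
# It is calculated per annotator pair (and separately per error tag), then averaged over sentences.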
tags = """Any Vt3 Vm V0 Vform SVA ArtOrDet Nn Npos Pform Pref Prep Wci Wa Wform Wtone Srun Smod Spar Sfrag Ssub WOinc WOadv Trans Mec Rloc Cit Others Um""".split(" ")
import pathlib
curr_path = pathlib.Path().parent.absolute()
path = curr_path / 'official-2014.combined-withalt.m2'
lines = []
with open(path) as file:
lines = file.readlines()
comb = list(combinations(range(5), 2))
data_for_tag = {}
pairvise_agreement = {}
for t in tags:
data_for_tag[t]={}
for c in comb:
pairvise_agreement[c] = []
#prepare data
for t in tags:
for c in comb:
data_for_tag[t][c]=[]
examples = []
curr = []
for line in lines:
if line!="\n":
curr.append(line)
else:
examples.append(curr)
curr = []
def create_dict():
d = {}
for tag in tags:
d[tag] = []
return d
def get_anotator(s):
return int(s.strip()[-1])
def get_anotation_string(s, sentense_number, tag):
i = s.find("REQUIRED")
s = s.strip()[2:i]
token_range = ""
count = 0
for x in s:
if x=="|":
break
count += 1
token_range += x
s = s[count+3:]
count = 0
anotation_tag=""
for x in s:
if x=="|":
break
anotation_tag += x
count += 1
s = s[count+3:]
if tag!="Any":
if anotation_tag!=tag:
return ""
correction=""
count=0
for x in s:
if x=="|":
break
correction+=x
count+=1
return " ".join([str(sentense_number), token_range, correction])
def get_anotation_range(s):
range_string = ""
for x in s[2:]:
if x=="|":
break
range_string+=x
return tuple(map(int, range_string.split(' ')))
def get_fix(s):
return get_anotation_string(s, 0, "Any").split(' ')[-1]
def dist(r):
return r[1]-r[0]
def process_block(l, index):
anotations = l[1:]
if len(anotations) > 1:
anotation_tags = create_dict()
t = [""]*5
ranges = [[] for _ in range(5)]
fixes = [[] for _ in range(5)]
curr_anotator = -1
for x in anotations:
an = get_anotator(x)
if an != curr_anotator:
if curr_anotator!=-1:
t[curr_anotator] = anotation_tags
anotation_tags = create_dict()
curr_anotator = an
ranges[an].append(get_anotation_range(x))
fixes[an].append(get_fix(x))
for tag in tags:
anotation_tags[tag].append(get_anotation_string(x, index, tag))
t[curr_anotator] = anotation_tags
#append processed lines to data
indexes = [i for i, j in enumerate(t) if j!='']
if len(indexes)>1:
#process in general
for idx in combinations(indexes, 2):
i, j = idx
ri=ranges[i]
rj=ranges[j]
fi=fixes[i]
fj=fixes[j]
#go through ranges (check if range needs to be expanded) for left term
expanded_ri = []
expanded_fi = []
for i, r in enumerate(ri):
if dist(r)==2:
fix = fi[i]
if len(fix.split())==1: #we're dealing with 2->1 word fix, meaning second word would be deleted
expanded_ri.append((r[0],r[0]+1))
expanded_fi.append(fi[i])
expanded_ri.append((r[1],r[1]+1))
expanded_fi.append("")
else:
expanded_ri.append(r)
expanded_fi.append(fi[i])
#go through ranges (check if range needs to be expanded) for right term
expanded_rj = []
expanded_fj = []
for i, r in enumerate(rj):
if dist(r)==2:
fix = fj[i]
if len(fix.split())==1: #we're dealing with 2->1 word fix, meaning second word would be deleted
expanded_rj.append((r[0],r[0]+1))
expanded_fj.append(fj[i])
expanded_rj.append((r[1],r[1]+1))
expanded_fj.append("")
else:
expanded_rj.append(r)
expanded_fj.append(fj[i])
g = [expanded_rj.count(el) for el in expanded_ri]
if len(g)>0:
c = sum(g)*2/(len(expanded_ri)+len(expanded_rj))
pairvise_agreement[idx].append(c)
#process for each tag
for tag in tags:
for idx in combinations(indexes, 2):
d1 = t[idx[0]][tag]
d2 = t[idx[1]][tag]
g = [d2.count(el) for el in d1 if el!='']
if len(g)!=0:
c = sum(g)*2/(len(d1)+len(d2))
data_for_tag[tag][idx].append(c)
for i, e in enumerate(examples):
process_block(e,i)
#calculating result
print("Agreement by PAIR:")
all=[]
for c in comb:
d = pairvise_agreement[c]
all.extend(d)
if len(d)!=0:
combined_score = sum(d)/len(d)
print("Pair: {} Score: {}".format(c, combined_score))
print("General agreement:")
print(sum(all)/len(all))
print("Agreement by TAG:")
for tag in tags:
data = data_for_tag[tag]
combined_score = 0
count = 0
for k in data:
d = data[k]
if len(d)!=0: # Means this annotators annotated same sentences with this tag
combined_score += sum(d)/len(d)
count += 1
res = combined_score/count if count!=0 else 0
print("For tag: {} score is: {}".format(tag, res))<jupyter_output>Agreement by PAIR
Pair: (0, 1) Score: 0.33398914582172073
Pair: (0, 2) Score: 0.4069804408428973
Pair: (0, 3) Score: 0.5096655053092518
Pair: (0, 4) Score: 0.7188692571045512
Pair: (1, 2) Score: 0.49448138169155953
Pair: (1, 3) Score: 0.5614817226249059
Pair: (1, 4) Score: 0.6329485329485329
Pair: (2, 3) Score: 0.5457730206877925
Pair: (2, 4) Score: 0.5416666666666667
Pair: (3, 4) Score: 0.5035714285714286
General agreement:
0.3952901650568998
Agreement by TAG
For tag: Any score is: 0.4339096067012667
For tag: Vt3 score is: 0
For tag: Vm score is: 0.059583316422409
For tag: V0 score is: 0.08932971231964114
For tag: Vform score is: 0.1371039023483001
For tag: SVA score is: 0.10700282848347095
For tag: ArtOrDet score is: 0.1578472245092778
For tag: Nn score is: 0.16807668845741963
For tag: Npos score is: 0.05250616612549034
For tag: Pform score is: 0.0683225705034294
For tag: Pref score is: 0.08492123276948227
For tag: Prep score is: 0.15783307874541927
For tag: Wci score is: 0.140629583[...]
|
non_permissive
|
/students/YehorShapanov/03-homework/homework03.ipynb
|
DmytroBabenko/prj-nlp-2020
| 2 |
<jupyter_start><jupyter_text>## Size Bins Style
Helper function for quickly creating a size bins style. Use `help(size_bins_style)` to get more information.<jupyter_code>from cartoframes.auth import set_default_credentials
set_default_credentials('cartoframes')
from cartoframes.viz import Layer, size_bins_style
Layer('clev_sales', size_bins_style('sale_price'))<jupyter_output><empty_output>
|
permissive
|
/examples/styles/size_bins_style.ipynb
|
Apex-Consulting/cartoframes
| 1 |
<jupyter_start><jupyter_text>## Docker
docker run -d -p 8888:8888 -p 6006:6006 -v /home/user:/root/shared --runtime=nvidia vovacher/dl3:gpu
docker rmi $(docker images -q --filter "dangling=true")
## Pick a GPU to work on (make sure it is free!)
Notice how it grabs all the GPU memory immediately (even though it has done nothing so far)<jupyter_code>import os
os.environ["CUDA_DEVICE_ORDER"]="PCI_BUS_ID"
# The GPU id to use, usually either "0" or "1", "2", "3"
os.environ["CUDA_VISIBLE_DEVICES"]="1"
# Do other imports now...
import keras
!pip install pydot
!pip install graphviz<jupyter_output>Requirement already satisfied: graphviz in /opt/conda/lib/python3.6/site-packages (0.10.1)
[31mdistributed 1.21.8 requires msgpack, which is not installed.[0m
[33mYou are using pip version 10.0.1, however version 18.1 is available.
You should consider upgrading via the 'pip install --upgrade pip' command.[0m
<jupyter_text>Make sure to load up the GPUs (push their utilization up)!
## Release GPUs' memory (so others can use them!)
If you have a separate notebook running, you might still be holding on to the GPUs. The following command releases the memory in the GPUs.
`K.clear_session()`
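For convenience, a minimal sketch of doing this (assuming the TensorFlow backend and a standard `keras` install):

    from keras import backend as K
    # Destroys the current TF graph/session so the GPU memory is handed back
    K.clear_session()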
## Which GPU am I using?

To check if you really are utilizing all of your GPUs, specifically NVIDIA ones, you can monitor your usage in the terminal using:
`watch -n0.5 nvidia-smi`<jupyter_code>#Check the GPUs
!nvidia-smi<jupyter_output>Wed Dec 12 16:54:46 2018
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 410.79 Driver Version: 410.79 CUDA Version: 10.0 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
|===============================+======================+======================|
| 0 Tesla K80 Off | 000086EB:00:00.0 Off | 0 |
| N/A 38C P8 27W / 149W | 11MiB / 11441MiB | 0% Default |
+-------------------------------+----------------------+----------------------+
+-----------------------------------------------------------------------------+
| Processes: GPU Memor[...]<jupyter_text># Train Alexnet on CIFAR 10
In 2012, the deep learning network created by Alex Krizhevsky, Geoffrey Hinton and Ilya Sutskever (now largely known as AlexNet) blew everyone out of the water to win the ImageNet Large Scale Visual Recognition Challenge (ILSVRC). This heralded the new era of deep learning. AlexNet is one of the most influential modern deep learning networks in machine vision, using multiple convolutional and dense layers and distributed computing with GPUs.
Like LeNet-5, AlexNet is one of the most important & influential neural network architectures that demonstrate the power of convolutional layers in machine vision. So, let's build AlexNet with Keras first, then move on to building it with the functional API.
## AlexNet in Keras
Here we make a few changes in order to simplify a few things and further optimise the training outcome.
* First of all, we use the sequential model and eliminate the parallelism for simplification.
* For example, the first convolutional layer had 2 parallel groups with 48 kernels each. Instead, we combine them into a single layer with 96 kernels.
* The original architecture did not have batch normalisation after every layer (although it had normalisation between a few layers) and dropouts. Here we put a batch normalisation layer after every pooling block (i.e., before the input to the next block) and dropouts between the fully-connected layers to reduce overfitting.
* When to use batch normalisation is difficult. Everyone seems to have opinions or evidence that supports their opinions. Without going into too much detail, I decided to normalise before the input as it seems to make sense statistically.
## Task Train AlexNet model on Cifar10 data
Using the code provided below as a basis, please specify AlexNet using the Functional API (as opposed to the sequential API) and train it on the Cifar10 dataset which introduced below.
### Summarize your network
Summarize your network using the following:
1. Summarize your Model using:
`from keras.models import Sequential
from keras.layers import Dense
model = Sequential()
model.add(Dense(2, input_dim=1, activation='relu'))
model.add(Dense(1, activation='sigmoid'))
print(model.summary())`
### Visualize your model
* Visualize your model as follows:
`from keras.models import Sequential
from keras.layers import Dense
from keras.utils.vis_utils import plot_model
model = Sequential()
model.add(Dense(2, input_dim=1, activation='relu'))
model.add(Dense(1, activation='sigmoid'))
plot_model(model, to_file='model_plot.png', show_shapes=True, show_layer_names=True)`
### Report your results and discuss
Please report your experimental results using the following format and discuss key findings (please try multiple experimental hyperparameter settings if time permits):
| Model | Detail| Input size| Top-1 Test Acc| Param(M)| Mult-Adds (B)| Depth|train time|Num of Epochs|batchsize|GPU desc|
| ------------- |:-------------:| -----|---------|---------|---------|----|---|---|---|---|
| AlexNet| XXXX |224x224| XXX| 60M| XXB| XX |XXX|XXX|XXX|XXX|
In addition, produce training progress plots like the following and discuss your findings:
## The CIFAR-10 dataset in more detail
The goal of this exercise is to learn an image classifier using the CIFAR10 dataset. This is a 10-class 32x32x3 image dataset which can be downloaded directly from Keras' datasets.
The CIFAR-10 and CIFAR-100 are labeled subsets of the 80 million tiny images dataset. They were collected by Alex Krizhevsky, Vinod Nair, and Geoffrey Hinton.
The CIFAR-10 dataset consists of 60000 32x32 colour images in 10 classes, with 6000 images per class. There are 50000 training images and 10000 test images.
The dataset is divided into five training batches and one test batch, each with 10000 images. The test batch contains exactly 1000 randomly-selected images from each class. The training batches contain the remaining images in random order, but some training batches may contain more images from one class than another. Between them, the training batches contain exactly 5000 images from each class.
Here are the classes in the dataset, as well as 10 random images from each:
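The image grid itself is not reproduced here; as an illustrative (not authoritative) sketch, you could regenerate something similar from the Keras copy of the dataset like this:

    import numpy as np
    import matplotlib.pyplot as plt
    from keras.datasets import cifar10

    (x_tr, y_tr), _ = cifar10.load_data()
    class_names = ["airplane", "automobile", "bird", "cat", "deer",
                   "dog", "frog", "horse", "ship", "truck"]
    fig, axes = plt.subplots(10, 10, figsize=(10, 10))
    for c in range(10):
        # pick 10 random images of class c
        idx = np.random.choice(np.where(y_tr.flatten() == c)[0], 10, replace=False)
        for j, i in enumerate(idx):
            axes[c, j].imshow(x_tr[i])
            axes[c, j].axis('off')
        axes[c, 0].set_title(class_names[c], fontsize=8)
    plt.show()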
<jupyter_code>def display_nn_model(model, file_name):
print(model.summary())
plot_model(model, to_file=file_name, show_shapes=True, show_layer_names=True)<jupyter_output><empty_output><jupyter_text>## Starter Code for your task<jupyter_code># (1) Importing dependency
from __future__ import print_function
from __future__ import division
import os
import keras
from keras import backend as K
from keras.models import Sequential
from keras.layers import Dense, Activation, Dropout, Flatten,\
Conv2D, MaxPooling2D
from keras.layers.normalization import BatchNormalization
from keras.utils.vis_utils import plot_model
from keras.callbacks import ModelCheckpoint, TensorBoard
import numpy as np
from sklearn.metrics import accuracy_score
from keras.optimizers import Adam
np.random.seed(1000)
from keras.optimizers import Adam
#global variable
nb_epoch = 10
# (2) Get CIFAR10 Dataset
from keras.datasets import cifar10
(trainX, trainY), (testX, testY) = cifar10.load_data()
trainX = trainX.astype('float32')
testX = testX.astype('float32')
trainX /= 255.
testX /= 255.
batch_size = 100
nb_classes = 10
img_rows, img_cols = 32, 32
img_channels = 3
img_dim = (img_channels, img_rows, img_cols) if K.image_dim_ordering() == "th" else (img_rows, img_cols, img_channels)
# convert to one hot encoing
y_train = keras.utils.to_categorical(trainY, nb_classes)
y_test = keras.utils.to_categorical(testY, nb_classes)
# (3) Create a sequential model
# AlexNet Define the Model
# Feature BODY - Classification HEAD
model = Sequential()
# FEATURE Body
# model.add(Conv2D(96, (11,11), strides=(4,4), activation='relu', padding='same', input_shape=(img_height, img_width, channel,)))
# for original Alexnet
# CIFAR10 images are only 32x32x3 and not 224x224x3 so use smaller kernel size here
# Also use 96 kernels and not 2 x 48 as in the original
model.add(Conv2D(96, (3,3), strides=(2,2), activation='relu', padding='same',input_shape=img_dim))
model.add(MaxPooling2D(pool_size=(2, 2), strides=(2,2)))
# Local Response normalization for Original Alexnet
model.add(BatchNormalization())
# 2nd Convolutional Layer
model.add(Conv2D(256, (5,5), activation='relu', padding='same'))
model.add(MaxPooling2D(pool_size=(3, 3), strides=(2,2)))
# Local Response normalization for Original Alexnet
model.add(BatchNormalization())
# Convolutional Layer 3, 4, 5
model.add(Conv2D(384, (3,3), activation='relu', padding='same'))
model.add(Conv2D(384, (3,3), activation='relu', padding='same'))
model.add(Conv2D(256, (3,3), activation='relu', padding='same'))
model.add(MaxPooling2D(pool_size=(3, 3), strides=(2,2)))
# Local Response normalization for Original Alexnet
model.add(BatchNormalization())
# CLASSIFICATION HEAD
model.add(Flatten())
model.add(Dense(4096, activation='tanh'))
model.add(Dropout(0.5))
model.add(Dense(4096, activation='tanh'))
model.add(Dropout(0.5))
model.add(Dense(nb_classes, activation='softmax'))
#print(model.summary())
#plot_model(model, to_file='model_plot1.png', show_shapes=True, show_layer_names=True)
display_nn_model(model, 'model_plot1.png')<jupyter_output>Downloading data from https://www.cs.toronto.edu/~kriz/cifar-10-python.tar.gz
170500096/170498071 [==============================] - 18s 0us/step
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
conv2d_1 (Conv2D) (None, 16, 16, 96) 2688
_________________________________________________________________
max_pooling2d_1 (MaxPooling2 (None, 8, 8, 96) 0
_________________________________________________________________
batch_normalization_1 (Batch (None, 8, 8, 96) 384
_________________________________________________________________
conv2d_2 (Conv2D) (None, 8, 8, 256) 614656
_________________________________________________________________
max_pooling2d_2 (MaxPooling2 (None, 3, 3, 256) 0
______________________________________________________________[...]<jupyter_text>## Visualize the architecture (RERUN to see the latest version)
(RERUN to see the latest version)
<jupyter_code>history = {"testLoss": [],
"testAccuracy": [],
}
class TestCallback(keras.callbacks.Callback):
def __init__(self, test_data):
self.test_data = test_data
def on_epoch_end(self, epoch, logs={}):
x, y = self.test_data
loss, acc = self.model.evaluate(x, y, verbose=0)
history["testLoss"].append(loss)
history["testAccuracy"].append(acc)
print('Testing loss: {}, acc: {}'.format(loss, acc))<jupyter_output><empty_output><jupyter_text>Create callback to trace metrics live in TF Tensorboard<jupyter_code>from keras.callbacks import ModelCheckpoint, TensorBoard
tb = TensorBoard()<jupyter_output><empty_output><jupyter_text>Callback to save best model weights<jupyter_code>import os
os.makedirs("./snapshots", exist_ok=True)
mc = ModelCheckpoint("./snapshots/cifar10_cnn_best.h5", save_best_only=True)<jupyter_output><empty_output><jupyter_text>Train!## Train AlexNet on my GPU### Which GPU am I using?<jupyter_code>#Check the GPUs
!nvidia-smi
%%time
from sklearn.metrics import accuracy_score
# (4) Compile
model.compile(loss='categorical_crossentropy', optimizer='adam',\
metrics=['accuracy'])
# (5) Train
hist = model.fit(trainX, y_train, batch_size=64, epochs=nb_epoch, verbose=1, \
validation_split=0.2, shuffle=True,
callbacks=[TestCallback((testX, y_test)),tb, mc])
# Test the model
score = model.evaluate(testX, y_test, verbose=0)
print('Test loss:', score[0])
print('Test accuracy:', score[1])
history['testAccuracy'][-5:]
hist.history['val_acc'][-5:]<jupyter_output><empty_output><jupyter_text>## Plot learning progress<jupyter_code>import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline
def plot_learning_progress(hist, imgNet):
if len(hist.history["loss"]) > 1:
# plot the training loss and accuracy
N = np.arange(0, len(hist.history["loss"]))
plt.style.use("ggplot")
plt.figure()
fig, ax1 = plt.subplots()
ax1.plot(N, hist.history["loss"], "b.", label="train_loss")
ax1.plot(N, hist.history["val_loss"], "b", label="val_loss")
ax1.set_xlabel("Epoch #")
# Make the y-axis label, ticks and tick labels match the line color.
ax1.set_ylabel('loss', color='b')
ax1.tick_params('y', colors='b')
color = "g"
ax2 = ax1.twinx()
ax2.plot(N, hist.history["acc"], color+".", label="train_acc")
ax2.plot(N, hist.history["val_acc"], color, label="val_acc")
        plt.title("Cross entropy Loss and Accuracy [to Epoch {}]\n{}".format(
            len(hist.history["loss"]), imgNet))
ax2.set_ylabel('Accuracy', color=color)
ax2.tick_params('y', colors=color)
ax1.legend()
ax2.legend()
plot_learning_progress(hist, 'AlexNet')<jupyter_output><empty_output><jupyter_text>## Load best model<jupyter_code>#Load best model
from keras.models import Model, load_model
model = load_model("./snapshots/cifar10_cnn_best.h5")<jupyter_output><empty_output><jupyter_text>## Visualize the confusion matrix
Lets plot confusion matrix for this CIFAR dataset.
<jupyter_code>## Visualization of confusion matrix
#The below code takes a confusion matrix and produces a nice and shiny visualization
# import for showing the confusion matrix
import itertools
from sklearn.metrics import confusion_matrix
import matplotlib.pyplot as plt
import numpy as np
def plot_confusion_matrix(cm, classes,
normalize=False,
title='Confusion matrix',
cmap=plt.cm.Blues):
"""
This function prints and plots the confusion matrix.
Normalization can be applied by setting `normalize=True`.
"""
if normalize:
cm = cm.astype('float') / cm.sum(axis=1)[:, np.newaxis]
print("Normalized confusion matrix")
else:
print('Confusion matrix, without normalization')
print(cm)
plt.imshow(cm, interpolation='nearest', cmap=cmap)
plt.title(title)
plt.colorbar()
tick_marks = np.arange(len(classes))
plt.xticks(tick_marks, classes, rotation=45)
plt.yticks(tick_marks, classes)
fmt = '.2f' if normalize else 'd'
thresh = cm.max() / 2.
for i, j in itertools.product(range(cm.shape[0]), range(cm.shape[1])):
plt.text(j, i, format(cm[i, j], fmt),
horizontalalignment="center",
color="white" if cm[i, j] > thresh else "black")
plt.ylabel('True label')
plt.xlabel('Predicted label')
plt.tight_layout()
results = model.predict(testX)
# convert from class probabilities to actual class predictions
predicted_classes = np.argmax(results, axis=1)
# Names of predicted classes
class_names = ["airplane", "automobile", "bird", "cat", "deer", "dog", "frog", "horse", "ship", "truck"]
# Generate the confusion matrix
cnf_matrix = confusion_matrix(testY, predicted_classes)
# Plot non-normalized confusion matrix
plt.figure()
plot_confusion_matrix(cnf_matrix, classes=class_names,
title='Confusion matrix, without normalization')
plt.show()<jupyter_output>Confusion matrix, without normalization
[[701 25 122 24 31 6 5 9 37 40]
[ 35 822 16 12 1 2 9 6 22 75]
[ 52 12 623 38 130 57 62 14 6 6]
[ 19 12 122 389 136 150 132 14 7 19]
[ 6 3 114 48 730 22 46 14 13 4]
[ 13 8 112 158 90 513 66 24 6 10]
[ 7 8 87 30 75 8 774 3 5 3]
[ 11 8 66 36 161 95 11 584 2 26]
[ 77 34 46 22 18 9 4 1 772 17]
[ 39 75 20 22 7 5 14 10 24 784]]
<jupyter_text>## Launch Tensorboard<jupyter_code>!tensorboard --logdir=./logs<jupyter_output><empty_output><jupyter_text># Using multiple GPUs for training
Keras provides a clever divide-and-conquer strategy for training over multiple GPUs, based on distributing each mini-batch to the available GPUs for the gradient calculation (MAP) phase and then using the CPU for the weight update (REDUCE). **This induces quasi-linear speedup on up to 8 GPUs.** (On AlexNet we see a 4X speedup on 4 GPUs, even though it is a small dataset.)
To use multiple GPUs First it replicates a model over each different GPUs using the following function:
tf.keras.utils.multi_gpu_model(
model,
gpus,
cpu_merge=True,
cpu_relocation=False
)
Specifically, this function implements single-machine multi-GPU data parallelism. It works in the following way:
* Divide the model's input(s) into multiple sub-batches.
* Apply a model copy on each sub-batch. Every model copy is executed on a dedicated GPU.
* Concatenate the results (on CPU) into one big batch.
* E.g. if your batch_size is 64 and you use gpus=2, then we will divide the input into 2 sub-batches of 32 samples, process each sub-batch on one GPU, then return the full batch of 64 processed samples.
Arguments:
* model: A Keras model instance. To avoid OOM errors, this model could have been built on CPU, for instance (see usage example below).
* gpus: Integer >= 2, number of GPUs on which to create model replicas.
* cpu_merge: A boolean value to identify whether to force merging model weights under the scope of the CPU or not.
* cpu_relocation: A boolean value to identify whether to create the model's weights under the scope of the CPU. If the model is not defined under any preceding device scope, you can still rescue it by activating this option.
This function is defined in tensorflow/python/keras/utils/multi_gpu_utils.py.
For more information see [Keras help page on this]( https://www.tensorflow.org/api_docs/python/tf/keras/utils/multi_gpu_model)
## Example of using multiple GPUs for training in code
.......
assemble model layers
......................
# Instantiate model.
model = Model(inputs=inputs, outputs=outputs)
#
# DISTRIBUTE training over all GPUs
# use 4 gpus to train the model
model = multi_gpu_model(model, gpus=num_gpus)
model.compile(loss='categorical_crossentropy',
optimizer=Adam(lr=lr_schedule(0)),
metrics=['accuracy'])
model.summary()
print(model_type)
model.fit(....)
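As a slightly more concrete sketch (illustrative only; the toy model, shapes and variable names below are placeholders, not the AlexNet used later in this notebook):

    import tensorflow as tf
    from keras.models import Sequential
    from keras.layers import Dense
    from keras.utils import multi_gpu_model

    num_gpus = 4  # illustrative; set to the number of GPUs you actually have

    # Build the template model on the CPU so its weights live in host memory
    with tf.device("/cpu:0"):
        model = Sequential()
        model.add(Dense(64, activation='relu', input_shape=(100,)))
        model.add(Dense(10, activation='softmax'))

    # Replicate the model on the GPUs; sub-batches are processed in parallel
    # and the results are merged back on the CPU
    parallel_model = multi_gpu_model(model, gpus=num_gpus)
    parallel_model.compile(loss='categorical_crossentropy', optimizer='adam',
                           metrics=['accuracy'])
    # parallel_model.fit(x, y, batch_size=64 * num_gpus, epochs=10)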
## Train on 4 GPUs: AlexNet with no augmentation <jupyter_code># (1) Importing dependency
from __future__ import print_function
from __future__ import division
import os
import keras
from keras import backend as K
from keras.models import Sequential
from keras.layers import Dense, Activation, Dropout, Flatten,\
Conv2D, MaxPooling2D
from keras.layers.normalization import BatchNormalization
from keras.utils.vis_utils import plot_model
from keras.callbacks import ReduceLROnPlateau, TensorBoard
from keras.callbacks import ModelCheckpoint
from keras.callbacks import ReduceLROnPlateau, LearningRateScheduler
import numpy as np
from sklearn.metrics import accuracy_score
from keras.optimizers import Adam
num_gpus = 1
np.random.seed(1000)
# (2) Get CIFAR10 Dataset
from keras.datasets import cifar10
(trainX, trainY), (testX, testY) = cifar10.load_data()
trainX = trainX.astype('float32')
testX = testX.astype('float32')
trainX /= 255.
testX /= 255.
batch_size = 64 # will multiply by at FIT time *num_gpus
nb_classes = 10
#nb_epoch = 100
img_rows, img_cols = 32, 32
img_channels = 3
img_dim = (img_channels, img_rows, img_cols) if K.image_dim_ordering() == "th" else (img_rows, img_cols, img_channels)
# convert to one hot encoing
y_train = keras.utils.to_categorical(trainY, nb_classes)
y_test = keras.utils.to_categorical(testY, nb_classes)
# (3) Create a sequential model
# AlexNet Define the Model
# Feature BODY - Classification HEAD
def createAlexNet(img_dim):
model = Sequential()
# FEATURE Body
# model.add(Conv2D(96, (11,11), strides=(4,4), activation='relu', padding='same', input_shape=(img_height, img_width, channel,)))
# for original Alexnet
# CIFAR10 images are only 32x32x3 and not 224x224x3 so use smaller kernel size here
    # Also use 96 kernels and not 2 x 48 as in the original
model.add(Conv2D(96, (3,3), strides=(2,2), activation='relu', padding='same',input_shape=img_dim))
model.add(MaxPooling2D(pool_size=(2, 2), strides=(2,2)))
# Local Response normalization for Original Alexnet
model.add(BatchNormalization())
# 2nd Convolutional Layer
model.add(Conv2D(256, (5,5), activation='relu', padding='same'))
model.add(MaxPooling2D(pool_size=(3, 3), strides=(2,2)))
# Local Response normalization for Original Alexnet
model.add(BatchNormalization())
# Convolutional Layer 3, 4, 5
model.add(Conv2D(384, (3,3), activation='relu', padding='same'))
model.add(Conv2D(384, (3,3), activation='relu', padding='same'))
model.add(Conv2D(256, (3,3), activation='relu', padding='same'))
model.add(MaxPooling2D(pool_size=(3, 3), strides=(2,2)))
# Local Response normalization for Original Alexnet
model.add(BatchNormalization())
# CLASSIFICATION HEAD
model.add(Flatten())
model.add(Dense(4096, activation='tanh'))
model.add(Dropout(0.5))
model.add(Dense(4096, activation='tanh'))
model.add(Dropout(0.5))
model.add(Dense(nb_classes, activation='softmax'))
#print(model.summary())
#plot_model(model, to_file='model_plotAlexNet.png', show_shapes=True, show_layer_names=True)
display_nn_model(model, 'model_plotAlexNet.png')
return model<jupyter_output><empty_output><jupyter_text>### Distribution training at complie time<jupyter_code># DISTRIBUTE training over all GPUs
# use 4 gpus to train the model
from keras.utils import multi_gpu_model
import tensorflow as tf
# check to see if we are compiling using just a single GPU
if num_gpus <= 1:
print("[INFO] training with 1 GPU...")
model = createAlexNet(img_dim)
# otherwise, we are compiling using multiple GPUs
else:
print("[INFO] training with {} GPUs...".format(num_gpus))
# we'll store a copy of the model on *every* GPU and then combine
# the results from the gradient updates on the CPU
with tf.device("/cpu:0"):
# initialize the model
model = createAlexNet(img_dim)
# make the model parallel
model = multi_gpu_model(model, gpus=num_gpus)
#model = multi_gpu_model(model, gpus=num_gpus)
#model.compile(loss='categorical_crossentropy',
# optimizer='adam',
# metrics=['accuracy'])
#model.summary()
# (5) Train
#hist = model.fit(trainX, y_train, batch_size=64, epochs=nb_epoch, verbose=1, \
# validation_split=0.2, shuffle=True,
# callbacks=[TestCallback((testX, y_test)),tb, mc])
history = {"testLoss": [],
"testAccuracy": [],
}
class TestCallback(keras.callbacks.Callback):
def __init__(self, test_data):
self.test_data = test_data
def on_epoch_end(self, epoch, logs={}):
x, y = self.test_data
loss, acc = self.model.evaluate(x, y, verbose=0)
history["testLoss"].append(loss)
history["testAccuracy"].append(acc)
print('Testing loss: {}, acc: {}'.format(loss, acc))
#Create callback to trace metrics live in TF Tensorboard
from keras.callbacks import ModelCheckpoint, TensorBoard
tb = TensorBoard()
#Callback to save best model weights
import os
os.makedirs("./snapshots", exist_ok=True)
mc = ModelCheckpoint("./snapshots/cifar10_cnn_best_4GPUs.h5", save_best_only=True)<jupyter_output><empty_output><jupyter_text>### Start Training on all GPUs!#### Which GPU am I using? Watch it from the command line during training
To check if you really are utilizing all of your GPUs, specifically NVIDIA ones, you can monitor your usage in the terminal using:
`watch -n0.5 nvidia-smi`<jupyter_code>#Check the GPUs
# nvidia-smi
from tensorflow.python import keras
print(keras.__version__)
#2.1.6-tf
%%time
from sklearn.metrics import accuracy_score
# (4) Compile
model.compile(loss='categorical_crossentropy', optimizer='adam',\
metrics=['accuracy'])
print("[INFO] training network...")
# (5) Train: set steps_per_epoch, i.e., the number of batches: len(trainX) // (batch_size*num_gpus)
#hist = model.fit(trainX, y_train,
#batch_size=batch_size*num_gpus,
# epochs=nb_epoch, verbose=1, \
# validation_split=0.2, shuffle=True,
# steps_per_epoch=int(len(trainX)*0.8 // (batch_size*num_gpus)),
# validation_steps = int(len(trainX)*0.2 // (batch_size*num_gpus)),
# callbacks=[TestCallback((testX, y_test)),tb, mc])
hist = model.fit(trainX, y_train,
#batch_size=batch_size*num_gpus,
epochs=nb_epoch, verbose=1, \
validation_data=(testX, y_test), shuffle=True,
steps_per_epoch=len(trainX) // (batch_size*num_gpus),
validation_steps = 10, #len(testX) // (batch_size*num_gpus),
callbacks=[TestCallback((testX, y_test)),tb, mc])
# Test the model
score = model.evaluate(testX, y_test, verbose=0)
print('Test loss:', score[0])
print('Test accuracy:', score[1])
12 // 3<jupyter_output><empty_output><jupyter_text>## Release GPU memory (so others can use them!)
If you have a separate notebook running, you might still be holding on to the GPUs. The following command releases the memory in the GPUs.
`K.clear_session()`
<jupyter_code>history['testAccuracy'][-5:]
hist.history['val_acc'][-5:]<jupyter_output><empty_output><jupyter_text>## Plot learning progress<jupyter_code>plot_learning_progress(hist, 'AlexNet')<jupyter_output><empty_output><jupyter_text># AlexNet with Image Augmentation on CIFAR10
Data preparation is required when working with neural networks and deep learning models. Increasingly, data augmentation is also required on more complex object recognition tasks.
Here you will discover how to use data augmentation with your image datasets when developing and evaluating deep learning models in Python with Keras.
After reading this post, you will know:
* About the image augmentation API provided by Keras and how to use it with your models.
* How to augment data with random rotations, shifts and flips.
Keras provides the ImageDataGenerator class that defines the configuration for image data preparation and augmentation. This includes capabilities such as:
* Sample-wise standardization.
* Feature-wise standardization.
* ZCA whitening.
* Random rotation, shifts, shear and flips.
* Dimension reordering.
* Save augmented images to disk.
An augmented image generator can be created as follows:
`datagen = ImageDataGenerator()`
Rather than performing the operations on your entire image dataset in memory, the API is designed to be iterated by the deep learning model fitting process, creating augmented image data for you just-in-time. This reduces your memory overhead, but adds some additional time cost during model training.
After you have created and configured your ImageDataGenerator, you must fit it on your data. This will calculate any statistics required to actually perform the transforms to your image data. You can do this by calling the fit() function on the data generator and pass it your training dataset.
`datagen.fit(train)`
The data generator itself is in fact an iterator, returning batches of image samples when requested. We can configure the batch size and prepare the data generator and get batches of images by calling the flow() function.
X_batch, y_batch = next(datagen.flow(train, train, batch_size=32))
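Putting these pieces together, a minimal illustrative sketch for CIFAR-10 (variable names are placeholders) might look like:

    from keras.datasets import cifar10
    from keras.preprocessing.image import ImageDataGenerator
    from keras.utils import np_utils

    (x_train, y_train), _ = cifar10.load_data()
    x_train = x_train.astype('float32') / 255.
    y_train = np_utils.to_categorical(y_train, 10)

    datagen = ImageDataGenerator(rotation_range=15,
                                 width_shift_range=0.1,
                                 height_shift_range=0.1,
                                 horizontal_flip=True)
    datagen.fit(x_train)  # only strictly needed for statistics-based transforms (e.g. ZCA)

    # flow() returns an iterator; pull a single augmented batch from it
    x_batch, y_batch = next(datagen.flow(x_train, y_train, batch_size=32))
    print(x_batch.shape, y_batch.shape)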
You can learn more about the Keras image data generator API in the [Keras documentation](https://keras.io/preprocessing/image/).

## TASK: AlexNet with Image Augmentation
Using the skeleton code below (please use AlexNet defined with the functional API), use image augmentation to train an AlexNet network on the CIFAR10 dataset. Please report your experimental results using the following format, and contrast and DISCUSS them with respect to no augmentation:
| Model | Detail| Input size| Top-1 Test Acc| Param(M)| Mult-Adds (B)| Depth|train time|Num of Epochs|batchsize|GPU desc|
| ------------- |:-------------:| -----|---------|---------|---------|----|---|---|---|---|
| AlexNet| XXXX |224x224| XXX| 60M| XXB| XX |XXX|XXX|XXX|XXX|
<jupyter_code>def create_functional_api_model(img_dim, save_model_img_name):
DROPOUT = 0.5
PADDING = "same"
# (3) Create a sequential model
# AlexNet Define the Model
# Feature BODY - Classification HEAD
model_input = Input(shape = img_dim)
# FEATURE Body
# model.add(Conv2D(96, (11,11), strides=(4,4), activation='relu', padding='same', input_shape=(img_height, img_width, channel,)))
# for original Alexnet
# CIFAR10 images are only 32x32x3 and not 224x224x3 so use smaller kernel size here
    # Also use 96 kernels and not 2 x 48 as in the original
# First convolutional Layer (96x3x3)
z = Conv2D(filters = 96, kernel_size = (3,3), strides = (2,2), activation = "relu", padding=PADDING)(model_input)
z = MaxPooling2D(pool_size = (2,2), strides=(2,2))(z)
z = BatchNormalization()(z)
# Second convolutional Layer (256x5x5)
z = ZeroPadding2D(padding = (1,1))(z)
z = Convolution2D(filters = 256, kernel_size = (5,5), strides = (1,1), activation = "relu", padding=PADDING)(z)
z = MaxPooling2D(pool_size = (3,3), strides=(2,2))(z)
z = BatchNormalization()(z)
# Rest 3 convolutional layers
z = ZeroPadding2D(padding = (1,1))(z)
z = Convolution2D(filters = 384, kernel_size = (3,3), strides = (1,1), activation = "relu", padding=PADDING)(z)
z = ZeroPadding2D(padding = (1,1))(z)
z = Convolution2D(filters = 384, kernel_size = (3,3), strides = (1,1), activation = "relu", padding=PADDING)(z)
z = ZeroPadding2D(padding = (1,1))(z)
z = Convolution2D(filters = 256, kernel_size = (3,3), strides = (1,1), activation = "relu", padding=PADDING)(z)
z = MaxPooling2D(pool_size = (3,3), strides=(2,2))(z)
z = Flatten()(z)
z = Dense(4096, activation="tanh")(z)
z = Dropout(DROPOUT)(z)
z = Dense(4096, activation="tanh")(z)
z = Dropout(DROPOUT)(z)
model_output = Dense(nb_classes, activation="softmax")(z)
model = Model(model_input, model_output)
display_nn_model(model, save_model_img_name)
return model
from __future__ import print_function
from __future__ import division
import os
import numpy as np
import sklearn.metrics as metrics
from keras import backend as K
from keras.callbacks import ModelCheckpoint, ReduceLROnPlateau
from keras.datasets import cifar10
from keras.optimizers import Adam
from keras.preprocessing.image import ImageDataGenerator
from keras.utils import np_utils
from keras.preprocessing.image import ImageDataGenerator, array_to_img, img_to_array, load_img
from keras.layers import Input, Dense, Dropout, Flatten, Activation
from keras.models import Model, load_model
from keras.layers import Input, Dense, Dropout, Flatten, Activation
from keras.layers import Conv2D, Convolution2D, MaxPooling2D
from keras.layers.convolutional import ZeroPadding2D
from keras.layers.normalization import BatchNormalization
from keras.utils import np_utils
from keras import optimizers
from keras.preprocessing import sequence
from keras.preprocessing.image import ImageDataGenerator, array_to_img, img_to_array, load_img
def run_alexnet_model(img_dim, imgNet="cifar10", data_aug = True, weights_file="weights/ResNext-8-64d.h5"):
# INSERT ALEXNET archittecture code in the style of the Functional API here
model = create_functional_api_model(img_dim, 'alexnet_functional_api_' + imgNet + '.png')
model.compile(loss='categorical_crossentropy', optimizer='adam',\
metrics=['accuracy'])
if (imgNet == "cifar10"):
(trainX, trainY), (testX, testY) = cifar10.load_data()
elif (imgNet == "cifar100"):
(trainX, trainY), (testX, testY) = cifar100.load_data()
trainX = trainX.astype('float32')
testX = testX.astype('float32')
trainX /= 255.
testX /= 255.
Y_train = np_utils.to_categorical(trainY, nb_classes)
Y_test = np_utils.to_categorical(testY, nb_classes)
generator = ImageDataGenerator(rotation_range=15,
width_shift_range=5./32,
height_shift_range=5./32,
horizontal_flip=True)
generator.fit(trainX, seed=0)
out_dir = "weights/"
if not os.path.exists(out_dir):
os.makedirs(out_dir)
# Load model
if os.path.exists(weights_file):
model.load_weights(weights_file)
print("Model loaded.")
lr_reducer = ReduceLROnPlateau(monitor='val_loss', factor=np.sqrt(0.1),
cooldown=0, patience=10, min_lr=1e-6)
model_checkpoint = ModelCheckpoint(weights_file, monitor="val_acc", save_best_only=True,
save_weights_only=True, mode='auto')
callbacks = [lr_reducer, model_checkpoint]
    # Fit the model with batches of data sampled from the original data that may be augmented or not
#
model.fit_generator(generator.flow(trainX, Y_train, batch_size=batch_size),
steps_per_epoch=len(trainX) // batch_size,
epochs=nb_epoch,
callbacks=callbacks,
validation_data=(testX, Y_test),
validation_steps=testX.shape[0] // batch_size, verbose=1)
yPreds = model.predict(testX)
yPred = np.argmax(yPreds, axis=1)
yTrue = testY
accuracy = metrics.accuracy_score(yTrue, yPred) * 100
error = 100 - accuracy
print("Accuracy : ", accuracy)
print("Error : ", error)
run_alexnet_model(img_dim, 'cifar10', True, "weights/AlexNext-cifar10.h5")<jupyter_output>_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
input_1 (InputLayer) (None, 32, 32, 3) 0
_________________________________________________________________
conv2d_11 (Conv2D) (None, 16, 16, 96) 2688
_________________________________________________________________
max_pooling2d_7 (MaxPooling2 (None, 8, 8, 96) 0
_________________________________________________________________
batch_normalization_7 (Batch (None, 8, 8, 96) 384
_________________________________________________________________
zero_padding2d_1 (ZeroPaddin (None, 10, 10, 96) 0
_________________________________________________________________
conv2d_12 (Conv2D) (None, 10, 10, 256) 614656
_________________________________________________________________
max_poolin[...]<jupyter_text># AlexNet on CIFAR100
## The CIFAR-100 dataset
This dataset is just like the CIFAR-10, except it has 100 classes containing 600 images each. There are 500 training images and 100 testing images per class. The 100 classes in the CIFAR-100 are grouped into 20 superclasses. Each image comes with a "fine" label (the class to which it belongs) and a "coarse" label (the superclass to which it belongs).
Here is the list of classes in the CIFAR-100:
|Superclass |Classes|
|---|---|
|aquatic mammals| beaver, dolphin, otter, seal, whale|
|fish| aquarium fish, flatfish, ray, shark, trout|
|flowers| orchids, poppies, roses, sunflowers, tulips|
|food containers| bottles, bowls, cans, cups, plates|
|fruit and vegetables| apples, mushrooms, oranges, pears, sweet peppers|
|household electrical devices| clock, computer keyboard, lamp, telephone, television|
|household furniture| bed, chair, couch, table, wardrobe|
|insects| bee, beetle, butterfly, caterpillar, cockroach|
|large carnivores| bear, leopard, lion, tiger, wolf|
|large man-made outdoor things| bridge, castle, house, road, skyscraper|
|large natural outdoor scenes| cloud, forest, mountain, plain, sea|
|large omnivores and herbivores| camel, cattle, chimpanzee, elephant, kangaroo|
|medium-sized mammals| fox, porcupine, possum, raccoon, skunk|
|non-insect invertebrates| crab, lobster, snail, spider, worm|
|people| baby, boy, girl, man, woman|
|reptiles| crocodile, dinosaur, lizard, snake, turtle|
|small mammals| hamster, mouse, rabbit, shrew, squirrel|
|trees| maple, oak, palm, pine, willow|
|vehicles 1| bicycle, bus, motorcycle, pickup truck, train|
|vehicles 2 |lawn-mower, rocket, streetcar, tank, tractor|
## Task AlexNet on CIFAR100
Repeat the above AlexNet experiments (no image augmentation versus image augmentation) on the CIFAR100 dataset. Provide similars tables and graphs!<jupyter_code>from __future__ import print_function
from __future__ import division
import os
import numpy as np
import sklearn.metrics as metrics
from keras import backend as K
from keras.callbacks import ModelCheckpoint, ReduceLROnPlateau
from keras.datasets import cifar100
from keras.optimizers import Adam
from keras.preprocessing.image import ImageDataGenerator
from keras.utils import np_utils
batch_size = 100
nb_classes = 100
#nb_epoch = 100
img_rows, img_cols = 32, 32
img_channels = 3
img_dim = (img_channels, img_rows, img_cols) if K.image_dim_ordering() == "th" else (img_rows, img_cols, img_channels)
run_alexnet_model(img_dim, 'cifar100', True, "weights/AlexNext-cifar100.h5")
<jupyter_output>_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
input_2 (InputLayer) (None, 32, 32, 3) 0
_________________________________________________________________
conv2d_16 (Conv2D) (None, 16, 16, 96) 2688
_________________________________________________________________
max_pooling2d_10 (MaxPooling (None, 8, 8, 96) 0
_________________________________________________________________
batch_normalization_9 (Batch (None, 8, 8, 96) 384
_________________________________________________________________
zero_padding2d_5 (ZeroPaddin (None, 10, 10, 96) 0
_________________________________________________________________
conv2d_17 (Conv2D) (None, 10, 10, 256) 614656
_________________________________________________________________
max_poolin[...]<jupyter_text># ResNet and CIFAR10/0 Experiments (20 layers!)
## Introduction to skip connections
A residual neural network is an artificial neural network (ANN) of a kind that builds on constructs known from pyramidal cells in the cerebral cortex. Residual neural networks do this by utilizing skip connections or short-cuts to jump over some layers; in the simplest case a residual block skips over only a single layer. With an additional weight matrix to learn the skip weights, the architecture is referred to as a HighwayNet. With several parallel skips it is referred to as a DenseNet. In comparison, a non-residual neural network is described as a plain network in the context of residual neural networks.
A reconstruction of a pyramidal cell. Soma and dendrites are labeled in red, axon arbor in blue. (1) Soma, (2) Basal dendrite, (3) Apical dendrite, (4) Axon, (5) Collateral axon.
The brain has structures similar to residual nets, as cortical layer VI neurons get input from layer I, skipping over all intermediary layers. In the figure (below, on the right), this compares to signals from the (3) apical dendrite skipping over layers, while the (2) basal dendrite collects signals from the previous and/or same layer. Similar structures exist for other layers. How many layers in the cerebral cortex compare to layers in an artificial neural network is not clear, nor whether every area in the cerebral cortex exhibits the same structure, but over large areas they look quite similar.
One motivation for skipping over layers in ANNs is to avoid the problem of vanishing gradients by reusing activations from a previous layer until the layer next to the current one has learned its weights. During training the weights will adapt to mute the previous layer and amplify the layer next to the current one. In the simplest case only the weights for the connection to the adjacent layer are adapted, with no explicit weights for the upstream previous layer. This usually works properly when a single non-linear layer is stepped over, or in the case when the intermediate layers are all linear. If not, then an explicit weight matrix should be learned for the skipped connection.
The intuition on why this works is that the neural network collapses into fewer layers in the initial phase, which makes it easier to learn, and thus gradually expands the layers as it learns more of the feature space. During later learning, when all layers are expanded, it will stay closer to the manifold and thus learn faster. A neural network without residual parts will explore more of the feature space. This makes it more vulnerable to small perturbations that cause it to leave the manifold altogether, and require extra training data to get back on track.
**Canonical form of a residual neural network. A layer ℓ − 1 is skipped over activation from ℓ − 2.**
For more background on ResNets see [the following WikiPedia page](https://en.wikipedia.org/wiki/Residual_neural_network).

## ResNets: an introduction
Residual Neural Networks (ResNet) were first introduced by Kaiming He et al. in 2015. ResNet networks use a novel architecture with "skip connections" and heavy batch normalization. Such skip connections are also known as gated units or gated recurrent units and have a strong similarity to recent successful elements applied in RNNs. Thanks to this technique they were able to train a NN with 152 layers while still having lower complexity than VGGNet. It achieves a top-5 error rate of 3.57%, which beats human-level performance on the ImageNet dataset (1000 categories).
These concepts have allowed us to train networks that have > 200 layers on ImageNet and > 1,000 layers on CIFAR-10 – depths that were previously thought impossible to reach while still successfully training a network.
ResNet networks are composed of a residual building block that is shown below (let's focus on the module on the left-hand side to begin with). For more details on ResNet see the [original ResNet paper](https://arxiv.org/pdf/1512.03385.pdf). Formally, we consider a building block defined as:
$$y = F(x, \{W_i\}) + x \tag1$$
Here x and y are the input and output vectors of the layers considered. The function $F(x, \{W_i\})$ represents the
residual mapping to be learned. For the example in the figure below the left hand side image
has two layers, $F = W_2σ(W_1x)$ in which σ denotes ReLU and the biases are omitted for simplifying notations. The operation $F + x$ is performed by a shortcut
connection and element-wise addition. We adopt the second
nonlinearity after the addition (i.e., σ(y), see LHS of the Figure below).
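As an illustration only (not the exact code used later in this notebook), Eqn. (1) with an identity shortcut can be expressed in the Keras functional API roughly as:

    from keras.layers import Input, Dense, Activation, add
    from keras.models import Model

    x_in = Input(shape=(64,))
    # F(x, {W1, W2}) = W2 * sigma(W1 * x); biases omitted as in the notation above
    f = Dense(64, activation='relu')(x_in)
    f = Dense(64, activation=None)(f)
    # y = F(x) + x, followed by the second nonlinearity sigma(y)
    y = Activation('relu')(add([f, x_in]))
    block = Model(inputs=x_in, outputs=y)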
The shortcut connections in Eqn.(1) introduce neither extra
parameter nor computation complexity. This is not only
attractive in practice but also important in our comparisons
between plain and residual networks. We can fairly compare
plain/residual networks that simultaneously have the
same number of parameters, depth, width, and computational
cost (except for the negligible element-wise addition).
The dimensions of x and F must be equal in Eqn.(1).
If this is not the case (e.g., when changing the input/output
channels), we can perform a linear projection Ws by the
shortcut connections to match the dimensions:
$$ y = F(x, \{W_i\}) + W_s x. \tag2 $$
We can also use a square matrix $W_s$ in Eqn.(1). But we will
show by experiments that the identity mapping is sufficient
for addressing the degradation problem and is economical,
and thus Ws is only used when matching dimensions.
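A hedged sketch of Eqn. (2), where the projection $W_s$ is realised as a strided 1x1 convolution on the shortcut branch (filter counts below are illustrative):

    from keras.layers import Input, Conv2D, Activation, add
    from keras.models import Model

    x_in = Input(shape=(32, 32, 64))
    # Main branch halves the spatial size and doubles the number of channels
    f = Conv2D(128, (3, 3), strides=(2, 2), padding='same', activation='relu')(x_in)
    f = Conv2D(128, (3, 3), padding='same')(f)
    # Projection shortcut Ws: a 1x1 conv with the same stride so the shapes match
    shortcut = Conv2D(128, (1, 1), strides=(2, 2), padding='same')(x_in)
    y = Activation('relu')(add([f, shortcut]))
    block = Model(inputs=x_in, outputs=y)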
The form of the residual function F is flexible. Experiments in
the original ResNet paper involve a function F that has two or
three layers (RHS architecture below), while more layers are possible. But if
F has only a single layer, Eqn.(1) is similar to a linear layer:
$y = W_1x + x$, for which we have not observed advantages.
We also note that although the above notations are about
fully-connected layers for simplicity, they are applicable to
convolutional layers. The function F(x, {Wi}) can represent
multiple convolutional layers. The element-wise addition
is performed on two feature maps, channel by channel.
### ResNet Basic Flavors
The residual module consists of many flavors. Here we explore two:
* Original ResNet module
* Bottleneck Resnet module
The first is simply a shortcut which connects the input to an addition of the second branch, a series
of convolutions and activations (Figure below, left).
However, in the same paper, it was found that bottleneck residual modules perform better,
especially when training deeper networks. The bottleneck is a simple extension to the residual
module. We still have our shortcut module, only now the second branch to our micro-architecture
has changed. Instead of applying only two convolutions, we are now applying three convolutions
(Figure below, middle).
The first CONV consists of 1x1 filters, the second of 3x3 filters, and the third of 1x1 filters.
Furthermore, the number of filters learned by the first two CONV is 1/4 the number learned by the
final CONV – this point is where the “bottleneck” comes in.
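A hedged sketch of such a bottleneck residual module (the filter counts below are illustrative; the 1x1 and 3x3 CONVs learn 1/4 of the filters of the final CONV):

    from keras.layers import Input, Conv2D, BatchNormalization, Activation, add
    from keras.models import Model

    x_in = Input(shape=(56, 56, 256))
    # 1x1 reduce -> 3x3 -> 1x1 expand: the "bottleneck"
    f = Conv2D(64, (1, 1))(x_in)
    f = BatchNormalization()(f)
    f = Activation('relu')(f)
    f = Conv2D(64, (3, 3), padding='same')(f)
    f = BatchNormalization()(f)
    f = Activation('relu')(f)
    f = Conv2D(256, (1, 1))(f)
    f = BatchNormalization()(f)
    # Identity shortcut: the input already has 256 channels, so shapes match
    y = Activation('relu')(add([f, x_in]))
    block = Model(inputs=x_in, outputs=y)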
Finally, in an updated 2016 publication [Identity Mappings in Deep Residual Networks](https://arxiv.org/pdf/1603.05027.pdf), He
et al. experimented with the ordering of convolution, activation, and batch normalization layers
within the residual module. They found that by applying pre-activation (i.e., placing the RELU and
BN before the CONV) higher accuracy could be obtained (Figure below, right).
### ResNet V0, V1 and V2 (pre-activation Resnet Module)
So far we have introduced residual module consists of many flavors:
* Original ResNet module (referred to as V0 later)
* Bottleneck Resnet module (referred to as V2Post later)
Here we introduce another flavor:
* pre-activation Bottleneck Resnet module (referred to as V2 later)
In an updated 2016 publication [Identity Mappings in Deep Residual Networks](https://arxiv.org/pdf/1603.05027.pdf), He
et al. experimented with the ordering of convolution, activation, and batch normalization layers
within the residual module. They found that by applying pre-activation (i.e., placing the RELU and
BN before the CONV) higher accuracy could be obtained (Figure below, right).
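A hedged sketch of the pre-activation ordering (BN and ReLU placed before each CONV; filter counts are illustrative):

    from keras.layers import Input, Conv2D, BatchNormalization, Activation, add
    from keras.models import Model

    x_in = Input(shape=(32, 32, 64))
    # BN-ReLU come *before* each convolution (pre-activation)
    f = BatchNormalization()(x_in)
    f = Activation('relu')(f)
    f = Conv2D(64, (3, 3), padding='same')(f)
    f = BatchNormalization()(f)
    f = Activation('relu')(f)
    f = Conv2D(64, (3, 3), padding='same')(f)
    # In the pre-activation variant the addition is not followed by an activation
    y = add([f, x_in])
    block = Model(inputs=x_in, outputs=y)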
**Left: The original residual module building block (referred to as V0 above). Center: The residual module
with a bottleneck. The bottleneck adds in an extra CONV layer. This module also helps reduce the
dimensionality of spatial volumes(referred to as V1 above). Right: The updated pre-activation module that changes the
order of RELU, CONV, and BN (V2).**

Going forward we will refer to these different flavors of residual units as follows:
* **V0: original 2-layer residual block (2 layer unit)**
* $2D_{3x3}-ReLU-2D_{3x3}-ADD-ReLU$
* `layers.Conv2D(64, (3,3), padding='same', strides=(1, 1), kernel_initializer='glorot_uniform', name=conv_name_base + '2a')(input_tensor)`
* Note ADD is an element-wise addition is performed on two feature maps, channel by channel.
* **V1: uses a basic building block (2-layer unit with BN-RELU)** has two flavors (for details see the following paper: [Identity Mappings in Deep Residual Networks](https://arxiv.org/pdf/1603.05027.pdf)):
* Stacks of 2 x (3 x 3) Conv2D-BN-ReLU
Last ReLU is after the shortcut connection.
At the beginning of each stage, the feature map size is halved (downsampled)
by a convolutional layer with strides=2, while the number of filters is
doubled. Within each stage, the layers have the same number of filters and the
same feature map sizes.
Features maps sizes:
stage 0: 32x32, 16
stage 1: 16x16, 32
stage 2: 8x8, 64
* identity as the shortcut: $2D_{3x3}-BN-ReLU--2D_{3x3}-BN---ADD-ReLU$
* convolution as the short cut (required to upsize the number of features for the skip connection or shortcut):
* Note ADD is an **element-wise addition** is performed on two feature maps, channel by channel.
* The following depicts a Resnet V1 Unit. Here view the first node as the input node. Notice the short cut is a convolution here since this is the first ResNet unit in a stage.

* ** V2Post**: uses “bottleneck” building block (3-layer unit with BN-ReLU post the bottleneck layers) has two flavors:
* identity as the shortcut: $2D_{1x1}-BN-ReLU--2D_{3x3}-BN-ReLU--2D_{1x1}-BN--ADD-ReLU$
* convolution as the short cut (required to upsize the number of features for the skip connection or shortcut):
* Note ADD is an **element-wise addition** is performed on two feature maps, channel by channel.
* ** V2 ** (discussed in a later section): uses “bottleneck” building block (3-layer unit) has two flavors:
* Stacks of (1 x 1)-(3 x 3)-(1 x 1) BN-ReLU-Conv2D or also known as bottleneck layer
First shortcut connection per layer is 1 x 1 Conv2D.
Second and onwards shortcut connection is identity.
At the beginning of each stage, the feature map size is halved (downsampled)
by a convolutional layer with strides=2, while the number of filter maps is
doubled. Within each stage, the layers have the same number of filters and the
same filter map sizes.
Features maps sizes:
conv1 : 32x32, 16
stage 0: 32x32, 64
stage 1: 16x16, 128
stage 2: 8x8, 256
* identity as the shortcut: $-BN-ReLU-2D_{1x1}-BN-ReLU--2D_{3x3}-BN-ReLU--2D_{1x1}-BN--ADD-ReLU$
* convolution as the short cut (required to upsize the number of features for the skip connection or shortcut):
* Note ADD is an **element-wise addition** is performed on two feature maps, channel by channel.
* The following depicts a Resnet V2 Unit. Here view the first node as the input node. Notice the short cut is a convolution here since this is the first ResNet unit in a stage.
### ResNet50 with the Bottleneck Building Block
Let's have a closer look at resnet_50. resnet_50 uses a “bottleneck” building block (V2) as depicted on the right in the figure below. In the [original ResNet paper](https://arxiv.org/pdf/1512.03385.pdf), the following implementations and experiments on the ImageNet2015 dataset, also used the “bottleneck” building block depicted below on the RHS:
* ResNet-50/101/152
A ResNet50 network is composed of a stages list of (3, 4, 6, 3) and a corresponding filters list
of (64, 256, 512, 1024, 2048). The first CONV layer in ResNet (before any residual module)
will learn K = 64 filters. Then, the first set of three residual modules will learn K = 256 filters. The
number of filters learned will increase to K = 512 for the four residual modules stacked together. In
the third stage, six residual modules will be stacked, each responsible for learning K = 1024 filters.
And finally, the fourth stage will stack three residual modules that will learn K = 2048 filters.
Again, we call this architecture “ResNet50” since there are 1+(3x3)+(4x3)+(6x3)+
(3x3)+1 = 50 trainable layers in the network.
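This layer count is easy to sanity-check with a couple of lines (a sketch; it counts only the bottleneck CONVs plus the stem CONV and the final FC layer):

    stages = [3, 4, 6, 3]        # residual modules per stage in ResNet50
    convs_per_module = 3         # the 1x1, 3x3, 1x1 bottleneck CONVs
    depth = 1 + sum(s * convs_per_module for s in stages) + 1  # stem CONV + final FC
    print(depth)                 # -> 50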

### Notes on ResNet Architecture
1. The first layers are a Conv2D (stride 2) and max pooling (2x2) layers that quarter the size of the input tensor:
x = layers.Conv2D(64, (7, 7),
strides=(2, 2),
padding='valid',
kernel_initializer='he_normal',
name='conv1')(x)
x = layers.BatchNormalization(axis=bn_axis, name='bn_conv1')(x)
x = layers.Activation('relu')(x)
x = layers.ZeroPadding2D(padding=(1, 1), name='pool1_pad')(x)
x = layers.MaxPooling2D((3, 3), strides=(2, 2))(x)
2. The input size and the output size for the first Resnet block are equal, whereas the output size for the remaining Resnet blocks is halved by the stride=(2,2).

## The ResNet20 V1 architecture
In this section we graphically depict ResNet20 architecture that uses the **V1** residual unit. Recall that **V1** residual unit uses a basic building block (2-layer unit with BN-RELU) that has two flavors (for details see the following paper: [Identity Mappings in Deep Residual Networks](https://arxiv.org/pdf/1603.05027.pdf)):
* ResNet20 has 3 stages, where each stage has 3 ResNet V1 modules
* This leads to 6 layers with learnable weights per stage (3 residual modules, each with two 3x3 conv layers)
* A ResNet V1 Modules is made up of a stack 2 x (3 x 3) Conv2D-BN-ReLU
Last ReLU is after the shortcut connection.
At the beginning of each stage, the feature map size is halved (downsampled)
by a convolutional layer with strides=2, while the number of filters is
doubled. Within each stage, the layers have the same number of filters and the
same feature map sizes.
Features maps sizes:
stage 0: 32x32, 16
stage 1: 16x16, 32
stage 2: 8x8, 64
* identity as the shortcut: $2D_{3x3}-BN-ReLU--2D_{3x3}-BN---ADD-ReLU$
* convolution as the short cut (required to upsize the number of features for the skip connection or shortcut):
* Note ADD is an **element-wise addition** is performed on two feature maps, channel by channel.
Notice the stages in the architecture are punctuated by the **convolutional skip connection layer **.
### ResNet50 implementation in Keras (** V2Post**)
Keras has an implementation of ResNet50 located [here](https://github.com/keras-team/keras-applications/blob/master/keras_applications/resnet50.py). This implementation is based on the **V2Post** version of the ResNet module. Please have a close look at this implementation. I include code snippets below for exposition purposes.
Notice the following for ResNet50 network:
* it is composed of a stages list of (1, 3, 4, 6, 3) and a corresponding filters list of (64, 256, 512, 1024, 2048).
* The first CONV layer in ResNet (before any residual module) will learn K = 64 filters.
* Then, for the second stage, (our first stage that is made up of ResNet Block units): the first set of three residual modules will learn K = 256 filters.
* Then, for stage 3 consists of 4 ResNet units with 256 kernels
* Etc.
* Resnet50 has 4 resnet stages (Conv2-5)
* Each Stage has $m$ ResNet Units that output feature map volumes of the same size and depth
* Each stage begins with a ResNet Bottleneck unit where the shortcut is a Convolution that increases the number of output feature maps (from in this case 64 to 256).
E.g., the first ResNet stage (Conv2_X) consists of 3 ResNet bottleneck building blocks; the code for this stage looks like the following:
x = conv_block(x, 3, [64, 64, 256], stage=2, block='a', strides=(1, 1))
x = identity_block(x, 3, [64, 64, 256], stage=2, block='b')
x = identity_block(x, 3, [64, 64, 256], stage=2, block='c')
The above ResNet50 implementation (similar to the He et al. 2015 implementation) uses a "bottleneck" building block, which is implemented as follows:
The following is an implementation of ResNet Bottleneck unit where the shortcut is a Convolution that increases the number of output feature maps (e.g., case 64 feature maps to 256 feature maps).
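The keras-applications source is not reproduced here; the following is a condensed, hedged sketch in that style (not the library's exact code): an `identity_block` whose shortcut is the identity, and a `conv_block` whose shortcut is a strided 1x1 convolution that changes the dimensions:

    from keras import layers

    def identity_block(input_tensor, kernel_size, filters):
        """Bottleneck block whose shortcut is the identity (shapes already match)."""
        f1, f2, f3 = filters
        x = layers.Conv2D(f1, (1, 1))(input_tensor)
        x = layers.BatchNormalization()(x)
        x = layers.Activation('relu')(x)
        x = layers.Conv2D(f2, kernel_size, padding='same')(x)
        x = layers.BatchNormalization()(x)
        x = layers.Activation('relu')(x)
        x = layers.Conv2D(f3, (1, 1))(x)
        x = layers.BatchNormalization()(x)
        x = layers.add([x, input_tensor])
        return layers.Activation('relu')(x)

    def conv_block(input_tensor, kernel_size, filters, strides=(2, 2)):
        """Bottleneck block whose shortcut is a strided 1x1 conv that changes dims."""
        f1, f2, f3 = filters
        x = layers.Conv2D(f1, (1, 1), strides=strides)(input_tensor)
        x = layers.BatchNormalization()(x)
        x = layers.Activation('relu')(x)
        x = layers.Conv2D(f2, kernel_size, padding='same')(x)
        x = layers.BatchNormalization()(x)
        x = layers.Activation('relu')(x)
        x = layers.Conv2D(f3, (1, 1))(x)
        x = layers.BatchNormalization()(x)
        shortcut = layers.Conv2D(f3, (1, 1), strides=strides)(input_tensor)
        shortcut = layers.BatchNormalization()(shortcut)
        x = layers.add([x, shortcut])
        return layers.Activation('relu')(x)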
## Tackling CIFAR10 with ResNet V1 (basic 2-layer building block)
The following is an implementation of ResNet for the CIFAR10 dataset that uses the **V1 Residual Block**.
* **V1: uses a basic building block (2-layer unit with BN-RELU)** has two flavors (for details see the following paper: [Identity Mappings in Deep Residual Networks](https://arxiv.org/pdf/1603.05027.pdf)):
* ResNet20 has 3 stages, where each stage has 3 ResNet V1 modules
* This leads to 6 layers with learnable weights per stage (3 residual modules, each with two 3x3 conv layers)
* A ResNet V1 Modules is made up of a stack 2 x (3 x 3) Conv2D-BN-ReLU
Last ReLU is after the shortcut connection.
At the beginning of each stage, the feature map size is halved (downsampled)
by a convolutional layer with strides=2, while the number of filters is
doubled. Within each stage, the layers have the same number of filters and the
same feature map sizes.
Features maps sizes:
stage 0: 32x32, 16
stage 1: 16x16, 32
stage 2: 8x8, 64
* identity as the shortcut: $2D_{3x3}-BN-ReLU--2D_{3x3}-BN---ADD-ReLU$
* convolution as the short cut (required to upsize the number of features for the skip connection or shortcut):
* Note ADD is an **element-wise addition** is performed on two feature maps, channel by channel.

ResNet operates with multiple stages, where each stage has the same output tensor dimensions. Each stage consists of 3 ResNet V1 blocks, each with two 3x3 conv layers. To obtain a ResNet20 architecture with V1 units, the number of stages will be 3, yielding 20 layers (3 * 6 + 2).
<jupyter_code>%%time
"""Trains a ResNet on the CIFAR10 dataset.
ResNet v1
[a] Deep Residual Learning for Image Recognition
https://arxiv.org/pdf/1512.03385.pdf
"""
from __future__ import print_function
import keras
from keras.layers import Dense, Conv2D, BatchNormalization, Activation
from keras.layers import AveragePooling2D, Input, Flatten
from keras.optimizers import Adam
from keras.callbacks import ModelCheckpoint, LearningRateScheduler
from keras.callbacks import ReduceLROnPlateau
from keras.preprocessing.image import ImageDataGenerator
from keras.regularizers import l2
from keras import backend as K
from keras.models import Model
from keras.datasets import cifar10
import numpy as np
import os
def lr_schedule(epoch):
"""Learning Rate Schedule
Learning rate is scheduled to be reduced after 80, 120, 160, 180 epochs.
Called automatically every epoch as part of callbacks during training.
# Arguments
epoch (int): The number of epochs
# Returns
lr (float32): learning rate
"""
lr = 1e-3
if epoch > 180:
lr *= 0.5e-3
elif epoch > 160:
lr *= 1e-3
elif epoch > 120:
lr *= 1e-2
elif epoch > 80:
lr *= 1e-1
print('Learning rate: ', lr)
return lr
def resnet_layer(inputs,
num_filters=16,
kernel_size=3,
strides=1,
activation='relu',
batch_normalization=True,
conv_first=True):
"""2D Convolution-Batch Normalization-Activation stack builder
# Arguments
inputs (tensor): input tensor from input image or previous layer
num_filters (int): Conv2D number of filters
kernel_size (int): Conv2D square kernel dimensions
strides (int): Conv2D square stride dimensions
activation (string): activation name
batch_normalization (bool): whether to include batch normalization
conv_first (bool): conv-bn-activation (True) or
bn-activation-conv (False)
# Returns
x (tensor): tensor as input to the next layer
"""
conv = Conv2D(num_filters,
kernel_size=kernel_size,
strides=strides,
padding='same',
kernel_initializer='he_normal',
kernel_regularizer=l2(1e-4))
x = inputs
if conv_first:
x = conv(x)
if batch_normalization:
x = BatchNormalization()(x)
if activation is not None:
x = Activation(activation)(x)
else:
if batch_normalization:
x = BatchNormalization()(x)
if activation is not None:
x = Activation(activation)(x)
x = conv(x)
return x
def resnet_v1(input_shape, depth, num_classes=10):
"""ResNet Version 1 Model builder [a]
Stacks of 2 x (3 x 3) Conv2D-BN-ReLU
Last ReLU is after the shortcut connection.
At the beginning of each stage, the feature map size is halved (downsampled)
by a convolutional layer with strides=2, while the number of filters is
    doubled. Within each stage, the layers have the same number of filters and the
    same filter map sizes.
    Feature map sizes:
stage 0: 32x32, 16
stage 1: 16x16, 32
stage 2: 8x8, 64
The Number of parameters is approx the same as Table 6 of [a]:
ResNet20 0.27M
ResNet32 0.46M
ResNet44 0.66M
ResNet56 0.85M
ResNet110 1.7M
# Arguments
input_shape (tensor): shape of input image tensor
depth (int): number of core convolutional layers
num_classes (int): number of classes (CIFAR10 has 10)
# Returns
model (Model): Keras model instance
"""
if (depth - 2) % 6 != 0:
raise ValueError('depth should be 6n+2 (eg 20, 32, 44 in [a])')
# Start model definition.
num_filters = 16
num_res_blocks = int((depth - 2) / 6)
inputs = Input(shape=input_shape)
x = resnet_layer(inputs=inputs)
# Instantiate the stack of residual units
for stack in range(3):
for res_block in range(num_res_blocks):
strides = 1
if stack > 0 and res_block == 0: # first layer but not first stack
strides = 2 # downsample
y = resnet_layer(inputs=x,
num_filters=num_filters,
strides=strides)
y = resnet_layer(inputs=y,
num_filters=num_filters,
activation=None)
if stack > 0 and res_block == 0: # first layer but not first stack
# linear projection residual shortcut connection to match
# changed dims
x = resnet_layer(inputs=x,
num_filters=num_filters,
kernel_size=1,
strides=strides,
activation=None,
batch_normalization=False)
x = keras.layers.add([x, y])
x = Activation('relu')(x)
num_filters *= 2
# Add classifier on top.
# v1 does not use BN after last shortcut connection-ReLU
x = AveragePooling2D(pool_size=8)(x)
y = Flatten()(x)
outputs = Dense(num_classes,
activation='softmax',
kernel_initializer='he_normal')(y)
# Instantiate model.
model = Model(inputs=inputs, outputs=outputs)
return model
def run_resnet_model(model, imgNet, data_augmentation, input_shape, x_train, y_train, x_test, y_test):
# get the data
# Training parameters
batch_size = 32 # orig paper trained all networks with batch_size=128
data_augmentation = data_augmentation
if (imgNet == "cifar10"):
num_classes = 10
elif (imgNet == "cifar100"):
num_classes = 100
# Subtracting pixel mean improves accuracy
subtract_pixel_mean = True
n = 3
# Model version
# Orig paper: version = 1 (ResNet v1)
version = 1
# Model name, depth and version
model_type = 'ResNet%dv%dAug_%s' % (depth, version,data_augmentation)
# Normalize data.
x_train = x_train.astype('float32') / 255
x_test = x_test.astype('float32') / 255
# If subtract pixel mean is enabled
if subtract_pixel_mean:
x_train_mean = np.mean(x_train, axis=0)
x_train -= x_train_mean
x_test -= x_train_mean
print('x_train shape:', x_train.shape)
print(x_train.shape[0], 'train samples')
print(x_test.shape[0], 'test samples')
print('y_train shape:', y_train.shape)
# Convert class vectors to binary class matrices.
y_train = keras.utils.to_categorical(y_train, num_classes)
y_test = keras.utils.to_categorical(y_test, num_classes)
model.compile(loss='categorical_crossentropy',
optimizer=Adam(lr=lr_schedule(0)),
metrics=['accuracy'])
print(model_type)
#model.summary()
#plot_model(model, to_file=model_type+'.png', show_shapes=True, show_layer_names=True)
display_nn_model(model, model_type+'.png')
# Prepare model model saving directory.
save_dir = os.path.join(os.getcwd(), 'saved_models')
model_name = imgNet + '_%s_model.{epoch:03d}.h5' % model_type
if not os.path.isdir(save_dir):
os.makedirs(save_dir)
filepath = os.path.join(save_dir, model_name)
# Prepare callbacks for model saving and for learning rate adjustment.
checkpoint = ModelCheckpoint(filepath=filepath,
monitor='val_acc',
verbose=1,
save_best_only=True)
lr_scheduler = LearningRateScheduler(lr_schedule)
lr_reducer = ReduceLROnPlateau(factor=np.sqrt(0.1),
cooldown=0,
patience=5,
min_lr=0.5e-6)
callbacks = [checkpoint, lr_reducer, lr_scheduler]
# Run training, with or without data augmentation.
hist=[]
if not data_augmentation:
print('Not using data augmentation.')
hist = model.fit(x_train, y_train,
batch_size=batch_size,
epochs=nb_epoch,
validation_data=(x_test, y_test),
shuffle=True,
callbacks=callbacks)
else:
print('Using real-time data augmentation.')
# This will do preprocessing and realtime data augmentation:
datagen = ImageDataGenerator(
# set input mean to 0 over the dataset
featurewise_center=False,
# set each sample mean to 0
samplewise_center=False,
# divide inputs by std of dataset
featurewise_std_normalization=False,
# divide each input by its std
samplewise_std_normalization=False,
# apply ZCA whitening
zca_whitening=False,
# epsilon for ZCA whitening
zca_epsilon=1e-06,
# randomly rotate images in the range (deg 0 to 180)
rotation_range=0,
# randomly shift images horizontally
width_shift_range=0.1,
# randomly shift images vertically
height_shift_range=0.1,
# set range for random shear
shear_range=0.,
# set range for random zoom
zoom_range=0.,
# set range for random channel shifts
channel_shift_range=0.,
# set mode for filling points outside the input boundaries
fill_mode='nearest',
# value used for fill_mode = "constant"
cval=0.,
# randomly flip images
horizontal_flip=True,
# randomly flip images
vertical_flip=False,
# set rescaling factor (applied before any other transformation)
rescale=None,
# set function that will be applied on each input
preprocessing_function=None,
# image data format, either "channels_first" or "channels_last"
data_format=None,
# fraction of images reserved for validation (strictly between 0 and 1)
validation_split=0.0)
# Compute quantities required for featurewise normalization
# (std, mean, and principal components if ZCA whitening is applied).
datagen.fit(x_train)
# Fit the model on the batches generated by datagen.flow().
hist = model.fit_generator(datagen.flow(x_train, y_train, batch_size=batch_size),
validation_data=(x_test, y_test),
epochs=nb_epoch, verbose=1, workers=4,
callbacks=callbacks)
# Score trained model.
scores = model.evaluate(x_test, y_test, verbose=1)
print('Test loss:', scores[0])
print('Test accuracy:', scores[1])
plot_learning_progress(hist, 'ResNetV1')
(x_train, y_train), (x_test, y_test) = cifar10.load_data()
input_shape = x_train.shape[1:]
n = 3
depth = n * 6 + 2
model = resnet_v1(input_shape=input_shape, depth=depth)
run_resnet_model(model, 'cifar10', True, input_shape, x_train, y_train, x_test, y_test)<jupyter_output>x_train shape: (50000, 32, 32, 3)
50000 train samples
10000 test samples
y_train shape: (50000, 1)
Learning rate: 0.001
ResNet20v1Aug_True
__________________________________________________________________________________________________
Layer (type) Output Shape Param # Connected to
==================================================================================================
input_4 (InputLayer) (None, 32, 32, 3) 0
__________________________________________________________________________________________________
conv2d_64 (Conv2D) (None, 32, 32, 16) 448 input_4[0][0]
__________________________________________________________________________________________________
batch_normalization_58 (BatchNo (None, 32, 32, 16) 64 conv2d_64[0][0]
____________________________________________________________________[...]<jupyter_text>### TASK: Tackling CIFAR10 with ResNet (OPTIONAL TASK)
Using the above implementation, train a ResNet20 V1 network on the CIFAR10 dataset (with and without image augmentation). Please present a summary of your models and visualize the model architectures. Plot the cross-entropy loss and top-1 accuracy curves obtained during training. Please discuss your experimental results using the following format, and contrast and DISCUSS the results with respect to augmentation:
| Model | Detail| Input size| Top-1 Test Acc| Param(M)| Mult-Adds (B)| Depth|train time|Num of Epochs|batchsize|GPU desc|
| ------------- |:-------------:| -----|---------|---------|---------|----|---|---|---|---|
| AlexNet| XXXX |224x224| XXX| 60M| XXB| XX |XXX|XXX|XXX|XXX|
| ResNeXt| XXXX |224x224| XXX| 60M| XXB| XX |XXX|XXX|XXX|XXX|<jupyter_code>run_resnet_model(model, 'cifar10', False, input_shape, x_train, y_train, x_test, y_test)<jupyter_output>x_train shape: (50000, 32, 32, 3)
50000 train samples
10000 test samples
y_train shape: (50000, 1)
Learning rate: 0.001
ResNet20v1Aug_True
__________________________________________________________________________________________________
Layer (type) Output Shape Param # Connected to
==================================================================================================
input_4 (InputLayer) (None, 32, 32, 3) 0
__________________________________________________________________________________________________
conv2d_64 (Conv2D) (None, 32, 32, 16) 448 input_4[0][0]
__________________________________________________________________________________________________
batch_normalization_58 (BatchNo (None, 32, 32, 16) 64 conv2d_64[0][0]
____________________________________________________________________[...]<jupyter_text>## Visualize the architecture (RERUN to see the latest version)
### TASK: Tackling CIFAR100 with ResNet (OPTIONAL TASK)
Using the above implementation, train a ResNet20 V1 network on the **CIFAR100** (100 classes) dataset (with and without image augmentation). Please present a summary of your models and visualize the model architectures. Plot the cross-entropy loss and top-1 accuracy curves obtained during training. Please discuss your experimental results using the following format, and contrast and DISCUSS the results with respect to augmentation:
| Model | Detail| Input size| Top-1 Test Acc| Param(M)| Mult-Adds (B)| Depth|train time|Num of Epochs|batchsize|GPU desc|
| ------------- |:-------------:| -----|---------|---------|---------|----|---|---|---|---|
| AlexNet| XXXX |224x224| XXX| 60M| XXB| XX |XXX|XXX|XXX|XXX|
| ResNeXt| XXXX |224x224| XXX| 60M| XXB| XX |XXX|XXX|XXX|XXX|
<jupyter_code>from keras.datasets import cifar100
(x_train, y_train), (x_test, y_test) = cifar100.load_data()
input_shape = x_train.shape[1:]
model = resnet_v1(input_shape=input_shape, depth=depth, num_classes = 100)
run_resnet_model(model, 'cifar100', True, input_shape, x_train, y_train, x_test, y_test)
run_resnet_model(model, 'cifar100', False, input_shape, x_train, y_train, x_test, y_test)<jupyter_output>x_train shape: (50000, 32, 32, 3)
50000 train samples
10000 test samples
y_train shape: (50000, 1)
Learning rate: 0.001
ResNet20v1Aug_False
__________________________________________________________________________________________________
Layer (type) Output Shape Param # Connected to
==================================================================================================
input_7 (InputLayer) (None, 32, 32, 3) 0
__________________________________________________________________________________________________
conv2d_127 (Conv2D) (None, 32, 32, 16) 448 input_7[0][0]
__________________________________________________________________________________________________
batch_normalization_115 (BatchN (None, 32, 32, 16) 64 conv2d_127[0][0]
___________________________________________________________________[...]<jupyter_text># ResNet V2 in Keras: 29 layers !(pre-activation Resnet Module)
The cornerstone of ResNet is the residual module, first introduced by He et al. in their 2015 paper,
Deep Residual Learning for Image Recognition. The residual module consists of two branches.
The first is simply a shortcut which connects the input to an addition of the second branch, a series
of convolutions and activations (Figure below, left).
However, in the same paper, it was found that bottleneck residual modules perform better,
especially when training deeper networks. The bottleneck is a simple extension to the residual
module. We still have our shortcut module, only now the second branch to our micro-architecture
has changed. Instead of applying only two convolutions, we are now applying three convolutions
(Figure below, middle).
The first CONV consists of 1x1 filters, the second of 3x3 filters, and the third of 1x1 filters.
Furthermore, the number of filters learned by the first two CONV layers is 1/4 the number learned by the
final CONV – this is where the “bottleneck” comes in.
Finally, in an updated 2016 publication [Identity Mappings in Deep Residual Networks](https://arxiv.org/pdf/1603.05027.pdf), He
et al. experimented with the ordering of convolution, activation, and batch normalization layers
within the residual module. They found that by applying pre-activation (i.e., placing the RELU and
BN before the CONV) higher accuracy could be obtained (Figure below, right).
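As a minimal illustration of this reordering (not the exact code from the paper), the two orderings can be written in Keras as follows; the `resnet_layer` helper used later in this notebook expresses the same choice through its `conv_first` flag.<jupyter_code>from keras.layers import Conv2D, BatchNormalization, Activation

def post_activation_conv(x, filters):
    # Original (post-activation) ordering: CONV -> BN -> ReLU
    x = Conv2D(filters, (3, 3), padding='same')(x)
    x = BatchNormalization()(x)
    return Activation('relu')(x)

def pre_activation_conv(x, filters):
    # Updated (pre-activation) ordering: BN -> ReLU -> CONV
    x = BatchNormalization()(x)
    x = Activation('relu')(x)
    return Conv2D(filters, (3, 3), padding='same')(x)<jupyter_output><empty_output><jupyter_text>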
**Left: The original residual module building block (referred to as V0 above). Center: The residual module
with a bottleneck. The bottleneck adds in an extra CONV layer. This module also helps reduce the
dimensionality of spatial volumes (referred to as V1 above). Right: The updated pre-activation module that changes the
order of RELU, CONV, and BN (V2).**
ResNet29 has 3 stages, where each stage has 3 ResNet V2 modules:
* This leads to 9 weight layers per stage (3 V2 modules, each with 3 convolution layers)
* A ResNet V2 module is made up of a stack of 3 BN-ReLU-Conv2D layers
* **V2** uses a “bottleneck” building block (3-layer unit) and has two flavors:
* Stacks of (1 x 1)-(3 x 3)-(1 x 1) BN-ReLU-Conv2D, also known as a bottleneck layer
First shortcut connection per layer is 1 x 1 Conv2D.
Second and onwards shortcut connection is identity.
At the beginning of each stage, the feature map size is halved (downsampled)
by a convolutional layer with strides=2, while the number of filter maps is
doubled. Within each stage, the layers have the same number of filters and the
same filter map sizes.
Feature map sizes:
conv1 : 32x32, 16
stage 0: 32x32, 64
stage 1: 16x16, 128
stage 2: 8x8, 256
* identity as the shortcut: $BN\text{-}ReLU\text{-}Conv2D_{1\times1} \rightarrow BN\text{-}ReLU\text{-}Conv2D_{3\times3} \rightarrow BN\text{-}ReLU\text{-}Conv2D_{1\times1} \rightarrow ADD$
* convolution as the shortcut (required to upsize the number of feature maps for the skip connection when the dimensions change)
* Note that ADD is an **element-wise addition** performed on the two feature maps, channel by channel.
* The following depicts a ResNet V2 unit. View the first node as the input node; notice the shortcut is a convolution here, since this is the first ResNet unit in a stage.
<jupyter_code>"""Trains a ResNet on the CIFAR10 dataset.
ResNet v1
[a] Deep Residual Learning for Image Recognition
https://arxiv.org/pdf/1512.03385.pdf
ResNet v2
[b] Identity Mappings in Deep Residual Networks
https://arxiv.org/pdf/1603.05027.pdf
"""
from __future__ import print_function
import keras
from keras.layers import Dense, Conv2D, BatchNormalization, Activation
from keras.layers import AveragePooling2D, Input, Flatten
from keras.optimizers import Adam
from keras.callbacks import ModelCheckpoint, LearningRateScheduler
from keras.callbacks import ReduceLROnPlateau
from keras.preprocessing.image import ImageDataGenerator
from keras.regularizers import l2
from keras import backend as K
from keras.models import Model
from keras.datasets import cifar10
import numpy as np
import os
# Training parameters
batch_size = 32 # orig paper trained all networks with batch_size=128
data_augmentation = True
num_classes = 10
# Subtracting pixel mean improves accuracy
subtract_pixel_mean = True
# Model parameter
# ----------------------------------------------------------------------------
# | | 200-epoch | Orig Paper| 200-epoch | Orig Paper| sec/epoch
# Model | n | ResNet v1 | ResNet v1 | ResNet v2 | ResNet v2 | GTX1080Ti
# |v1(v2)| %Accuracy | %Accuracy | %Accuracy | %Accuracy | v1 (v2)
# ----------------------------------------------------------------------------
# ResNet20 | 3 (2)| 92.16 | 91.25 | ----- | ----- | 35 (---)
# ResNet32 | 5(NA)| 92.46 | 92.49 | NA | NA | 50 ( NA)
# ResNet44 | 7(NA)| 92.50 | 92.83 | NA | NA | 70 ( NA)
# ResNet56 | 9 (6)| 92.71 | 93.03 | 93.01 | NA | 90 (100)
# ResNet110 |18(12)| 92.65 | 93.39+-.16| 93.15 | 93.63 | 165(180)
# ResNet164 |27(18)| ----- | 94.07 | ----- | 94.54 | ---(---)
# ResNet1001| (111)| ----- | 92.39 | ----- | 95.08+-.14| ---(---)
# ---------------------------------------------------------------------------
def lr_schedule(epoch):
"""Learning Rate Schedule
Learning rate is scheduled to be reduced after 80, 120, 160, 180 epochs.
Called automatically every epoch as part of callbacks during training.
# Arguments
epoch (int): The number of epochs
# Returns
lr (float32): learning rate
"""
lr = 1e-3
if epoch > 180:
lr *= 0.5e-3
elif epoch > 160:
lr *= 1e-3
elif epoch > 120:
lr *= 1e-2
elif epoch > 80:
lr *= 1e-1
print('Learning rate: ', lr)
return lr
def resnet_layer(inputs,
num_filters=16,
kernel_size=3,
strides=1,
activation='relu',
batch_normalization=True,
conv_first=True):
"""2D Convolution-Batch Normalization-Activation stack builder
# Arguments
inputs (tensor): input tensor from input image or previous layer
num_filters (int): Conv2D number of filters
kernel_size (int): Conv2D square kernel dimensions
strides (int): Conv2D square stride dimensions
activation (string): activation name
batch_normalization (bool): whether to include batch normalization
conv_first (bool): conv-bn-activation (True) or
bn-activation-conv (False)
# Returns
x (tensor): tensor as input to the next layer
"""
conv = Conv2D(num_filters,
kernel_size=kernel_size,
strides=strides,
padding='same',
kernel_initializer='he_normal',
kernel_regularizer=l2(1e-4))
x = inputs
if conv_first:
x = conv(x)
if batch_normalization:
x = BatchNormalization()(x)
if activation is not None:
x = Activation(activation)(x)
else:
if batch_normalization:
x = BatchNormalization()(x)
if activation is not None:
x = Activation(activation)(x)
x = conv(x)
return x
def resnet_v2(input_shape, depth, num_classes=10):
"""ResNet Version 2 Model builder [b]
Stacks of (1 x 1)-(3 x 3)-(1 x 1) BN-ReLU-Conv2D or also known as
bottleneck layer
First shortcut connection per layer is 1 x 1 Conv2D.
Second and onwards shortcut connection is identity.
At the beginning of each stage, the feature map size is halved (downsampled)
by a convolutional layer with strides=2, while the number of filter maps is
    doubled. Within each stage, the layers have the same number of filters and the
    same filter map sizes.
    Feature map sizes:
conv1 : 32x32, 16
stage 0: 32x32, 64
stage 1: 16x16, 128
stage 2: 8x8, 256
# Arguments
input_shape (tensor): shape of input image tensor
depth (int): number of core convolutional layers
num_classes (int): number of classes (CIFAR10 has 10)
# Returns
model (Model): Keras model instance
"""
if (depth - 2) % 9 != 0:
raise ValueError('depth should be 9n+2 (eg 56 or 110 in [b])')
# Start model definition.
num_filters_in = 16
num_res_blocks = int((depth - 2) / 9)
inputs = Input(shape=input_shape)
# v2 performs Conv2D with BN-ReLU on input before splitting into 2 paths
x = resnet_layer(inputs=inputs,
num_filters=num_filters_in,
conv_first=True)
# Instantiate the stack of residual units
for stage in range(3):
for res_block in range(num_res_blocks):
activation = 'relu'
batch_normalization = True
strides = 1
if stage == 0:
num_filters_out = num_filters_in * 4
if res_block == 0: # first layer and first stage
activation = None
batch_normalization = False
else:
num_filters_out = num_filters_in * 2
if res_block == 0: # first layer but not first stage
strides = 2 # downsample
# bottleneck residual unit
y = resnet_layer(inputs=x,
num_filters=num_filters_in,
kernel_size=1,
strides=strides,
activation=activation,
batch_normalization=batch_normalization,
conv_first=False)
y = resnet_layer(inputs=y,
num_filters=num_filters_in,
conv_first=False)
y = resnet_layer(inputs=y,
num_filters=num_filters_out,
kernel_size=1,
conv_first=False)
if res_block == 0:
# linear projection residual shortcut connection to match
# changed dims
x = resnet_layer(inputs=x,
num_filters=num_filters_out,
kernel_size=1,
strides=strides,
activation=None,
batch_normalization=False)
x = keras.layers.add([x, y])
num_filters_in = num_filters_out
# Add classifier on top.
# v2 has BN-ReLU before Pooling
x = BatchNormalization()(x)
x = Activation('relu')(x)
x = AveragePooling2D(pool_size=8)(x)
y = Flatten()(x)
outputs = Dense(num_classes,
activation='softmax',
kernel_initializer='he_normal')(y)
# Instantiate model.
model = Model(inputs=inputs, outputs=outputs)
return model<jupyter_output><empty_output><jupyter_text>### TASK: Tackling CIFAR10 with ResNet29 V2 (OPTIONAL TASK)
Using the above implementation, train a ResNet29 V2 network on the CIFAR10 dataset (with and without image augmentation). Please present a summary of your models and visualize the model architectures. Plot the cross-entropy loss and top-1 accuracy curves obtained during training. Please discuss your experimental results using the following format, and contrast and DISCUSS the results with respect to augmentation:
| Model | Detail| Input size| Top-1 Test Acc| Param(M)| Mult-Adds (B)| Depth|train time|Num of Epochs|batchsize|GPU desc|
| ------------- |:-------------:| -----|---------|---------|---------|----|---|---|---|---|
| AlexNet| XXXX |224x224| XXX| 60M| XXB| XX |XXX|XXX|XXX|XXX|
| ResNet29 V2| XXXX |224x224| XXX| 60M| XXB| XX |XXX|XXX|XXX|XXX|<jupyter_code># test cifar10 with img augmentation
(x_train, y_train), (x_test, y_test) = cifar10.load_data()
input_shape = x_train.shape[1:]
n = 3
depth = n * 9 + 2
model = resnet_v2(input_shape=input_shape, depth=depth)
run_resnet_model(model, 'cifar10', True, input_shape, x_train, y_train, x_test, y_test)
# test cifar10 without img augmentation
run_resnet_model(model, 'cifar10', False, input_shape, x_train, y_train, x_test, y_test)<jupyter_output>x_train shape: (50000, 32, 32, 3)
50000 train samples
10000 test samples
y_train shape: (50000, 1)
Learning rate: 0.001
ResNet29v1Aug_False
__________________________________________________________________________________________________
Layer (type) Output Shape Param # Connected to
==================================================================================================
input_8 (InputLayer) (None, 32, 32, 3) 0
__________________________________________________________________________________________________
conv2d_148 (Conv2D) (None, 32, 32, 16) 448 input_8[0][0]
__________________________________________________________________________________________________
batch_normalization_134 (BatchN (None, 32, 32, 16) 64 conv2d_148[0][0]
___________________________________________________________________[...]<jupyter_text># TASK: implement Basic ResNet Module (OPTIONAL)
Using the following ResNetV1 or ResNetV2 as inspiration and as a starting point:
* Implement the basic ResNet building block V0 (on the left hand side in the figure below); a starter sketch is provided after the figure
* Train ResNet-20 using this V0 ResNet module on the CIFAR10 small images dataset
* Report your experimental results as you have done previously

# From Resnet to ResNeXt on CIFAR10
ResNeXt extends the ResNet block with an expanded block architecture that depends on a cardinality parameter (32 in this case), which denotes the number of lightweight bottleneck branches aggregated inside each residual unit (previously, in ResNet V2, we had only one bottleneck path of (1x1, 3x3, 1x1) convolutions). It can be further visualised in the diagram from the paper; a minimal sketch of the grouped (aggregated) transformation follows.
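The sketch below illustrates only the grouped-transformation idea, assuming channels_last data and a hypothetical helper name; the implementation this notebook actually uses is the `__grouped_convolution_block` inside `resnext.py` further down.<jupyter_code>from keras.layers import Conv2D, Lambda, concatenate

def grouped_conv_sketch(x, grouped_channels, cardinality, strides=1):
    """Split the channels into `cardinality` groups, convolve each group separately,
    then concatenate the group outputs back together (channels_last assumed)."""
    groups = []
    for c in range(cardinality):
        # Slice out one group of channels (c=c binds the loop variable at definition time)
        group = Lambda(lambda z, c=c: z[:, :, :, c * grouped_channels:(c + 1) * grouped_channels])(x)
        group = Conv2D(grouped_channels, (3, 3), padding='same', strides=strides, use_bias=False)(group)
        groups.append(group)
    return concatenate(groups, axis=-1)<jupyter_output><empty_output><jupyter_text>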
An Implementation of ResNeXt models from the paper Aggregated Residual Transformations for Deep Neural Networks in Keras 2.0+ is available [here](https://github.com/titu1994/Keras-ResNeXt)
This repo contains code for building the general ResNeXt model (optimized for datasets similar to CIFAR) and ResNeXtImageNet (optimized for the ImageNet dataset).
## Visualize the ResNeXt architecture
It is difficult to visualize the ResNeXt architecture. But here we go. Go outside the notebook and visualize the .png file using an image viewer, and zoom in!
## TASK: Tackling CIFAR10 with ResNeXt (OPTIONAL TASK)
Using the [ResNeXt implementation here](https://github.com/titu1994/Keras-ResNeXt), train a network on the CIFAR10 dataset (with and without image augmentation). Please report your experimental results using the following format, and contrast and DISCUSS the results with and without augmentation, along with training progress plots and architecture diagrams:
| Model | Detail| Input size| Top-1 Test Acc| Param(M)| Mult-Adds (B)| Depth|train time|Num of Epochs|batchsize|GPU desc|
| ------------- |:-------------:| -----|---------|---------|---------|----|---|---|---|---|
| AlexNet| XXXX |224x224| XXX| 60M| XXB| XX |XXX|XXX|XXX|XXX|
| ResNeXt| XXXX |224x224| XXX| 60M| XXB| XX |XXX|XXX|XXX|XXX|
Starter code for ResNeXt and CIFAR10 experiments is included below.<jupyter_code>#%%writefile __init__.py
!cat __init__.py
%%writefile resnext.py
'''ResNeXt models for Keras.
# Reference
- [Aggregated Residual Transformations for Deep Neural Networks](https://arxiv.org/pdf/1611.05431.pdf))
'''
from __future__ import print_function
from __future__ import absolute_import
from __future__ import division
import warnings
from keras.models import Model
from keras.layers.core import Dense, Lambda
from keras.layers.core import Activation
from keras.layers.convolutional import Conv2D
from keras.layers.pooling import GlobalAveragePooling2D, GlobalMaxPooling2D, MaxPooling2D
from keras.layers import Input
from keras.layers.merge import concatenate, add
from keras.layers.normalization import BatchNormalization
from keras.regularizers import l2
from keras.utils.layer_utils import convert_all_kernels_in_model
from keras.utils.data_utils import get_file
from keras.engine.topology import get_source_inputs
from keras_applications.imagenet_utils import _obtain_input_shape
import keras.backend as K
CIFAR_TH_WEIGHTS_PATH = ''
CIFAR_TF_WEIGHTS_PATH = ''
CIFAR_TH_WEIGHTS_PATH_NO_TOP = ''
CIFAR_TF_WEIGHTS_PATH_NO_TOP = ''
IMAGENET_TH_WEIGHTS_PATH = ''
IMAGENET_TF_WEIGHTS_PATH = ''
IMAGENET_TH_WEIGHTS_PATH_NO_TOP = ''
IMAGENET_TF_WEIGHTS_PATH_NO_TOP = ''
def ResNext(input_shape=None, depth=29, cardinality=8, width=64, weight_decay=5e-4,
include_top=True, weights=None, input_tensor=None,
pooling=None, classes=10):
"""Instantiate the ResNeXt architecture. Note that ,
when using TensorFlow for best performance you should set
`image_data_format="channels_last"` in your Keras config
at ~/.keras/keras.json.
The model are compatible with both
TensorFlow and Theano. The dimension ordering
convention used by the model is the one
specified in your Keras config file.
# Arguments
        depth: number of layers in the ResNeXt model. Can be an
integer or a list of integers.
cardinality: the size of the set of transformations
width: multiplier to the ResNeXt width (number of filters)
weight_decay: weight decay (l2 norm)
include_top: whether to include the fully-connected
layer at the top of the network.
weights: `None` (random initialization)
input_tensor: optional Keras tensor (i.e. output of `layers.Input()`)
to use as image input for the model.
input_shape: optional shape tuple, only to be specified
if `include_top` is False (otherwise the input shape
has to be `(32, 32, 3)` (with `tf` dim ordering)
or `(3, 32, 32)` (with `th` dim ordering).
It should have exactly 3 inputs channels,
and width and height should be no smaller than 8.
E.g. `(200, 200, 3)` would be one valid value.
pooling: Optional pooling mode for feature extraction
when `include_top` is `False`.
- `None` means that the output of the model will be
the 4D tensor output of the
last convolutional layer.
- `avg` means that global average pooling
will be applied to the output of the
last convolutional layer, and thus
the output of the model will be a 2D tensor.
- `max` means that global max pooling will
be applied.
classes: optional number of classes to classify images
into, only to be specified if `include_top` is True, and
if no `weights` argument is specified.
# Returns
A Keras model instance.
"""
if weights not in {'cifar10', None}:
raise ValueError('The `weights` argument should be either '
'`None` (random initialization) or `cifar10` '
'(pre-training on CIFAR-10).')
if weights == 'cifar10' and include_top and classes != 10:
raise ValueError('If using `weights` as CIFAR 10 with `include_top`'
' as true, `classes` should be 10')
if type(depth) == int:
if (depth - 2) % 9 != 0:
raise ValueError('Depth of the network must be such that (depth - 2)'
'should be divisible by 9.')
# Determine proper input shape
input_shape = _obtain_input_shape(input_shape,
default_size=32,
min_size=8,
data_format=K.image_data_format(),
require_flatten=include_top)
if input_tensor is None:
img_input = Input(shape=input_shape)
else:
if not K.is_keras_tensor(input_tensor):
img_input = Input(tensor=input_tensor, shape=input_shape)
else:
img_input = input_tensor
x = __create_res_next(classes, img_input, include_top, depth, cardinality, width,
weight_decay, pooling)
# Ensure that the model takes into account
# any potential predecessors of `input_tensor`.
if input_tensor is not None:
inputs = get_source_inputs(input_tensor)
else:
inputs = img_input
# Create model.
model = Model(inputs, x, name='resnext')
# load weights
if weights == 'cifar10':
if (depth == 29) and (cardinality == 8) and (width == 64):
# Default parameters match. Weights for this model exist:
if K.image_data_format() == 'channels_first':
if include_top:
weights_path = get_file('resnext_cifar_10_8_64_th_dim_ordering_th_kernels.h5',
CIFAR_TH_WEIGHTS_PATH,
cache_subdir='models')
else:
weights_path = get_file('resnext_cifar_10_8_64_th_dim_ordering_th_kernels_no_top.h5',
CIFAR_TH_WEIGHTS_PATH_NO_TOP,
cache_subdir='models')
model.load_weights(weights_path)
if K.backend() == 'tensorflow':
warnings.warn('You are using the TensorFlow backend, yet you '
'are using the Theano '
'image dimension ordering convention '
'(`image_dim_ordering="th"`). '
'For best performance, set '
'`image_dim_ordering="tf"` in '
'your Keras config '
'at ~/.keras/keras.json.')
convert_all_kernels_in_model(model)
else:
if include_top:
weights_path = get_file('resnext_cifar_10_8_64_tf_dim_ordering_tf_kernels.h5',
CIFAR_TF_WEIGHTS_PATH,
cache_subdir='models')
else:
weights_path = get_file('resnext_cifar_10_8_64_tf_dim_ordering_tf_kernels_no_top.h5',
CIFAR_TF_WEIGHTS_PATH_NO_TOP,
cache_subdir='models')
model.load_weights(weights_path)
if K.backend() == 'theano':
convert_all_kernels_in_model(model)
return model
def ResNextImageNet(input_shape=None, depth=[3, 4, 6, 3], cardinality=32, width=4, weight_decay=5e-4,
include_top=True, weights=None, input_tensor=None,
pooling=None, classes=1000):
""" Instantiate the ResNeXt architecture for the ImageNet dataset. Note that ,
when using TensorFlow for best performance you should set
`image_data_format="channels_last"` in your Keras config
at ~/.keras/keras.json.
The model are compatible with both
TensorFlow and Theano. The dimension ordering
convention used by the model is the one
specified in your Keras config file.
# Arguments
        depth: number of layers in each block, defined as a list.
ResNeXt-50 can be defined as [3, 4, 6, 3].
ResNeXt-101 can be defined as [3, 4, 23, 3].
Defaults is ResNeXt-50.
cardinality: the size of the set of transformations
width: multiplier to the ResNeXt width (number of filters)
weight_decay: weight decay (l2 norm)
include_top: whether to include the fully-connected
layer at the top of the network.
weights: `None` (random initialization) or `imagenet` (trained
on ImageNet)
input_tensor: optional Keras tensor (i.e. output of `layers.Input()`)
to use as image input for the model.
input_shape: optional shape tuple, only to be specified
if `include_top` is False (otherwise the input shape
has to be `(224, 224, 3)` (with `tf` dim ordering)
or `(3, 224, 224)` (with `th` dim ordering).
It should have exactly 3 inputs channels,
and width and height should be no smaller than 8.
E.g. `(200, 200, 3)` would be one valid value.
pooling: Optional pooling mode for feature extraction
when `include_top` is `False`.
- `None` means that the output of the model will be
the 4D tensor output of the
last convolutional layer.
- `avg` means that global average pooling
will be applied to the output of the
last convolutional layer, and thus
the output of the model will be a 2D tensor.
- `max` means that global max pooling will
be applied.
classes: optional number of classes to classify images
into, only to be specified if `include_top` is True, and
if no `weights` argument is specified.
# Returns
A Keras model instance.
"""
if weights not in {'imagenet', None}:
raise ValueError('The `weights` argument should be either '
'`None` (random initialization) or `imagenet` '
'(pre-training on ImageNet).')
if weights == 'imagenet' and include_top and classes != 1000:
raise ValueError('If using `weights` as imagenet with `include_top`'
' as true, `classes` should be 1000')
if type(depth) == int and (depth - 2) % 9 != 0:
raise ValueError('Depth of the network must be such that (depth - 2)'
'should be divisible by 9.')
# Determine proper input shape
input_shape = _obtain_input_shape(input_shape,
default_size=224,
min_size=112,
data_format=K.image_data_format(),
require_flatten=include_top)
if input_tensor is None:
img_input = Input(shape=input_shape)
else:
if not K.is_keras_tensor(input_tensor):
img_input = Input(tensor=input_tensor, shape=input_shape)
else:
img_input = input_tensor
x = __create_res_next_imagenet(classes, img_input, include_top, depth, cardinality, width,
weight_decay, pooling)
# Ensure that the model takes into account
# any potential predecessors of `input_tensor`.
if input_tensor is not None:
inputs = get_source_inputs(input_tensor)
else:
inputs = img_input
# Create model.
model = Model(inputs, x, name='resnext')
# load weights
if weights == 'imagenet':
if (depth == [3, 4, 6, 3]) and (cardinality == 32) and (width == 4):
# Default parameters match. Weights for this model exist:
if K.image_data_format() == 'channels_first':
if include_top:
weights_path = get_file('resnext_imagenet_32_4_th_dim_ordering_th_kernels.h5',
IMAGENET_TH_WEIGHTS_PATH,
cache_subdir='models')
else:
weights_path = get_file('resnext_imagenet_32_4_th_dim_ordering_th_kernels_no_top.h5',
IMAGENET_TH_WEIGHTS_PATH_NO_TOP,
cache_subdir='models')
model.load_weights(weights_path)
if K.backend() == 'tensorflow':
warnings.warn('You are using the TensorFlow backend, yet you '
'are using the Theano '
'image dimension ordering convention '
'(`image_dim_ordering="th"`). '
'For best performance, set '
'`image_dim_ordering="tf"` in '
'your Keras config '
'at ~/.keras/keras.json.')
convert_all_kernels_in_model(model)
else:
if include_top:
weights_path = get_file('resnext_imagenet_32_4_tf_dim_ordering_tf_kernels.h5',
IMAGENET_TF_WEIGHTS_PATH,
cache_subdir='models')
else:
weights_path = get_file('resnext_imagenet_32_4_tf_dim_ordering_tf_kernels_no_top.h5',
IMAGENET_TF_WEIGHTS_PATH_NO_TOP,
cache_subdir='models')
model.load_weights(weights_path)
if K.backend() == 'theano':
convert_all_kernels_in_model(model)
return model
def __initial_conv_block(input, weight_decay=5e-4):
''' Adds an initial convolution block, with batch normalization and relu activation
Args:
input: input tensor
weight_decay: weight decay factor
Returns: a keras tensor
'''
channel_axis = 1 if K.image_data_format() == 'channels_first' else -1
x = Conv2D(64, (3, 3), padding='same', use_bias=False, kernel_initializer='he_normal',
kernel_regularizer=l2(weight_decay))(input)
x = BatchNormalization(axis=channel_axis)(x)
x = Activation('relu')(x)
return x
def __initial_conv_block_imagenet(input, weight_decay=5e-4):
''' Adds an initial conv block, with batch norm and relu for the inception resnext
Args:
input: input tensor
weight_decay: weight decay factor
Returns: a keras tensor
'''
channel_axis = 1 if K.image_data_format() == 'channels_first' else -1
x = Conv2D(64, (7, 7), padding='same', use_bias=False, kernel_initializer='he_normal',
kernel_regularizer=l2(weight_decay), strides=(2, 2))(input)
x = BatchNormalization(axis=channel_axis)(x)
x = Activation('relu')(x)
x = MaxPooling2D((3, 3), strides=(2, 2), padding='same')(x)
return x
def __grouped_convolution_block(input, grouped_channels, cardinality, strides, weight_decay=5e-4):
''' Adds a grouped convolution block. It is an equivalent block from the paper
Args:
input: input tensor
grouped_channels: grouped number of filters
cardinality: cardinality factor describing the number of groups
strides: performs strided convolution for downscaling if > 1
weight_decay: weight decay term
Returns: a keras tensor
'''
init = input
channel_axis = 1 if K.image_data_format() == 'channels_first' else -1
group_list = []
if cardinality == 1:
# with cardinality 1, it is a standard convolution
x = Conv2D(grouped_channels, (3, 3), padding='same', use_bias=False, strides=(strides, strides),
kernel_initializer='he_normal', kernel_regularizer=l2(weight_decay))(init)
x = BatchNormalization(axis=channel_axis)(x)
x = Activation('relu')(x)
return x
    for c in range(cardinality):
        if K.image_data_format() == 'channels_last':
            x = Lambda(lambda z, c=c: z[:, :, :, c * grouped_channels:(c + 1) * grouped_channels])(input)
        else:
            x = Lambda(lambda z, c=c: z[:, c * grouped_channels:(c + 1) * grouped_channels, :, :])(input)
x = Conv2D(grouped_channels, (3, 3), padding='same', use_bias=False, strides=(strides, strides),
kernel_initializer='he_normal', kernel_regularizer=l2(weight_decay))(x)
group_list.append(x)
group_merge = concatenate(group_list, axis=channel_axis)
x = BatchNormalization(axis=channel_axis)(group_merge)
x = Activation('relu')(x)
return x
def __bottleneck_block(input, filters=64, cardinality=8, strides=1, weight_decay=5e-4):
''' Adds a bottleneck block
Args:
input: input tensor
filters: number of output filters
cardinality: cardinality factor described number of
grouped convolutions
strides: performs strided convolution for downsampling if > 1
weight_decay: weight decay factor
Returns: a keras tensor
'''
init = input
grouped_channels = int(filters / cardinality)
channel_axis = 1 if K.image_data_format() == 'channels_first' else -1
# Check if input number of filters is same as 16 * k, else create convolution2d for this input
if K.image_data_format() == 'channels_first':
if init._keras_shape[1] != 2 * filters:
init = Conv2D(filters * 2, (1, 1), padding='same', strides=(strides, strides),
use_bias=False, kernel_initializer='he_normal', kernel_regularizer=l2(weight_decay))(init)
init = BatchNormalization(axis=channel_axis)(init)
else:
if init._keras_shape[-1] != 2 * filters:
init = Conv2D(filters * 2, (1, 1), padding='same', strides=(strides, strides),
use_bias=False, kernel_initializer='he_normal', kernel_regularizer=l2(weight_decay))(init)
init = BatchNormalization(axis=channel_axis)(init)
x = Conv2D(filters, (1, 1), padding='same', use_bias=False,
kernel_initializer='he_normal', kernel_regularizer=l2(weight_decay))(input)
x = BatchNormalization(axis=channel_axis)(x)
x = Activation('relu')(x)
x = __grouped_convolution_block(x, grouped_channels, cardinality, strides, weight_decay)
x = Conv2D(filters * 2, (1, 1), padding='same', use_bias=False, kernel_initializer='he_normal',
kernel_regularizer=l2(weight_decay))(x)
x = BatchNormalization(axis=channel_axis)(x)
x = add([init, x])
x = Activation('relu')(x)
return x
def __create_res_next(nb_classes, img_input, include_top, depth=29, cardinality=8, width=4,
weight_decay=5e-4, pooling=None):
''' Creates a ResNeXt model with specified parameters
Args:
nb_classes: Number of output classes
img_input: Input tensor or layer
include_top: Flag to include the last dense layer
depth: Depth of the network. Can be an positive integer or a list
Compute N = (n - 2) / 9.
For a depth of 56, n = 56, N = (56 - 2) / 9 = 6
For a depth of 101, n = 101, N = (101 - 2) / 9 = 11
cardinality: the size of the set of transformations.
Increasing cardinality improves classification accuracy,
width: Width of the network.
weight_decay: weight_decay (l2 norm)
pooling: Optional pooling mode for feature extraction
when `include_top` is `False`.
- `None` means that the output of the model will be
the 4D tensor output of the
last convolutional layer.
- `avg` means that global average pooling
will be applied to the output of the
last convolutional layer, and thus
the output of the model will be a 2D tensor.
- `max` means that global max pooling will
be applied.
Returns: a Keras Model
'''
if type(depth) is list or type(depth) is tuple:
# If a list is provided, defer to user how many blocks are present
N = list(depth)
else:
# Otherwise, default to 3 blocks each of default number of group convolution blocks
N = [(depth - 2) // 9 for _ in range(3)]
filters = cardinality * width
filters_list = []
for i in range(len(N)):
filters_list.append(filters)
filters *= 2 # double the size of the filters
x = __initial_conv_block(img_input, weight_decay)
# block 1 (no pooling)
for i in range(N[0]):
x = __bottleneck_block(x, filters_list[0], cardinality, strides=1, weight_decay=weight_decay)
N = N[1:] # remove the first block from block definition list
filters_list = filters_list[1:] # remove the first filter from the filter list
# block 2 to N
for block_idx, n_i in enumerate(N):
for i in range(n_i):
if i == 0:
x = __bottleneck_block(x, filters_list[block_idx], cardinality, strides=2,
weight_decay=weight_decay)
else:
x = __bottleneck_block(x, filters_list[block_idx], cardinality, strides=1,
weight_decay=weight_decay)
if include_top:
x = GlobalAveragePooling2D()(x)
x = Dense(nb_classes, use_bias=False, kernel_regularizer=l2(weight_decay),
kernel_initializer='he_normal', activation='softmax')(x)
else:
if pooling == 'avg':
x = GlobalAveragePooling2D()(x)
elif pooling == 'max':
x = GlobalMaxPooling2D()(x)
return x
def __create_res_next_imagenet(nb_classes, img_input, include_top, depth, cardinality=32, width=4,
weight_decay=5e-4, pooling=None):
''' Creates a ResNeXt model with specified parameters
Args:
nb_classes: Number of output classes
img_input: Input tensor or layer
include_top: Flag to include the last dense layer
depth: Depth of the network. List of integers.
Increasing cardinality improves classification accuracy,
width: Width of the network.
weight_decay: weight_decay (l2 norm)
pooling: Optional pooling mode for feature extraction
when `include_top` is `False`.
- `None` means that the output of the model will be
the 4D tensor output of the
last convolutional layer.
- `avg` means that global average pooling
will be applied to the output of the
last convolutional layer, and thus
the output of the model will be a 2D tensor.
- `max` means that global max pooling will
be applied.
Returns: a Keras Model
'''
if type(depth) is list or type(depth) is tuple:
# If a list is provided, defer to user how many blocks are present
N = list(depth)
else:
# Otherwise, default to 3 blocks each of default number of group convolution blocks
N = [(depth - 2) // 9 for _ in range(3)]
filters = cardinality * width
filters_list = []
for i in range(len(N)):
filters_list.append(filters)
filters *= 2 # double the size of the filters
x = __initial_conv_block_imagenet(img_input, weight_decay)
# block 1 (no pooling)
for i in range(N[0]):
x = __bottleneck_block(x, filters_list[0], cardinality, strides=1, weight_decay=weight_decay)
N = N[1:] # remove the first block from block definition list
filters_list = filters_list[1:] # remove the first filter from the filter list
# block 2 to N
for block_idx, n_i in enumerate(N):
for i in range(n_i):
if i == 0:
x = __bottleneck_block(x, filters_list[block_idx], cardinality, strides=2,
weight_decay=weight_decay)
else:
x = __bottleneck_block(x, filters_list[block_idx], cardinality, strides=1,
weight_decay=weight_decay)
if include_top:
x = GlobalAveragePooling2D()(x)
x = Dense(nb_classes, use_bias=False, kernel_regularizer=l2(weight_decay),
kernel_initializer='he_normal', activation='softmax')(x)
else:
if pooling == 'avg':
x = GlobalAveragePooling2D()(x)
elif pooling == 'max':
x = GlobalMaxPooling2D()(x)
return x
if __name__ == '__main__':
model = ResNext((32, 32, 3), depth=29, cardinality=8, width=64)
#model.summary()
#plot_model(model, to_file='model_plotResNeXt-Cifar10-v1.png', show_shapes=True, show_layer_names=True)
display_nn_model(model, 'model_plotResNeXt-Cifar10-v1.png')
import resnext
#from resnext import ResNeXt
model = resnext.ResNext((32, 32, 3), depth=29, cardinality=8, width=64)
#model.summary()
#plot_model(model, to_file='model_plotResNeXt-Cifar10-v1.png', show_shapes=True, show_layer_names=True)
display_nn_model(model, 'model_plotResNeXt-Cifar10-v1.png')
%run resnext.py
from __future__ import print_function
from __future__ import division
import os
import numpy as np
import sklearn.metrics as metrics
from keras import backend as K
from keras.callbacks import ModelCheckpoint, ReduceLROnPlateau
from keras.datasets import cifar10
from keras.optimizers import Adam
from keras.preprocessing.image import ImageDataGenerator
from keras.utils import np_utils
from resnext import ResNext
batch_size = 32
nb_classes = 10
nb_epoch = 100  # number of training epochs
img_rows, img_cols = 32, 32
img_channels = 3
img_dim = (img_channels, img_rows, img_cols) if K.image_dim_ordering() == "th" else (img_rows, img_cols, img_channels)
depth = 29
cardinality = 8
width = 16
model = ResNext(img_dim, depth=depth, cardinality=cardinality, width=width, weights=None, classes=nb_classes)
#plot_model(model, to_file='model_plotResNeXt-Cifar10.png', show_shapes=True, show_layer_names=True)
print("Model created")
display_nn_model(model, 'model_plotResNeXt-Cifar10.png')
model.summary()
optimizer = Adam(lr=1e-3) # Using Adam instead of SGD to speed up training
model.compile(loss='categorical_crossentropy', optimizer=optimizer, metrics=["accuracy"])
print("Finished compiling")
print("Building model...")
(trainX, trainY), (testX, testY) = cifar10.load_data()
trainX = trainX.astype('float32')
testX = testX.astype('float32')
trainX /= 255.
testX /= 255.
Y_train = np_utils.to_categorical(trainY, nb_classes)
Y_test = np_utils.to_categorical(testY, nb_classes)
generator = ImageDataGenerator(rotation_range=15,
width_shift_range=5./32,
height_shift_range=5./32,
horizontal_flip=True)
generator.fit(trainX, seed=0)
out_dir = "weights/"
if not os.path.exists(out_dir):
os.makedirs(out_dir)
# Load model
weights_file = "weights/ResNext-8-64d.h5"
if os.path.exists(weights_file):
model.load_weights(weights_file)
print("Model loaded.")
lr_reducer = ReduceLROnPlateau(monitor='val_loss', factor=np.sqrt(0.1),
cooldown=0, patience=10, min_lr=1e-6)
model_checkpoint = ModelCheckpoint(weights_file, monitor="val_acc", save_best_only=True,
save_weights_only=True, mode='auto')
callbacks = [lr_reducer, model_checkpoint]
hist = model.fit_generator(generator.flow(trainX, Y_train, batch_size=batch_size),
steps_per_epoch=len(trainX) // batch_size,
epochs=nb_epoch,
callbacks=callbacks,
validation_data=(testX, Y_test),
validation_steps=testX.shape[0] // batch_size, verbose=1)
yPreds = model.predict(testX)
yPred = np.argmax(yPreds, axis=1)
yTrue = testY
accuracy = metrics.accuracy_score(yTrue, yPred) * 100
error = 100 - accuracy
print("Accuracy : ", accuracy)
print("Error : ", error)
!ls -l weights/ResNext-8-64d.h5
import sys
sys.path
len(hist.history["val_loss"])
plot_learning_progress(hist, 'ResNeXt')<jupyter_output><empty_output><jupyter_text>## Visualize the architecture (RERUN to see the latest version)
# TASK: Experiment: Tackling CIFAR100 (One hundred) with ResNeXt (OPTIONAL TASK)
Using the [ResNeXt implementation here](https://github.com/titu1994/Keras-ResNeXt), train a network on the CIFAR100 dataset (with and without image augmentation). Please report your experimental results using the following format, and contrast and DISCUSS the results with and without augmentation, along with training progress plots and architecture diagrams:
| Model | Detail| Input size| Top-1 Test Acc| Param(M)| Mult-Adds (B)| Depth|train time|Num of Epochs|batchsize|GPU desc|
| ------------- |:-------------:| -----|---------|---------|---------|----|---|---|---|---|
| AlexNet| XXXX |224x224| XXX| 60M| XXB| XX |XXX|XXX|XXX|XXX|
| ResNeXt| XXXX |224x224| XXX| 60M| XXB| XX |XXX|XXX|XXX|XXX|
<jupyter_code>from __future__ import print_function
from __future__ import division
import os
import numpy as np
import sklearn.metrics as metrics
from keras import backend as K
from keras.callbacks import ModelCheckpoint, ReduceLROnPlateau
from keras.datasets import cifar100
from keras.optimizers import Adam
from keras.preprocessing.image import ImageDataGenerator
from keras.utils import np_utils
from resnext import ResNext
batch_size = 32
nb_classes = 100
nb_epoch = 100  # number of training epochs
img_rows, img_cols = 32, 32
img_channels = 3
img_dim = (img_channels, img_rows, img_cols) if K.image_dim_ordering() == "th" else (img_rows, img_cols, img_channels)
depth = 29
cardinality = 8
width = 16
model = ResNext(img_dim, depth=depth, cardinality=cardinality, width=width, weights=None, classes=nb_classes)
print("Model created")
#plot_model(model, to_file='model_plotResNeXt-Cifar100.png', show_shapes=True, show_layer_names=True)
display_nn_model(model, 'model_plotResNeXt-Cifar100.png')
model.summary()
optimizer = Adam(lr=1e-3) # Using Adam instead of SGD to speed up training
model.compile(loss='categorical_crossentropy', optimizer=optimizer, metrics=["accuracy"])
print("Finished compiling")
print("Building model...")
(trainX, trainY), (testX, testY) = cifar100.load_data()
trainX = trainX.astype('float32')
testX = testX.astype('float32')
trainX /= 255.
testX /= 255.
Y_train = np_utils.to_categorical(trainY, nb_classes)
Y_test = np_utils.to_categorical(testY, nb_classes)
generator = ImageDataGenerator(rotation_range=15,
width_shift_range=5./32,
height_shift_range=5./32,
horizontal_flip=True)
generator.fit(trainX, seed=0)
out_dir = "weights/"
if not os.path.exists(out_dir):
os.makedirs(out_dir)
# Load model
weights_file = "weights/ResNext-8-64d-CIFAR100.h5"
if os.path.exists(weights_file):
model.load_weights(weights_file)
print("Model loaded.")
lr_reducer = ReduceLROnPlateau(monitor='val_loss', factor=np.sqrt(0.1),
cooldown=0, patience=10, min_lr=1e-6)
model_checkpoint = ModelCheckpoint(weights_file, monitor="val_acc", save_best_only=True,
save_weights_only=True, mode='auto')
callbacks = [lr_reducer, model_checkpoint]
hist = model.fit_generator(generator.flow(trainX, Y_train, batch_size=batch_size),
steps_per_epoch=len(trainX) // batch_size,
epochs=nb_epoch,
callbacks=callbacks,
validation_data=(testX, Y_test),
validation_steps=testX.shape[0] // batch_size, verbose=1)
yPreds = model.predict(testX)
yPred = np.argmax(yPreds, axis=1)
yTrue = testY
accuracy = metrics.accuracy_score(yTrue, yPred) * 100
error = 100 - accuracy
print("Accuracy : ", accuracy)
print("Error : ", error)<jupyter_output><empty_output><jupyter_text>## Plot learning progress<jupyter_code>plot_learning_progress(hist, 'ResNeXt-Cifar100')<jupyter_output><empty_output>
<jupyter_start><jupyter_text># First Framework
1. Detection: Haar Cascade Classifier
2. Landmark Extraction: DLib (Kazemi & Sullivan Method)
3. Keyframe Extraction: Using thresholding on the size of the landmark
## Detect Faces<jupyter_code>haar_classifier = cv.CascadeClassifier('model_data/haarcascade_frontalface_alt.xml')
def detect_faces(cascade_classifier, gray_img, scale_factor=1.1, min_neighbors=5):
faces = cascade_classifier.detectMultiScale(gray_img, scaleFactor=scale_factor, minNeighbors=min_neighbors)
for (x, y, w, h) in faces:
cv.rectangle(gray_img, (x, y), (x+w, y+h), (0, 255, 0), 3)
return gray_img, faces
img_with_face_bboxes, faces_position = detect_faces(haar_classifier, gray_img)
plt.imshow(img_with_face_bboxes, cmap='gray')
for x, y, w, h in faces_position:
face_img = img[y:y+h, x:x+w]
plt.imshow(cv.cvtColor(face_img, cv.COLOR_BGR2RGB))
faces_position<jupyter_output><empty_output><jupyter_text>## Facial Landmark Extraction
For the details of the landmark indices for each face region, see the following link:
https://www.pyimagesearch.com/2017/04/10/detect-eyes-nose-lips-jaw-dlib-opencv-python/
Caution!!! The image at that link uses 1-based indices; convert to 0-based indices by subtracting 1.<jupyter_code>plt.imshow(cv.cvtColor(img, cv.COLOR_BGR2RGB))
extractor = dlib.shape_predictor('model_data/shape_predictor_68_face_landmarks.dat')
def facial_landmark_extraction(extractor, gray_img, face_bboxes_position):
'''
    @brief: The function extracts all facial landmarks from a frame
    Return: Dictionary of all facial landmarks in the frame, keyed by face_index
'''
facial_landmark_list = dict()
for i, (x, y, w, h) in enumerate(face_bboxes_position):
rect = dlib.rectangle(x, y, x+w, y+h)
shape = extractor(gray_img, rect)
shape = face_utils.shape_to_np(shape)
facial_landmark_list[i] = shape
return facial_landmark_list
facials_landmarks_dict = facial_landmark_extraction(extractor, gray_img, faces_position)
# index point for the facial landmark for the jaw (0,16) mean point 0 until 16 is the point of jaw landmark
FACIAL_LANDMARKS_IDXS = dict({
'jaw': (0, 16),
'right_eyebrow': (17, 21),
'left_eyebrow': (22, 26),
'nose': (27, 35),
'right_eye': (36, 41),
'left_eye': (42, 47),
'mouth': (48, 68)
})
img_with_landmark = img_with_face_bboxes.copy()
for face_idx in facials_landmarks_dict:
for i, (x, y) in enumerate(facials_landmarks_dict[face_idx]):
cv.circle(img_with_landmark, (x,y), 1, (0, 255, 0), -1)
plt.imshow(img_with_landmark, cmap='gray')
for x, y, w, h in faces_position:
cropped_img = img_with_landmark[y:y+h, x:x+w]
plt.imshow(cropped_img, cmap='gray')<jupyter_output><empty_output><jupyter_text>## Keyframe Extraction<jupyter_code>FACIAL_LANDMARKS_CORNERS_IDXS = {
'jaw': (0, 16),
'right_eyebrow': (17, 21),
'left_eyebrow': (22, 26),
'nose': (27, 33), # From Top to Bot
'right_eye': (36, 39),
'left_eye': (42, 45),
'mouth': (48, 54)
}
def landmark_size_extraction(facial_landmarks_dict):
for face_idx in facial_landmarks_dict:
landmark = facial_landmarks_dict[face_idx]
jaw_idx_corner1, jaw_idx_corner2 = FACIAL_LANDMARKS_CORNERS_IDXS['jaw']
jaw_size = distance.euclidean(landmark[jaw_idx_corner1], landmark[jaw_idx_corner2])
right_eyebrow_idx_corner1, right_eyebrow_idx_corner2 = FACIAL_LANDMARKS_CORNERS_IDXS['right_eyebrow']
right_eyebrow_size = distance.euclidean(landmark[right_eyebrow_idx_corner1], landmark[right_eyebrow_idx_corner2])
left_eyebrow_idx_corner1, left_eyebrow_idx_corner2 = FACIAL_LANDMARKS_CORNERS_IDXS['left_eyebrow']
left_eyebrow_size = distance.euclidean(landmark[left_eyebrow_idx_corner1], landmark[left_eyebrow_idx_corner2])
nose_idx_corner1, nose_idx_corner2 = FACIAL_LANDMARKS_CORNERS_IDXS['nose']
nose_size = distance.euclidean(landmark[nose_idx_corner1], landmark[nose_idx_corner2])
right_eye_idx_corner1, right_eye_idx_corner2 = FACIAL_LANDMARKS_CORNERS_IDXS['right_eye']
right_eye_size = distance.euclidean(landmark[right_eye_idx_corner1], landmark[right_eye_idx_corner2])
left_eye_idx_corner1, left_eye_idx_corner2 = FACIAL_LANDMARKS_CORNERS_IDXS['left_eye']
left_eye_size = distance.euclidean(landmark[left_eye_idx_corner1], landmark[left_eye_idx_corner2])
mouth_idx_corner1, mouth_idx_corner2 = FACIAL_LANDMARKS_CORNERS_IDXS['mouth']
mouth_size = distance.euclidean(landmark[mouth_idx_corner1], landmark[mouth_idx_corner2])
return (jaw_size, right_eyebrow_size, left_eyebrow_size, nose_size, right_eye_size, left_eye_size, mouth_size)
landmark_size_extraction({})<jupyter_output><empty_output><jupyter_text>## One Function to Extract the Keyframe<jupyter_code>haar_classifier = cv.CascadeClassifier('model_data/haarcascade_frontalface_alt.xml')
extractor = dlib.shape_predictor('model_data/shape_predictor_68_face_landmarks.dat')
def detect_faces_v2(cascade_classifier, gray_img, scale_factor=1.1, min_neighbors=5):
faces = cascade_classifier.detectMultiScale(gray_img, scaleFactor=scale_factor, minNeighbors=min_neighbors)
return faces
def facial_landmark_extraction_v2(extractor, gray_img, face_bboxes_position):
'''
    @brief: Extract the 68 facial landmarks from a grayscale image for a single face bounding box.
    Return: numpy array of (x, y) landmark coordinates.
'''
x, y, w, h = face_bboxes_position
rect = dlib.rectangle(x, y, x+w, y+h)
shape = extractor(gray_img, rect)
shape = face_utils.shape_to_np(shape)
facial_landmarks_position = shape
return facial_landmarks_position
def landmark_size_extraction_v2(facial_landmarks_position):
landmark = facial_landmarks_position
jaw_idx_corner1, jaw_idx_corner2 = FACIAL_LANDMARKS_CORNERS_IDXS['jaw']
jaw_size = distance.euclidean(landmark[jaw_idx_corner1], landmark[jaw_idx_corner2])
right_eyebrow_idx_corner1, right_eyebrow_idx_corner2 = FACIAL_LANDMARKS_CORNERS_IDXS['right_eyebrow']
right_eyebrow_size = distance.euclidean(landmark[right_eyebrow_idx_corner1], landmark[right_eyebrow_idx_corner2])
left_eyebrow_idx_corner1, left_eyebrow_idx_corner2 = FACIAL_LANDMARKS_CORNERS_IDXS['left_eyebrow']
left_eyebrow_size = distance.euclidean(landmark[left_eyebrow_idx_corner1], landmark[left_eyebrow_idx_corner2])
nose_idx_corner1, nose_idx_corner2 = FACIAL_LANDMARKS_CORNERS_IDXS['nose']
nose_size = distance.euclidean(landmark[nose_idx_corner1], landmark[nose_idx_corner2])
right_eye_idx_corner1, right_eye_idx_corner2 = FACIAL_LANDMARKS_CORNERS_IDXS['right_eye']
right_eye_size = distance.euclidean(landmark[right_eye_idx_corner1], landmark[right_eye_idx_corner2])
left_eye_idx_corner1, left_eye_idx_corner2 = FACIAL_LANDMARKS_CORNERS_IDXS['left_eye']
left_eye_size = distance.euclidean(landmark[left_eye_idx_corner1], landmark[left_eye_idx_corner2])
mouth_idx_corner1, mouth_idx_corner2 = FACIAL_LANDMARKS_CORNERS_IDXS['mouth']
mouth_size = distance.euclidean(landmark[mouth_idx_corner1], landmark[mouth_idx_corner2])
return (jaw_size, right_eyebrow_size, left_eyebrow_size, nose_size, right_eye_size, left_eye_size, mouth_size)
def check_person_image_keyframe(filename, face_detection_classifier, facial_landmark_extractor):
# resized_dimension = (200, 900)
# gray_img = cv.resize(cv.imread(filename, cv.IMREAD_GRAYSCALE), resized_dimension, cv.INTER_AREA)
gray_img = cv.imread(filename, cv.IMREAD_GRAYSCALE)
plt.imshow(gray_img, cmap='gray')
plt.show()
faces_position = detect_faces_v2(face_detection_classifier, gray_img)
for x, y, w, h in faces_position:
face_img = gray_img[y:y+h, x:x+w]
face_resized_dimension = (60, 60)
face_img = cv.resize(face_img, face_resized_dimension, cv.INTER_AREA)
plt.imshow(face_img, cmap='gray')
plt.show()
facials_landmarks_position = facial_landmark_extraction_v2(extractor, face_img, (0,0,60,60))
face_img_copy = face_img.copy()
for (x, y) in facials_landmarks_position:
cv.circle(face_img_copy, (x,y), 1, (0, 255, 0), -1)
plt.imshow(face_img_copy, cmap='gray')
plt.show()
landmark_size = landmark_size_extraction_v2(facials_landmarks_position)
if landmark_size != None:
jaw_size, reb_size, leb_size, nose_size, re_size, le_size, mouth_size = landmark_size
print(jaw_size, reb_size, leb_size, nose_size, re_size, le_size, mouth_size)
else:
print('No Face Found')
# facials_landmarks_dict = facial_landmark_extraction(extractor, gray_img, faces_position)
# img_with_landmark = img_with_face_bboxes.copy()
# for face_idx in facials_landmarks_dict:
# for i, (x, y) in enumerate(facials_landmarks_dict[face_idx]):
# cv.circle(img_with_landmark, (x,y), 1, (0, 255, 0), -1)
# for x, y, w, h in faces_position:
# cropped_img = img_with_landmark[y:y+h, x:x+w]
# plt.imshow(cropped_img, cmap='gray')
# plt.show()
# landmarks_size = landmark_size_extraction(facials_landmarks_dict)
# if landmarks_size != None:
# jaw_size, reb_size, leb_size, nose_size, re_size, le_size, mouth_size = landmarks_size
# print(jaw_size, reb_size, leb_size, nose_size, re_size, le_size, mouth_size)
# else:
# print('No Facial Landmark Found')
check_person_image_keyframe('../person_data/1_1.png', haar_classifier, extractor)
check_person_image_keyframe('../person_data/2_40.png', haar_classifier, extractor)
check_person_image_keyframe('../person_data/2_42.png', haar_classifier, extractor)
check_person_image_keyframe('../person_data/2_6.png', haar_classifier, extractor)<jupyter_output><empty_output><jupyter_text>## Batch Function to Generate All Keyframe from All Image of a Person<jupyter_code>def generate_keyframe_from_frames(input_path_list, output_dir_path, face_detection_classifier, facial_landmark_extractor):
for file_path in input_path_list:
print(file_path)
check_person_image_keyframe(file_path, face_detection_classifier, facial_landmark_extractor)
input_path_list = os.listdir('../person_data/')
input_path_list = ['../person_data/' + element for element in input_path_list]
input_path_list = [element for element in input_path_list if '/2_' in element]
input_path_list = sorted(input_path_list)
input_path_list[0:2]
generate_keyframe_from_frames(input_path_list, 'hehe', haar_classifier, extractor)
input_path_list = os.listdir('../person_data/')
input_path_list = ['../person_data/' + element for element in input_path_list]
input_path_list = [element for element in input_path_list if '/12_' in element]
input_path_list = sorted(input_path_list)
input_path_list[0:2]
generate_keyframe_from_frames(input_path_list, 'hehe', haar_classifier, extractor)<jupyter_output>../person_data/12_0.png
<jupyter_text># Face Dataset Creation<jupyter_code>import os
haar_classifier = cv.CascadeClassifier('model_data/haarcascade_frontalface_alt.xml')
extractor = dlib.shape_predictor('model_data/shape_predictor_68_face_landmarks.dat')
person_img_dir_path = '../person_data'
person_img_path_list = os.listdir(person_img_dir_path)
person_img_path_list = sorted([os.path.join(person_img_dir_path, img_path) for img_path in person_img_path_list])
face_img_dir_path = '../face_data'
def create_face_dataset(person_id, person_img_path_list):
counter = 0
for idx, img_path in enumerate(person_img_path_list):
if ('/' + str(person_id) + '_' in img_path):
filename = img_path
print(filename)
person_img = cv.imread(filename)
img_with_face_bboxes, faces_position = detect_faces(haar_classifier, person_img)
if len(faces_position) != 0:
x, y, w, h = faces_position[0]
cropped_face_img = person_img[y:y+h, x:x+w]
cv.imwrite(os.path.join('../face_data', str(person_id) + '_' + str(counter) + '.png'), cropped_face_img)
print(counter)
counter += 1
# create_face_dataset(17, person_img_path_list)<jupyter_output><empty_output>
|
no_license
|
/notebook/face-keyframe-extraction.ipynb
|
huyquoctrinh/person-reid
| 6 |
<jupyter_start><jupyter_text># DrawResolution, version 2 for paper
- author : Sylvie Dagoret-Campagne
- affiliation : IJCLab/IN2P3/CNRS
- creation date : May 27th 2020
- update = June 2nd 2020
<jupyter_code>%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
import os
import matplotlib as mpl
import pandas as pd
import itertools
import matplotlib.gridspec as gridspec
from matplotlib.patches import Circle,Ellipse
# to enlarge the sizes
params = {'legend.fontsize': 'x-large',
'figure.figsize': (13, 13),
'axes.labelsize': 'x-large',
'axes.titlesize':'x-large',
'xtick.labelsize':'x-large',
'ytick.labelsize':'x-large',
'font.size': 14}
plt.rcParams.update(params)
from scipy.interpolate import interp1d<jupyter_output><empty_output><jupyter_text>## Constants for conversions<jupyter_code>m_to_mm=1000.
mm_to_m=1e-3
inch_to_mm=25.4
mm_to_inch=1./inch_to_mm
mm_to_micr=1e3
micr_to_m=1e-6
micr_to_mm=1e-3
m_to_micr=1./micr_to_m
m_to_cm=100.
m_to_nm=1e9
nm_to_m=1./m_to_nm
nm_to_micr=0.001
arcdeg_to_arcmin=60.
arcmin_to_arcdeg=1./arcdeg_to_arcmin
arcmin_to_arcsec=60.
arcdeg_to_arcsec=arcdeg_to_arcmin*arcmin_to_arcsec
arcsec_to_arcdeg=1./arcdeg_to_arcsec
deg_to_rad=np.pi/180.
rad_to_deg=1./deg_to_rad
rad_to_arcsec=rad_to_deg*arcdeg_to_arcsec
rad_to_arcmin=rad_to_deg*arcdeg_to_arcmin
arcmin_to_rad=1./rad_to_arcmin<jupyter_output><empty_output><jupyter_text>## Configuration parameters at the telescope#### telescope<jupyter_code>Tel_Focal_Length=20.6 # m : Focal length of the telescope
Tel_Diameter=1.2 # m : Diameter of the telescope
Tel_Fnum=Tel_Focal_Length/Tel_Diameter
plt_scale=206265/(Tel_Focal_Length*m_to_mm) # arcsec per mm<jupyter_output><empty_output><jupyter_text>#### filter<jupyter_code>Filt_D=0.200 # m distance of the filter position wrt CCD plane
Filt_size=3*inch_to_mm<jupyter_output><empty_output><jupyter_text>#### CCD detector<jupyter_code>Det_xpic=10.0 # microns per pixel
Det_NbPix=4096 # number of pixels per CCD side For 400 only
Det_size=Det_xpic*Det_NbPix*micr_to_mm # CCD size in mm, 5 cm or 2 inch
def find_nearest_idx(array, value):
array = np.asarray(array)
idx = (np.abs(array - value)).argmin()
return idx
def Dispersion(wl,a,D):
"""
"""
X=D/a*wl/np.sqrt(1-(wl/a)**2)
return X
def Dispersion_Rate(wl,a,D):
"""
Dispersion_Rate(wl) : number of dx per wavelength
input arguments:
- wl : wavelength
- a : line pitch
- D : Distance CCD-Hologram
recommended : all input arguments should be expressed in microns.
- output : dx/dlambda, x in microns and lambdas in microns
"""
dxdlambda=D/a*(np.sqrt(1-(wl/a)**2)+ (wl/a)**2)/(1-(wl/a)**2)
return dxdlambda
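# A sketch of where these formulas come from (assuming normal incidence and the first
# diffraction order): the grating equation gives sin(theta) = wl/a, so the spot on the
# CCD at distance D from the disperser sits at
#     x(wl) = D*tan(theta) = D*(wl/a)/np.sqrt(1-(wl/a)**2)        -> Dispersion
# Differentiating x(wl) gives the dispersion rate dx/dwl, which is what Dispersion_Rate
# returns (to very good approximation over the wavelength range used here, wl/a <~ 0.15).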
fig, ax1 = plt.subplots(figsize=(12,8))
WL=np.linspace(300.,1000.,100)
a=1/150.*mm_to_micr
D=200*mm_to_micr
Y1=Dispersion(WL*nm_to_micr,a,D)*micr_to_mm
ax1.plot(WL,Y1,"b",label="$x(\lambda)$")
ax1.set_xlabel("$\lambda$ (nm)")
ax1.set_ylabel("$x(\lambda)$ ($mm$)",color="blue")
ax1.set_title("Dispersion X")
ax1.legend(loc="upper left")
ax1.grid()
ax2 = ax1.twinx() # instantiate a second axes that shares the same x-axis
Y2=Dispersion_Rate(WL*nm_to_micr,a,D)*micr_to_mm
ax2.plot(WL, Y2,"r",label="$dx/d\lambda$")
ax2.set_xlabel("$\lambda$ (nm)")
ax2.set_ylabel("$dx/d\lambda$ ($mm/\mu m$)",color="red")
#ax2.set_title("Dispersion rate")
ax2.legend(loc="lower right")
#ax2.grid()
fig, ax1 = plt.subplots(figsize=(12,8))
WL=np.linspace(300.,1000.,100)
a=1/150.*mm_to_micr
D=200*mm_to_micr
Y1=Dispersion(WL*nm_to_micr,a,D)/Det_xpic # pixel
ax1.plot(WL,Y1,"b",label="$x(\lambda)$")
ax1.set_xlabel("$\lambda$ (nm)")
ax1.set_ylabel("$x(\lambda)$ (pixel)",color="blue")
ax1.set_title("Dispersion X")
ax1.legend(loc="upper left")
ax1.grid()
ax2 = ax1.twinx() # instantiate a second axes that shares the same x-axis
Y2=Dispersion_Rate(WL*nm_to_micr,a,D)/Det_xpic*1e-3 # pixel per nm
ax2.plot(WL, Y2,"r",label="$dx/d\lambda$")
ax2.set_xlabel("$\lambda$ (nm)")
ax2.set_ylabel("$dx/d\lambda$ ($pixel/nm$)",color="red")
#ax2.set_title("Dispersion rate")
ax2.legend(loc="lower right")
#ax2.grid()
fig, ax1 = plt.subplots(figsize=(12,8))
WL=np.linspace(300.,1000.,100)
a=1/150.*mm_to_micr
D=200*mm_to_micr
Y1=Dispersion(WL*nm_to_micr,a,D)/Det_xpic # pixel
Y1150=Dispersion(WL*nm_to_micr,1/150.*mm_to_micr,D)/Det_xpic # pixel
ax1.plot(WL,Y1,"b",label="Hologram")
ax1.plot(WL,Y1150,"b-.",label="Ronchi 150")
ax1.set_xlabel("$\lambda$ (nm)")
ax1.set_ylabel("$x(\lambda)$ (pixel)",color="blue")
ax1.set_title("Dispersion X")
ax1.legend(loc="upper left")
ax1.grid()
ax2 = ax1.twinx() # instantiate a second axes that shares the same x-axis
Y2=Dispersion_Rate(WL*nm_to_micr,a,D)/Det_xpic*1e-3 # pixel per nm
ax2.plot(WL, Y2,"r",label="Hologram")
Y2150=Dispersion_Rate(WL*nm_to_micr,1/150.*mm_to_micr,D)/Det_xpic*1e-3 # pixel per nm
ax2.plot(WL, Y2150,"r-.",label="Ronchi 150")
ax2.set_xlabel("$\lambda$ (nm)")
ax2.set_ylabel("$dx/d\lambda$ ($pixel/nm$)",color="red")
#ax2.set_title("Dispersion rate")
ax2.legend(loc="lower right")
#ax2.grid()<jupyter_output><empty_output><jupyter_text>## Input file<jupyter_code># number of rays
NBEAM_X=11
NBEAM_Y=11
NBEAM=NBEAM_X*NBEAM_Y
NWL=4
NBTOT=NBEAM*NWL
theta_x=0. # angle in arcmin
theta_y=0. # angle in arcmin
theta_x_num=int(theta_x*10)
theta_y_num=int(theta_y*10)
if theta_x_num>0:
theta_nstr='{:0>2}'.format(theta_x_num)
theta_x_str="p"+theta_nstr
else:
theta_nstr='{:0>2}'.format(-theta_x_num)
theta_x_str="m"+theta_nstr
if theta_y_num>0:
theta_nstr='{:0>2}'.format(theta_y_num)
theta_y_str="p"+theta_nstr
else:
theta_nstr='{:0>2}'.format(-theta_y_num)
theta_y_str="m"+theta_nstr
Beam4_Rayfile="Beam4_Rayfile_{:d}_allwl_{}_{}".format(NBTOT,theta_x_str,theta_y_str)
Beam4_Rayfile
order="OP1"
order_str="+1"
WL=np.array([400.,600.,800.,1000.])<jupyter_output><empty_output><jupyter_text># Read input files<jupyter_code>disperser_name=["Ronchi 150","Hologram"]
disperser_color=["b","r"]
infileR150data_excel="R150_PSF_"+ Beam4_Rayfile+"_out.xlsx"
infileHOEdata_excel="HOE_PSF_"+ Beam4_Rayfile+"_out.xlsx"
all_files=[infileR150data_excel,infileHOEdata_excel]
all_df=[]
for file in all_files:
df = pd.read_excel(file,index_col=0)
all_df.append(df)
df
all_reso= []
for df in all_df:
all_reso.append(df["xstd"].values)
all_reso<jupyter_output><empty_output><jupyter_text># Resolution## Interpolate<jupyter_code>Xint=np.linspace(WL[0],WL[-1],200)
all_Yint_mm=[]
all_Yint_arcsec=[]
for reso in all_reso:
f = interp1d(WL, reso, kind='cubic')
Yint=f(Xint)
all_Yint_mm.append(Yint)
all_Yint_arcsec.append(Yint*plt_scale)<jupyter_output><empty_output><jupyter_text>## Plot <jupyter_code># Tableau Colors from the 'T10' categorical palette (the default color cycle):
#{'tab:blue', 'tab:orange', 'tab:green', 'tab:red', 'tab:purple', 'tab:brown', 'tab:pink', 'tab:gray', 'tab:olive',
#'tab:cyan'} (case-insensitive);
fig, ax1 = plt.subplots(figsize=(12,8))
#color = 'tab:red'
color = 'red'
idx=0
for Yint in all_Yint_mm:
if idx==2:
idx+=1
continue
ax1.plot(WL,all_reso[idx],"o",color=disperser_color[idx])
ax1.plot(Xint,Yint,'-',label=disperser_name[idx],color=disperser_color[idx])
idx+=1
ax1.set_title(" Beam spot size (dispersion axis) ")
ax1.set_xlabel("$\lambda$")
ax1.set_ylabel("$\sigma_x$ (mm)",color=color)
ax1.grid()
ax1.legend()
ax1.tick_params(axis='y', labelcolor=color)
ax2 = ax1.twinx() # instantiate a second axes that shares the same x-axis
#color = 'tab:blue'
color = 'blue'
idx=0
for Yint in all_Yint_arcsec:
if idx==2:
idx+=1
continue
ax2.plot(WL,all_reso[idx]*plt_scale,"o",color=disperser_color[idx])
ax2.plot(Xint,Yint,'-',label=disperser_name[idx],color=disperser_color[idx])
idx+=1
ax2.axhline(y=1./2.36,color="grey")
ax2.set_ylabel('$\sigma_x$ (arcsec)', color=color) # we already handled the x-label with ax1
#ax2.plot(t, data2, color=color)
ax2.tick_params(axis='y', labelcolor=color)
fig.tight_layout() # otherwise the right y-label is slightly clipped
plt.savefig("RESOLUTION_PSF_SIGMA_AUXTEL_v2.pdf")
plt.show()
fig, ax1 = plt.subplots(figsize=(12,8))
#color = 'tab:red'
color = 'red'
idx=0
for Yint in all_Yint_mm:
if idx==2:
idx+=1
continue
ax1.plot(WL,all_reso[idx]*2.36,"o",color=disperser_color[idx])
ax1.plot(Xint,Yint*2.36,'-',label=disperser_name[idx],color=disperser_color[idx])
idx+=1
ax1.set_title(" Beam spot size (dispersion axis) ")
ax1.set_xlabel("$\lambda$")
ax1.set_ylabel("$fwhm_x$ (mm)",color=color)
ax1.grid()
ax1.legend()
ax1.tick_params(axis='y', labelcolor=color)
ax2 = ax1.twinx() # instantiate a second axes that shares the same x-axis
#color = 'tab:blue'
color = 'blue'
idx=0
for Yint in all_Yint_arcsec:
if idx==2:
idx+=1
continue
ax2.plot(WL,all_reso[idx]*plt_scale*2.36,"o",color=disperser_color[idx])
ax2.plot(Xint,Yint*2.36,'-',label=disperser_name[idx],color=disperser_color[idx])
idx+=1
ax2.axhline(y=1.,color="grey",lw=3)
ax2.set_ylabel('$fwhm_x$ (arcsec)', color=color) # we already handled the x-label with ax1
#ax2.plot(t, data2, color=color)
ax2.tick_params(axis='y', labelcolor=color)
fig.tight_layout() # otherwise the right y-label is slightly clipped
plt.savefig("RESOLUTION_PSF_FWHM_AUXTEL_v2.pdf")
plt.show()
idx=find_nearest_idx(Xint,750)
Yint[idx]
Yint[idx]*16.4/1000
idx=find_nearest_idx(Xint,1000)
Yint[idx]
Yint[idx]*16.4/1000<jupyter_output><empty_output><jupyter_text>## $R(\lambda)$<jupyter_code>#WL
#Xint
fig, ax1 = plt.subplots(figsize=(12,8))
SEEING_ARCSEC=1.0/2.36 # arcsec
SEEING_MM=SEEING_ARCSEC/plt_scale # sigma Seeing in mm
a=1/150.
color = 'blue'
idx=0
for Yint in all_Yint_mm:
Y1=WL*micr_to_mm/all_reso[idx]*Dispersion_Rate(WL*nm_to_micr,a*mm_to_micr,D=58*mm_to_micr)*micr_to_mm
Y2=Xint*micr_to_mm/Yint*Dispersion_Rate(Xint*nm_to_micr,a*mm_to_micr,D=58*mm_to_micr)*micr_to_mm
ax1.plot(WL,Y1,"o",color=disperser_color[idx])
ax1.plot(Xint,Y2,'-',label=disperser_name[idx],color=disperser_color[idx])
idx+=1
# seeing
Y3=Xint*micr_to_mm/SEEING_MM*Dispersion_Rate(Xint*nm_to_micr,1./350.*mm_to_micr,D=58*mm_to_micr)*micr_to_mm
ax1.plot(Xint,Y3,'-',label="seeing 1 arcsec (FWHM)",color="grey")
ax1.set_title(" Spectrometric Resolution : $R(\lambda)= \lambda/\sigma(\lambda)$")
ax1.set_xlabel("$\lambda$")
ax1.set_ylabel("$R(\\lambda)$",color=color)
ax1.grid()
ax1.legend()
ax1.tick_params(axis='y', labelcolor=color)
fig.tight_layout() # otherwise the right y-label is slightly clipped
plt.savefig("RLAMBDA_PSF_AUXTEL_v2.pdf")
plt.yscale("log")
plt.show()<jupyter_output><empty_output>
|
no_license
|
/newAna2020/DrawPSF/notebooks/DrawResolution_v2.ipynb
|
sylvielsstfr/HOLOSPEC_B4
| 10 |
<jupyter_start><jupyter_text>## Seaborn basics<jupyter_code>import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
sns.set()
sns.__version__<jupyter_output><empty_output><jupyter_text>#### Tips data<jupyter_code>df = sns.load_dataset('tips')
df.head()
df.info()
df.describe().T
df.describe(include=['category']).T
_ = sns.relplot(data=df, x='total_bill', y='tip')
sns.set(font_scale = 1.2)
_ = sns.relplot(data=df, x='total_bill', y='tip')
_ = sns.relplot(data=df, x='total_bill', y='tip', size='size', hue='size')
_ = sns.relplot(data=df, x='total_bill', y='tip', size='size', hue='size', col='time')
_ = sns.relplot(data=df, x='total_bill', y='tip', size='size', hue='size', row='time')
_ = sns.relplot(data=df, x='total_bill', y='tip', size='size', hue='size', col='time', row='smoker')
_ = sns.catplot(data=df, x='day', y='total_bill')
_ = sns.catplot(data=df, x='day', y='total_bill', kind='swarm')
_ = sns.catplot(data=df, x='day', y='total_bill', kind='box')
_ = sns.catplot(data=df, x='day', y='total_bill', kind='violin')
_ = sns.catplot(data=df, x='day', y='total_bill', kind='bar')
_ = sns.catplot(data=df, x='day', kind='count')<jupyter_output><empty_output><jupyter_text>#### Titanic data<jupyter_code>df = sns.load_dataset('titanic')
df.head()
_ = sns.catplot(data=df, x='deck', kind='count', palette='Blues')
_ = sns.catplot(data=df, y='deck', kind='count', palette='Blues')<jupyter_output><empty_output><jupyter_text>#### Iris data<jupyter_code>df = sns.load_dataset('iris')
df.head()
df.info()
sns.pairplot(data=df)
sns.pairplot(data=df, hue='species')<jupyter_output><empty_output>
|
no_license
|
/Plots/02_seaborn.ipynb
|
MarcinDylong/DS
| 4 |
<jupyter_start><jupyter_text># Financial Inclusion in Africa Starter Notebook
This is a simple starter notebook to get started with the Financial Inclusion Competition on Zindi.
This notebook covers:
- Loading the data
- Simple EDA and an example of feature engineering
- Data preprocessing and data wrangling
- Creating a simple model
- Making a submission
- Some tips for improving your score
### Importing libraries<jupyter_code># dataframe and plotting
import pandas as pd
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
# machine learning
from lightgbm import LGBMClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score
from google.colab import files
import warnings
warnings.filterwarnings('ignore')<jupyter_output><empty_output><jupyter_text>### Read files<jupyter_code># Load files into a pandas dataframe
train = pd.read_csv('Train.csv')
test = pd.read_csv('Test.csv')
ss = pd.read_csv('SampleSubmission.csv')
variables = pd.read_csv('VariableDefinitions.csv')<jupyter_output><empty_output><jupyter_text>### Some basic EDA<jupyter_code># Let's view the variables
variables
# Preview the first five rows of the train set
train.head()
# Preview the first five rows of the test set
test.head()
# Preview the first five rows of the sample submission file
ss.head()
# Check the shape of the train and test sets
print(f'The shape of the train set is: {train.shape}\nThe shape of the test set is: {test.shape}')<jupyter_output>The shape of the train set is: (23524, 13)
The shape of the test set is: (10086, 12)
<jupyter_text>## Combine train and test set for easy preprocessing <jupyter_code># mapping the bank account with 0 to NO and 1 to YES
train['bank_account'] = train['bank_account'].map({'No':0, 'Yes':1})
# Combine train and test set
ntrain = train.shape[0] # to be used to split train and test set from the combined dataframe
all_data = pd.concat((train, test)).reset_index(drop=True)
print(f'The shape of the combined dataframe is: {all_data.shape}')
# Preview the last five rows of the combined dataframe
all_data.tail()
# Check the column names and datatypes
all_data.info()<jupyter_output><class 'pandas.core.frame.DataFrame'>
RangeIndex: 33610 entries, 0 to 33609
Data columns (total 13 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 country 33610 non-null object
1 year 33610 non-null int64
2 uniqueid 33610 non-null object
3 bank_account 23524 non-null float64
4 location_type 33610 non-null object
5 cellphone_access 33610 non-null object
6 household_size 33610 non-null int64
7 age_of_respondent 33610 non-null int64
8 gender_of_respondent 33610 non-null object
9 relationship_with_head 33610 non-null object
10 marital_status 33610 non-null object
11 education_level 33610 non-null object
12 job_type 33610 non-null object
dtypes: float64(1), int64(3), object(9)
memory usage: 3.3+ MB
<jupyter_text>### Distribution of the target variable<jupyter_code>sns.countplot(train.bank_account)
plt.title('Target Distribution', fontdict={'size':14});
sns.countplot(train.location_type)
plt.title('Location Distribution', fontdict={'size':14});
sns.countplot(train.cellphone_access)
plt.title('Cellphone Access Distribution', fontdict={'size':14});<jupyter_output><empty_output><jupyter_text>Here we see the overall distribution for the whole train set. Can you see if there are any differences due to country?### Number of unique values per categorical column<jupyter_code># Check unique values for each categorical column
cat_cols = ['country', 'location_type', 'cellphone_access', 'gender_of_respondent', 'relationship_with_head', 'marital_status', 'education_level', 'job_type']
for col in cat_cols:
print(col)
print(all_data[col].unique(), '\n')<jupyter_output>country
['Kenya' 'Rwanda' 'Tanzania' 'Uganda']
location_type
['Rural' 'Urban']
cellphone_access
['Yes' 'No']
gender_of_respondent
['Female' 'Male']
relationship_with_head
['Spouse' 'Head of Household' 'Other relative' 'Child' 'Parent'
'Other non-relatives']
marital_status
['Married/Living together' 'Widowed' 'Single/Never Married'
'Divorced/Seperated' 'Dont know']
education_level
['Secondary education' 'No formal education'
'Vocational/Specialised training' 'Primary education'
'Tertiary education' 'Other/Dont know/RTA']
job_type
['Self employed' 'Government Dependent' 'Formally employed Private'
'Informally employed' 'Formally employed Government'
'Farming and Fishing' 'Remittance Dependent' 'Other Income'
'Dont Know/Refuse to answer' 'No Income']
<jupyter_text>### Feature Engineering
#### Try different strategies of dealing with categorical variables
Tips:
- One hot encoding
- Label encoding
- Target encoding
- Reduce the number of unique values...<jupyter_code># Encode categorical features
all_data = pd.get_dummies(data = all_data, columns = cat_cols)
all_data.head()
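# --- Alternative encoding strategy (illustrative sketch, not used in the rest of this notebook) ---
# The tips above also mention label encoding; a minimal example with scikit-learn's
# LabelEncoder applied to a copy of the raw train set could look like this:
from sklearn.preprocessing import LabelEncoder
label_encoded_example = train.copy()  # hypothetical variable, for illustration only
for col in cat_cols:
    label_encoded_example[col] = LabelEncoder().fit_transform(label_encoded_example[col].astype(str))
label_encoded_example.head()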
# Separate train and test data from the combined dataframe
train_df = all_data[:ntrain]
test_df = all_data[ntrain:]
# Check the shapes of the split dataset
train_df.shape, test_df.shape
# Select main columns to be used in training
#main_cols = all_data.columns.difference(date_cols+['ID', 'bank_account'])
main_cols = all_data.columns.difference(['uniqueid', 'bank_account'])
X = train_df[main_cols]
y = train_df['bank_account']
# Split data into train and test sets
X_train, X_test, y_train, y_test = train_test_split(X,y,test_size=0.3, random_state=42)
!pip install catboost
import catboost
from catboost import *
from catboost import CatBoostClassifier
cat_model = CatBoostClassifier(
iterations=10,
learning_rate=0.1,
loss_function='CrossEntropy'
)
cat_model.fit(
X_train, y_train,
eval_set=(X_test, y_test),
verbose=False
)
print('Model is fitted: ' + str(cat_model.is_fitted()))
print('Model params:')
print(cat_model.get_params())
from catboost import CatBoostClassifier
cat_model = CatBoostClassifier(
iterations=100,
verbose=5,
)
cat_model.fit(
X_train, y_train,
eval_set=(X_test, y_test),
)<jupyter_output>Learning rate set to 0.172348
0: learn: 0.5413438 test: 0.5388991 best: 0.5388991 (0) total: 6.4ms remaining: 634ms
5: learn: 0.3346546 test: 0.3309552 best: 0.3309552 (5) total: 40.4ms remaining: 632ms
10: learn: 0.2955268 test: 0.2933342 best: 0.2933342 (10) total: 72.5ms remaining: 587ms
15: learn: 0.2827391 test: 0.2822808 best: 0.2822808 (15) total: 105ms remaining: 554ms
20: learn: 0.2771404 test: 0.2788672 best: 0.2788672 (20) total: 137ms remaining: 517ms
25: learn: 0.2731193 test: 0.2766871 best: 0.2766871 (25) total: 169ms remaining: 480ms
30: learn: 0.2705501 test: 0.2762332 best: 0.2762332 (30) total: 202ms remaining: 451ms
35: learn: 0.2682345 test: 0.2754577 best: 0.2754577 (35) total: 234ms remaining: 416ms
40: learn: 0.2661468 test: 0.2754167 best: 0.2753388 (39) total: 265ms remaining: 381ms
45: learn: 0.2643278 test: 0.2751042 best: 0.2750827 (43) total: 299ms remaining: 351ms
50: learn: 0.2626602 test: 0.2747720 best: 0.2747720 (50) total: 331ms remaining: 318ms
55: [...]<jupyter_text>### Making predictions of the test set and creating a submission file<jupyter_code># Make prediction on the test set
test_df = test_df[main_cols]
predictions = cat_model.predict(test_df)
# Create a submission file
sub_file = ss.copy()
sub_file.predictions = predictions
# Check the distribution of your predictions
sns.countplot(sub_file.predictions);
# Create a csv file and upload to zindi
sub_file.to_csv('Baseline.csv', index = False)
files.download('Baseline.csv') <jupyter_output><empty_output>
|
no_license
|
/my_first_sub.ipynb
|
mohmaed7777/IndabaX-Sudan-2021
| 8 |
<jupyter_start><jupyter_text>[URL](https://www.tensorflow.org/tutorials/images/transfer_learning)
<jupyter_code>import os
import numpy as np
import matplotlib.pyplot as plt
import tensorflow as tf
import tensorflow_datasets as tfds
tfds.disable_progress_bar()
(raw_train, raw_validation, raw_test), metadata = tfds.load(
'cats_vs_dogs',
split = ['train[:80%]', 'train[80%:90%]', 'train[90%:]'],
with_info = True,
as_supervised=True
)
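# The three slices above carve the single 'cats_vs_dogs' 'train' split into
# 80% / 10% / 10% sub-splits, used below as the train / validation / test sets.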
print(raw_train)
print(raw_validation)
print(raw_test)
get_label_name = metadata.features['label'].int2str
for image, label in raw_train.take(2):
plt.figure()
plt.imshow(image)
plt.title(get_label_name(label))
for i in raw_train.shuffle(10).take(1):
print(i[1])
plt.imshow(i[0])
IMG_SIZE = 160
def format_example(image, label):
image = tf.cast(image, tf.float32)
image = (image/127.5)-1
image = tf.image.resize(image, (IMG_SIZE, IMG_SIZE))
return image, label
train = raw_train.map(format_example)
validation = raw_validation.map(format_example)
test = raw_test.map(format_example)
BATCH_SIZE = 32
SHUFFLE_BUFFER_SIZE = 1000
train_batches = train.shuffle(SHUFFLE_BUFFER_SIZE).batch(BATCH_SIZE)
validation_batches = validation.batch(BATCH_SIZE)
test_batches = test.batch(BATCH_SIZE)
for image_batch, label_batch in train_batches.take(1):
pass
image_batch.shape
IMG_SHAPE = (IMG_SIZE, IMG_SIZE, 3)
base_model = tf.keras.applications.MobileNetV2(
input_shape = IMG_SHAPE, include_top = False,
weights ='imagenet')
feature_batch = base_model(image_batch)
print(feature_batch.shape)
# Freeze the conv base
base_model.trainable = False
base_model.summary()
# Generate prediction from the block of features
global_average_layer = tf.keras.layers.GlobalAveragePooling2D()
feature_batch_average = global_average_layer(feature_batch)
print(feature_batch_average.shape)
# Convert these features into a single prediction per image
prediction_layer = tf.keras.layers.Dense(1)
prediction_batch = prediction_layer(feature_batch_average)
print(prediction_batch.shape)
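# Note: this Dense(1) head outputs raw logits (no sigmoid activation); accordingly the
# loss used when compiling the model below is BinaryCrossentropy(from_logits=True).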
model = tf.keras.Sequential([base_model,
global_average_layer,
prediction_layer])
base_learning_rate = 0.0001
model.compile(optimizer=tf.keras.optimizers.RMSprop(lr=base_learning_rate),
loss=tf.keras.losses.BinaryCrossentropy(from_logits=True),
metrics=['accuracy'])
model.summary()
# Number of trainable layers
len(model.trainable_variables)
initial_epochs = 10
validation_steps = 20
loss0,accuracy0 = model.evaluate(validation_batches, steps = validation_steps)
print("initial loss: {:.2f}".format(loss0))
print("initial accuracy: {:.2f}".format(accuracy0))
DESIRED_ACCURACY = 0.9799
class myCallback(tf.keras.callbacks.Callback):
def on_epoch_end(self, epoch, logs={}):
if(logs.get('val_accuracy')>DESIRED_ACCURACY):
print("\nReached 97,99% accuracy so cancelling training!")
self.model.stop_training = True
callbacks = myCallback()
history = model.fit(train_batches,
epochs=initial_epochs,
validation_data=validation_batches,
callbacks = [callbacks])
acc = history.history['accuracy']
val_acc = history.history['val_accuracy']
loss = history.history['loss']
val_loss = history.history['val_loss']
plt.figure(figsize=(8, 8))
plt.subplot(2, 1, 1)
plt.plot(acc, label='Training Accuracy')
plt.plot(val_acc, label='Validation Accuracy')
plt.legend(loc='lower right')
plt.ylabel('Accuracy')
plt.ylim([min(plt.ylim()),1])
plt.title('Training and Validation Accuracy')
plt.subplot(2, 1, 2)
plt.plot(loss, label='Training Loss')
plt.plot(val_loss, label='Validation Loss')
plt.legend(loc='upper right')
plt.ylabel('Cross Entropy')
plt.ylim([0,1.0])
plt.title('Training and Validation Loss')
plt.xlabel('epoch')
plt.show()
# Fine Tuning
# Unfreeze some layers of base_model so that their weights can be fine-tuned
base_model.trainable = True
print("Number of layers in the base model:", len(base_model.layers))
fine_tune = 100
for layer in base_model.layers[:fine_tune]:
layer.trainable = False
# only train 55 last layers of base_model
model.compile(loss=tf.keras.losses.BinaryCrossentropy(from_logits=True),
optimizer = tf.keras.optimizers.RMSprop(lr=base_learning_rate/10),
metrics=['accuracy'])
model.summary()
# Number of trainable layers
len(model.trainable_variables)
DESIRED_ACCURACY = 0.99
class myCallback(tf.keras.callbacks.Callback):
def on_epoch_end(self, epoch, logs={}):
if(logs.get('val_accuracy')>DESIRED_ACCURACY):
print("\nReached 99% accuracy so cancelling training!")
self.model.stop_training = True
callbacks = myCallback()
fine_tune_epochs = 10
total_epochs = initial_epochs + fine_tune_epochs
history_fine = model.fit(train_batches,
epochs=total_epochs,
initial_epoch = history.epoch[-1],
validation_data=validation_batches,
callbacks = [callbacks])
acc += history_fine.history['accuracy']
val_acc += history_fine.history['val_accuracy']
loss += history_fine.history['loss']
val_loss += history_fine.history['val_loss']
plt.figure(figsize=(8, 8))
plt.subplot(2, 1, 1)
plt.plot(acc, label='Training Accuracy')
plt.plot(val_acc, label='Validation Accuracy')
plt.ylim([0.8, 1])
plt.plot([initial_epochs-1,initial_epochs-1],
plt.ylim(), label='Start Fine Tuning')
plt.legend(loc='lower right')
plt.title('Training and Validation Accuracy')
plt.subplot(2, 1, 2)
plt.plot(loss, label='Training Loss')
plt.plot(val_loss, label='Validation Loss')
plt.ylim([0, 1.0])
plt.plot([initial_epochs-1,initial_epochs-1],
plt.ylim(), label='Start Fine Tuning')
plt.legend(loc='upper right')
plt.title('Training and Validation Loss')
plt.xlabel('epoch')
plt.show()
<jupyter_output><empty_output>
|
no_license
|
/Transfer_Learning_TF.ipynb
|
PQ-Trung/hello-world-1
| 1 |
<jupyter_start><jupyter_text># Occupation
### Introduction:
Special thanks to: https://github.com/justmarkham for sharing the dataset and materials.
### Step 1. Import the necessary libraries<jupyter_code>import pandas as pd<jupyter_output><empty_output><jupyter_text>### Step 2. Import the dataset from this [address](https://raw.githubusercontent.com/justmarkham/DAT8/master/data/u.user). ### Step 3. Assign it to a variable called users.<jupyter_code>users = pd.read_csv('https://raw.githubusercontent.com/justmarkham/DAT8/master/data/u.user', sep='|')
users.head()<jupyter_output><empty_output><jupyter_text>### Step 4. Discover what is the mean age per occupation<jupyter_code>users.groupby('occupation')['age'].mean()<jupyter_output><empty_output><jupyter_text>### Step 5. Discover the Male ratio per occupation and sort it from the most to the least<jupyter_code>def man(x):
if x == 'M':
return 1
else:
return 0
users['gender_n'] = users['gender'].apply(man)
a = (users.groupby('occupation')['gender_n'].sum() / users['occupation'].value_counts() * 100).sort_values(ascending=False)
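# An equivalent one-liner for the male ratio (illustrative alternative, not used for the answer below):
# (users['gender'].eq('M').groupby(users['occupation']).mean() * 100).sort_values(ascending=False)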
a<jupyter_output><empty_output><jupyter_text># Step 6. For each occupation, calculate the minimum and maximum ages<jupyter_code>users.groupby('occupation')['age'].agg(['min','max'])<jupyter_output><empty_output><jupyter_text>### Step 7. For each combination of occupation and gender, calculate the mean age<jupyter_code>users.groupby(['occupation', 'gender'])['age'].mean()<jupyter_output><empty_output><jupyter_text>### Step 8. For each occupation present the percentage of women and men<jupyter_code>a = users.groupby(['occupation', 'gender']).agg({'gender':'count'})
b = users.groupby('occupation').count()
percentage = a.div(b, level='occupation') * 100
percentage['gender']<jupyter_output><empty_output>
|
permissive
|
/03_Grouping/Occupation/Exercise.ipynb
|
NazmiDere/pandas_exercises
| 7 |
<jupyter_start><jupyter_text># GPyOpt: Modular Bayesian Optimization
### Written by Javier Gonzalez, Amazon Research Cambridge
*Last updated, July 2017.*
In the [Introduction Bayesian Optimization GPyOpt](./GPyOpt_reference_manual.ipynb) we showed how GPyOpt can be used to solve optimization problems with some basic functionalities. The object
```
GPyOpt.methods.BayesianOptimization
```
is used to initialize the desired functionalities, such as the acquisition function, the initial design or the model. In some cases we want to have control over those objects and we may want to replace some element in the loop without having to integrate the new elements in the base code framework. This is now possible through the modular implementation of the package using the
```
GPyOpt.methods.ModularBayesianOptimization
```
class. In this notebook we are going to show how to use the backbone of GPyOpt to run a Bayesian optimization algorithm in which we will use our own acquisition function. In particular we are going to use the Expected Improvement integrated over the jitter parameter. That is
$$acqu_{IEI}(x;\{x_n,y_n\},\theta) = \int acqu_{EI}(x;\{x_n,y_n\},\theta,\psi) p(\psi;a,b)d\psi $$
where $p(\psi;a,b)$ is, in this example, the distribution [$Beta(a,b)$](http://docs.scipy.org/doc/numpy-1.10.0/reference/generated/numpy.random.beta.html).
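In the implementation below the integral is approximated by simple Monte Carlo over jitter values drawn once at construction time,
$$acqu_{IEI}(x;\{x_n,y_n\},\theta) \approx \frac{1}{N}\sum_{k=1}^{N} acqu_{EI}(x;\{x_n,y_n\},\theta,\psi_k), \qquad \psi_k \sim Beta(a,b),$$
where $N$ is the number of samples (`num_samples`).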
This acquisition is not available in GPyOpt, but we will implement it and use it in this notebook. The same can be done for other models, acquisition optimizers, etc.
As usual, we start loading GPy and GPyOpt.<jupyter_code>%pylab inline
import GPyOpt
import GPy<jupyter_output><empty_output><jupyter_text>In this example we will use the Branin function as a test case.<jupyter_code># --- Function to optimize
func = GPyOpt.objective_examples.experiments2d.branin()
func.plot()<jupyter_output><empty_output><jupyter_text>Because we won't use the pre-implemented wrapper, we need to create the classes for each element of the optimization. In total we need to create:
* Class for the **objective function**,<jupyter_code>objective = GPyOpt.core.task.SingleObjective(func.f)<jupyter_output><empty_output><jupyter_text>* Class for the **design space**,<jupyter_code>space = GPyOpt.Design_space(space =[{'name': 'var_1', 'type': 'continuous', 'domain': (-5,10)},
{'name': 'var_2', 'type': 'continuous', 'domain': (1,15)}])<jupyter_output><empty_output><jupyter_text>* Class for the **model type**,<jupyter_code>model = GPyOpt.models.GPModel(optimize_restarts=5,verbose=False)<jupyter_output><empty_output><jupyter_text>* Class for the **acquisition optimizer**,<jupyter_code>aquisition_optimizer = GPyOpt.optimization.AcquisitionOptimizer(space)<jupyter_output><empty_output><jupyter_text>* Class for the **initial design**,<jupyter_code>initial_design = GPyOpt.experiment_design.initial_design('random', space, 5)<jupyter_output><empty_output><jupyter_text>* Class for the **acquisition function**. Because we want to use our own acquisition, we need to implement a class to handle it. We will use the currently available Expected Improvement to create an integrated version over the jitter parameter. Samples will be generated using a beta distribution with parameters a and b as it is done using the default [numpy beta function](http://docs.scipy.org/doc/numpy-1.10.0/reference/generated/numpy.random.beta.html).<jupyter_code>from GPyOpt.acquisitions.base import AcquisitionBase
from GPyOpt.acquisitions.EI import AcquisitionEI
from numpy.random import beta
class jitter_integrated_EI(AcquisitionBase):
analytical_gradient_prediction = True
def __init__(self, model, space, optimizer=None, cost_withGradients=None, par_a=1, par_b=1, num_samples= 10):
super(jitter_integrated_EI, self).__init__(model, space, optimizer)
self.par_a = par_a
self.par_b = par_b
self.num_samples = num_samples
self.samples = beta(self.par_a,self.par_b,self.num_samples)
self.EI = AcquisitionEI(model, space, optimizer, cost_withGradients)
def acquisition_function(self,x):
acqu_x = np.zeros((x.shape[0],1))
for k in range(self.num_samples):
self.EI.jitter = self.samples[k]
acqu_x +=self.EI.acquisition_function(x)
return acqu_x/self.num_samples
def acquisition_function_withGradients(self,x):
acqu_x = np.zeros((x.shape[0],1))
acqu_x_grad = np.zeros(x.shape)
for k in range(self.num_samples):
self.EI.jitter = self.samples[k]
acqu_x_sample, acqu_x_grad_sample =self.EI.acquisition_function_withGradients(x)
acqu_x += acqu_x_sample
acqu_x_grad += acqu_x_grad_sample
return acqu_x/self.num_samples, acqu_x_grad/self.num_samples<jupyter_output><empty_output><jupyter_text>Now we initialize the class for this acquisition and we plot the histogram of the used samples to integrate the acquisition.<jupyter_code>acquisition = jitter_integrated_EI(model, space, optimizer=aquisition_optimizer, par_a=1, par_b=10, num_samples=200)
xx = plt.hist(acquisition.samples,bins=50)<jupyter_output><empty_output><jupyter_text>* Finally we create the class for the **type of evaluator**,<jupyter_code># --- CHOOSE a collection method
evaluator = GPyOpt.core.evaluators.Sequential(acquisition)<jupyter_output><empty_output><jupyter_text>With all the classes in place, including the one we have created for this example, we can now create the **Bayesian optimization object**.<jupyter_code>bo = GPyOpt.methods.ModularBayesianOptimization(model, space, objective, acquisition, evaluator, initial_design)<jupyter_output><empty_output><jupyter_text>And we run the optimization.<jupyter_code>max_iter = 10
bo.run_optimization(max_iter = max_iter) <jupyter_output><empty_output><jupyter_text>We plot the acquisition and the diagnostic plots.<jupyter_code>bo.plot_acquisition()
bo.plot_convergence()<jupyter_output><empty_output>
|
no_license
|
/manual/GPyOpt_modular_bayesian_optimization.ipynb
|
LinfuYang/GPyOpt_biaozhun
| 13 |