path (stringlengths 8-204) | content_id (stringlengths 40) | detected_licenses (list) | license_type (stringclasses, 2 values) | repo_name (stringlengths 8-100) | repo_url (stringlengths 27-119) | star_events_count (int64, 0-6.26k) | fork_events_count (int64, 0-3.52k) | gha_license_id (stringclasses, 10 values) | gha_event_created_at (timestamp[ns]) | gha_updated_at (timestamp[ns]) | gha_language (stringclasses, 12 values) | language (stringclasses, 1 value) | is_generated (bool, 1 class) | is_vendor (bool, 1 class) | conversion_extension (stringclasses, 6 values) | size (int64, 172-10.2M) | script (stringlengths 367-7.46M) | script_size (int64, 367-7.46M) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
/assignments/assignment04/MatplotlibEx01.ipynb | a3eb3a914b72d1d6e03db531669e897479770acc | ["MIT"] | permissive | aschaffn/phys202-2015-work | https://github.com/aschaffn/phys202-2015-work | 0 | 0 | null | null | null | null | Jupyter Notebook | false | false | .py | 221,320 |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.15.2
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] nbgrader={}
# # Matplotlib Exercise 1
# + [markdown] nbgrader={}
# ## Imports
# + nbgrader={}
# %matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
# + [markdown] nbgrader={}
# ## Line plot of sunspot data
# + [markdown] nbgrader={}
# Download the `.txt` data for the "Yearly mean total sunspot number [1700 - now]" from the [SILSO](http://www.sidc.be/silso/datafiles) website. Upload the file to the same directory as this notebook.
# + deletable=false nbgrader={"checksum": "7f8ea13f251ef02c216ed08cad6516a7", "grade": true, "grade_id": "matplotlibex01a", "points": 1}
import os
assert os.path.isfile('yearssn.dat')
# + [markdown] nbgrader={}
# Use `np.loadtxt` to read the data into a NumPy array called `data`. Then create two new 1d NumPy arrays named `year` and `ssc` that hold the sequences of years and sunspot counts.
# + deletable=false nbgrader={"checksum": "6cff4e8e53b15273846c3aecaea84a3d", "solution": true}
data = np.loadtxt('yearssn.dat')
year = np.array(data[:,0])
ssc = np.array(data[:,1])
# + deletable=false nbgrader={"checksum": "487fbe3f8889876c782a18756175d727", "grade": true, "grade_id": "matplotlibex01b", "points": 1}
assert len(year)==315
assert year.dtype==np.dtype(float)
assert len(ssc)==315
assert ssc.dtype==np.dtype(float)
# + [markdown] nbgrader={}
# Make a line plot showing the sunspot count as a function of year.
#
# * Customize your plot to follow Tufte's principles of visualizations.
# * Adjust the aspect ratio/size so that the steepest slope in your plot is *approximately* 1.
# * Customize the box, grid, spines and ticks to match the requirements of this data.
# + deletable=false nbgrader={"checksum": "6cff4e8e53b15273846c3aecaea84a3d", "solution": true}
plt.figure(figsize=(100,5)) # very wide figure so the steepest slope is approximately 1
plt.plot(year, ssc, '-')
plt.xlabel('Year')
plt.ylabel('Number of Sunspots')
#plt.ylim(-5,5)
plt.title('Sunspots since 1700'); # suppress text output
plt.xlim(right = 2020)
plt.xticks(range(1700, 2000, 10))
max(year)
# + deletable=false nbgrader={"checksum": "d7cdb9758e069eb5f0d1c1b4c4f56668", "grade": true, "grade_id": "matplotlibex01c", "points": 3}
assert True # leave for grading
# + [markdown] nbgrader={}
# Describe the choices you have made in building this visualization and how they make it effective.
# + [markdown] deletable=false nbgrader={"checksum": "89c49052b770b981791536f5c2b07e13", "grade": true, "grade_id": "matplotlibex01d", "points": 1, "solution": true}
# Choices influencing the plot:
#
# * Axes are labeled
# * Plot has a meaningful title
# * A lineplot was chosen since we would want to visualize changes over time
# * It's tough to fit the plot with the desired aspect ratio on a small window
# + [markdown] nbgrader={}
# Now make 4 subplots, one for each century in the data set. This approach works well for this dataset as it allows you to maintain mild slopes while limiting the overall width of the visualization. Perform similar customizations as above:
#
# * Customize your plot to follow Tufte's principles of visualizations.
# * Adjust the aspect ratio/size so that the steepest slope in your plot is *approximately* 1.
# * Customize the box, grid, spines and ticks to match the requirements of this data.
# + deletable=false nbgrader={"checksum": "6cff4e8e53b15273846c3aecaea84a3d", "solution": true}
f, ax = plt.subplots(4, 1, sharex=False, sharey=True, figsize=(15,8))
for i in range(4):
plt.sca(ax[i])
plt.plot(year, ssc)
plt.xticks(range(1700, 2100, 10))
plt.xlim(1700 + i*100, 1700 + (i+1)*100)
plt.ylabel('Number of Spots')
plt.tight_layout()
plt.xlabel("Year")
# + deletable=false nbgrader={"checksum": "332b489afbabd6c48e3456fb8db4ee88", "grade": true, "grade_id": "matplotlibex01e", "points": 4}
assert True # leave for grading
| 4,076 |
/cifar10 with tensorboard.ipynb | 8939ad60d68414a446c543d29f6b4fb8658c9471 | [] | no_license | AjJordy/FrankVisao | https://github.com/AjJordy/FrankVisao | 0 | 0 | null | null | null | null | Jupyter Notebook | false | false | .py | 32,899 |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.15.2
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
# Definition for singly-linked list.
class ListNode:
def __init__(self, x):
self.val = x
self.next = None
def ls2ln(ls):
head = c = ListNode(0)
for i in ls:
c.next = ListNode(i)
c = c.next
return head.next
def ln2ls(ln):
ls = [ln.val]
while ln.next:
ln = ln.next
ls.append(ln.val)
return ls
# -
class Solution:
def swapPairs(self, head: ListNode) -> ListNode:
c_head = c0 = ListNode(0)
c0.next = head
while c0.next and c0.next.next:
c1 = c0.next
c2 = c1.next
c3 = c2.next
# swap
c0.next = c2
c2.next = c1
c1.next = c3
c0 = c1
return c_head.next
# ## Example
# +
ls = [1,2,3,4]
ans = Solution().swapPairs(ls2ln(ls))
ln2ls(ans)
# should be [2,1,4,3]
# -
# 
hale Portfolio daily returns and clean the data
# Reading whale returns
whale_returns_csv = Path('../Resources/whale_returns.csv')
csv_whale = pd.read_csv(whale_returns_csv,index_col="Date", parse_dates=True,infer_datetime_format=True)
csv_whale.sample(5)
# Count nulls
csv_whale.isnull().sum()
# Drop nulls
csv_whale.dropna(inplace=True)
csv_whale.head()
# +
#csv_whale.isnull().sum()
# -
# ## Algorithmic Daily Returns
#
# Read the algorithmic daily returns and clean the data
# Reading algorithmic returns
algo_returns_csv= Path('../Resources/algo_returns.csv')
csv_algo=pd.read_csv(algo_returns_csv,index_col="Date", parse_dates=True,infer_datetime_format=True)
csv_algo.sample(5)
# Count nulls
csv_algo.isnull().sum()
# Drop nulls
csv_algo.dropna(inplace=True)
csv_algo.head()
# ## S&P 500 Returns
#
# Read the S&P500 Historic Closing Prices and create a new daily returns DataFrame from the data.
# Reading S&P 500 Closing Prices
sp500_history_csv = Path('../Resources/sp500_history.csv')
csv_sp500=pd.read_csv(sp500_history_csv,index_col="Date", parse_dates=True,infer_datetime_format=True)
csv_sp500.sample(5)
# +
#csv_sp500.set_index(csv_sp500['Date'], inplace=True)
#csv_sp500.head()
# -
# Check Data Types
csv_sp500.dtypes
# +
#csv_sp500=csv_sp500.drop(columns=['Date'])
#csv_sp500.head()
# -
csv_sp500.columns
csv_sp500['Close']=csv_sp500['Close'].replace({'\$':''}, regex = True)
csv_sp500.head()
# +
# Fix Data Types
csv_sp500['Close'] = csv_sp500['Close'].astype(float)
# -
csv_sp500.dtypes
# +
# Calculate Daily Returns
sp_daily_returns = csv_sp500['Close'].pct_change()
sp_daily_returns.head(5)
# -
# Drop nulls
sp_daily_returns.dropna(inplace=True)
sp_daily_returns.head()
# +
# Rename the Series to 'S&P500'
# pct_change() on a single column returns a Series, so the new name is passed
# directly; a {'Close': 'S&P500'} mapping would only relabel index entries.
sp_daily_returns = sp_daily_returns.rename('S&P500')
sp_daily_returns.head()
# -
# > ### Note: the column name could not be changed with a dict mapping because this is a Series, not a DataFrame
# ## Combine Whale, Algorithmic, and S&P 500 Returns
# +
# Concatenate all DataFrames into a single DataFrame
combined_df = pd.concat([csv_whale,csv_algo,sp_daily_returns], axis="columns", join="inner")
combined_df.head()
# -
# Rename the columns (the former 'Close' column becomes 'S&P500')
combined_df.columns = [
'SOROS FUND MANAGEMENT LLC',
'PAULSON & CO.INC.',
'TIGER GLOBAL MANAGEMENT LLC',
'BERKSHIRE HATHAWAY INC',
'Algo 1',
'Algo 2',
'S&P500',
]
combined_df.head()
# ---
# # Portfolio Analysis
#
# In this section, you will calculate and visualize performance and risk metrics for the portfolios.
# ## Performance
#
# Calculate and Plot the daily returns and cumulative returns.
# Plot daily returns
combined_df.plot(figsize=(20,10))
# +
# Plot cumulative returns
cumulative_return = (1 + combined_df).cumprod()-1
cumulative_return.head()
# -
cumulative_return.plot(figsize=(20,10))
# ## Risk
#
# Determine the _risk_ of each portfolio:
#
# 1. Create a box plot for each portfolio.
# 2. Calculate the standard deviation for all portfolios
# 3. Determine which portfolios are riskier than the S&P 500
# 4. Calculate the Annualized Standard Deviation
# Box plot to visually show risk
combined_df.plot.box(figsize=(20,10))
# Daily Standard Deviations
# Calculate the standard deviation for each portfolio. Which portfolios are riskier than the S&P 500?
daily_combined_std = combined_df.std()
daily_combined_std
# Determine which portfolios are riskier than the S&P 500
#daily_combined_std=daily_combined_std.sort_values()
daily_combined_std>daily_combined_std['S&P500']
# Calculate the annualized standard deviation (252 trading days)
annualized_std = daily_combined_std * np.sqrt(252)
annualized_std.head()
# ---
# ## Rolling Statistics
#
# Risk changes over time. Analyze the rolling statistics for Risk and Beta.
#
# 1. Calculate and plot the rolling standard deviation for the S&P 500 using a 21 day window
# 2. Calculate the correlation between each stock to determine which portfolios may mimic the S&P 500
# 3. Calculate and plot a 60 day Beta for Berkshire Hathaway Inc compared to the S&P 500
# Calculate and plot the rolling standard deviation for the S&P 500 using a 21 day window
combined_df.rolling(window=21).std().plot(figsize=(20,10))
# Correlation
combined_df.corr()
# Calculate Beta for a single portfolio compared to the total market (S&P 500)
covariance = combined_df['BERKSHIRE HATHAWAY INC'].cov(combined_df['S&P500'])
covariance
variance = combined_df['S&P500'].var()  # beta divides by the variance of the market (S&P 500)
variance
BH_beta = covariance / variance
BH_beta
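# The Rolling Statistics instructions above also ask for a 60 day Beta; a sketch of that
# calculation (reusing the same combined_df and the rolling cov/var pattern used later on) might be:
# +
# Sketch: rolling 60-day beta of Berkshire Hathaway relative to the S&P 500
rolling_cov_bh = combined_df['BERKSHIRE HATHAWAY INC'].rolling(window=60).cov(combined_df['S&P500'])
rolling_var_sp = combined_df['S&P500'].rolling(window=60).var()
rolling_bh_beta = rolling_cov_bh / rolling_var_sp
rolling_bh_beta.plot(figsize=(20, 10), title='Rolling 60-Day Beta of BERKSHIRE HATHAWAY INC')
# -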
# ### Challenge: Exponentially Weighted Average
#
# An alternative way to calculate a rolling window is to take the exponentially weighted moving average. This is like a moving window average, but it assigns greater importance to more recent observations. Try calculating the `ewm` with a 21 day half-life.
# +
# (OPTIONAL) YOUR CODE HERE
# -
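# One possible way to approach the optional challenge above (a sketch using pandas'
# built-in `ewm`; the 21-day half-life follows the prompt):
# +
# Sketch: exponentially weighted rolling standard deviation with a 21-day half-life
combined_df.ewm(halflife=21).std().plot(figsize=(20, 10), title='EWM Standard Deviation (21-Day Half-Life)')
# -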
# ---
# ## Sharpe Ratios
# In reality, investment managers and their institutional investors look at the ratio of return-to-risk, not just returns alone. (After all, if you could invest in one of two portfolios and each offered the same 10% return, yet one carried lower risk, you'd take that one, right?)
#
# Calculate and plot the annualized Sharpe ratios for all portfolios to determine which portfolio has the best performance
# Annualized Sharpe Ratios
sharpe_ratios = (combined_df.mean() * 252) / (combined_df.std() * np.sqrt(252))
sharpe_ratios
# plot() these sharpe ratios using a barplot.
# On the basis of this performance metric, do our algo strategies outperform both 'the market' and the whales?
# Visualize the sharpe ratios as a bar plot
sharpe_ratios.plot.bar()
# ---
# # Portfolio Returns
#
# In this section, you will build your own portfolio of stocks, calculate the returns, and compare the results to the Whale Portfolios and the S&P 500.
#
# 1. Choose 3-5 custom stocks with at least 1 year's worth of historic prices and create a DataFrame of the closing prices and dates for each stock.
# 2. Calculate the weighted returns for the portfolio assuming an equal number of shares for each stock
# 3. Join your portfolio returns to the DataFrame that contains all of the portfolio returns
# 4. Re-run the performance and risk analysis with your portfolio to see how it compares to the others
# 5. Include correlation analysis to determine which stocks (if any) are correlated
# ## Choose 3-5 custom stocks with at least 1 year's worth of historic prices and create a DataFrame of the closing prices and dates for each stock.
# Read the first stock
goog_csv_path = Path("../Resources/goog_historical.csv")
goog_df = pd.read_csv(goog_csv_path, index_col="Trade DATE", parse_dates=True, infer_datetime_format=True)
goog_df.head()
# Read the second stock
aapl_csv_path = Path("../Resources/aapl_historical.csv")
aapl_df = pd.read_csv(aapl_csv_path, index_col="Trade DATE", parse_dates=True, infer_datetime_format=True)
aapl_df.head()
# Read the third stock
cost_csv_path = Path("../Resources/cost_historical.csv")
cost_df = pd.read_csv(cost_csv_path, index_col="Trade DATE", parse_dates=True, infer_datetime_format=True)
cost_df.head()
# +
# Concatenate all stocks into a single DataFrame
cust_combined_df = pd.concat([aapl_df, goog_df, cost_df], axis="columns", join="inner")
cust_combined_df.head()
# -
# Sort the index chronologically
cust_combined_df.sort_index(inplace=True)
cust_combined_df.head()
# +
# Pivot the Data so that the stock tickers are the columns, the dates are the index, and the
# values are the closing prices
custom_df = pd.concat([aapl_df,goog_df,cost_df], axis="rows", join="inner")
custom_df = custom_df.reset_index()
custom_df = custom_df.pivot_table(values="NOCP", index="Trade DATE", columns="Symbol")
custom_df.head()
# -
#Daily returns
daily_custome_returns = custom_df.pct_change()
daily_custome_returns.head()
# Drop Nulls
daily_custome_returns.dropna(inplace=True)
daily_custome_returns.head()
# ## Calculate the weighted returns for the portfolio assuming an equal number of shares for each stock
# +
# Calculate weighted portfolio returns
weights = [1/3, 1/3, 1/3]
aapl_weight = 1/3
goog_weight = 1/3
cost_weight = 1/3
custome_returns = aapl_weight * daily_custome_returns['AAPL'] + goog_weight*daily_custome_returns['GOOG'] + cost_weight*daily_custome_returns['COST']
custome_returns.head()
# -
# ## Join your portfolio returns to the DataFrame that contains all of the portfolio returns
# YOUR CODE HERE
all_combined =pd.concat([combined_df, custome_returns],axis="columns", join="inner")
all_combined.head(10)
#Rename the columns
all_combined.columns = [
'SOROS FUND MANAGEMENT LLC',
'PAULSON & CO.INC.',
'TIGER GLOBAL MANAGEMENT LLC',
'BERKSHIRE HATHAWAY INC',
'Algo 1',
'Algo 2',
'S&P500',
'Custom'
]
all_combined.head()
# +
# Only compare dates where the new, custom portfolio has dates
#
#
#
# -
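# A sketch of what the empty cell above could contain: because the custom returns were
# joined with join="inner", the combined frame already holds only dates present in the
# custom portfolio, so an explicit cleanup is essentially a check.
all_combined.dropna().head()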
all_combined = all_combined.sort_index(axis=0)
all_combined.tail()
# ## Re-run the performance and risk analysis with your portfolio to see how it compares to the others
# +
# Risk
volatility = all_combined.std() * np.sqrt(252)
#volatility.sort_values(inplace=True)
volatility
# -
# Rolling
all_combined.rolling(window=21).std().plot(figsize=(20,10), title='21-Day Rolling Standard Deviation')
# +
# Beta
# Custom portfolio betta calculation:
#custom_beta = custom_covariance / custom_variance
#rolling_custom_betta = rolling_custom_covariance / rolling_custom_variance
# -
#custom_covariance = all_combined['Custom'].cov(all_combined['S&P500'])
rolling_custom_covariance = all_combined['Custom'].rolling(window=21).cov(all_combined['S&P500'])
#custom_variance =all_combined['Custom'].var()
rolling_custom_variance = all_combined['S&P500'].rolling(window=21).var()  # beta denominator: rolling variance of the market
# +
#custom_beta = covariance / variance
rolling_custom_betta = rolling_custom_covariance / rolling_custom_variance
rolling_custom_betta.plot(figsize=(20, 10), title='Rolling 21-Day Beta of Custom')
# -
# Annualized Sharpe Ratios
port_sharpe_ratios = (all_combined.mean() * 252) / (all_combined.std() * np.sqrt(252))
port_sharpe_ratios
# Visualize the sharpe ratios as a bar plot
port_sharpe_ratios.plot.bar(title='Portfolio Sharpe Ratios')
# ## Include correlation analysis to determine which stocks (if any) are correlated
correlation = all_combined.corr()
correlation
sns.heatmap(correlation, vmin=-1, vmax=1, cmap="YlGnBu", linewidths=.5, annot=True)  # correlations range from -1 to 1
# * #### Based on the portfolio correlation table and heatmap, "SOROS FUND MANAGEMENT LLC" and "Algo 2" are the most correlated portfolios
# * #### The portfolio least correlated with the "S&P500" is "Algo 1"
| 11,924 |
/Improving Deep Neural Networks Hyperparameter tuning, Regularization and Optimization/Initialization.ipynb | 1a2c21c2ca2a14e8d97c8c2392f5cda1eaba1b5a | [] | no_license | seungwon1/Machine-Learning-Deep-Learning-on-COURSERA | https://github.com/seungwon1/Machine-Learning-Deep-Learning-on-COURSERA | 1 | 0 | null | null | null | null | Jupyter Notebook | false | false | .py | 170,956 |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.15.2
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Initialization
#
# Welcome to the first assignment of "Improving Deep Neural Networks".
#
# Training your neural network requires specifying an initial value of the weights. A well chosen initialization method will help learning.
#
# If you completed the previous course of this specialization, you probably followed our instructions for weight initialization, and it has worked out so far. But how do you choose the initialization for a new neural network? In this notebook, you will see how different initializations lead to different results.
#
# A well chosen initialization can:
# - Speed up the convergence of gradient descent
# - Increase the odds of gradient descent converging to a lower training (and generalization) error
#
# To get started, run the following cell to load the packages and the planar dataset you will try to classify.
# +
import numpy as np
import matplotlib.pyplot as plt
import sklearn
import sklearn.datasets
from init_utils import sigmoid, relu, compute_loss, forward_propagation, backward_propagation
from init_utils import update_parameters, predict, load_dataset, plot_decision_boundary, predict_dec
# %matplotlib inline
plt.rcParams['figure.figsize'] = (7.0, 4.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
# load image dataset: blue/red dots in circles
train_X, train_Y, test_X, test_Y = load_dataset()
# -
# You would like a classifier to separate the blue dots from the red dots.
# ## 1 - Neural Network model
# You will use a 3-layer neural network (already implemented for you). Here are the initialization methods you will experiment with:
# - *Zeros initialization* -- setting `initialization = "zeros"` in the input argument.
# - *Random initialization* -- setting `initialization = "random"` in the input argument. This initializes the weights to large random values.
# - *He initialization* -- setting `initialization = "he"` in the input argument. This initializes the weights to random values scaled according to a paper by He et al., 2015.
#
# **Instructions**: Please quickly read over the code below, and run it. In the next part you will implement the three initialization methods that this `model()` calls.
def model(X, Y, learning_rate = 0.01, num_iterations = 15000, print_cost = True, initialization = "he"):
"""
Implements a three-layer neural network: LINEAR->RELU->LINEAR->RELU->LINEAR->SIGMOID.
Arguments:
X -- input data, of shape (2, number of examples)
Y -- true "label" vector (containing 0 for red dots; 1 for blue dots), of shape (1, number of examples)
learning_rate -- learning rate for gradient descent
num_iterations -- number of iterations to run gradient descent
print_cost -- if True, print the cost every 1000 iterations
initialization -- flag to choose which initialization to use ("zeros","random" or "he")
Returns:
parameters -- parameters learnt by the model
"""
grads = {}
costs = [] # to keep track of the loss
m = X.shape[1] # number of examples
layers_dims = [X.shape[0], 10, 5, 1]
# Initialize parameters dictionary.
if initialization == "zeros":
parameters = initialize_parameters_zeros(layers_dims)
elif initialization == "random":
parameters = initialize_parameters_random(layers_dims)
elif initialization == "he":
parameters = initialize_parameters_he(layers_dims)
# Loop (gradient descent)
for i in range(0, num_iterations):
# Forward propagation: LINEAR -> RELU -> LINEAR -> RELU -> LINEAR -> SIGMOID.
a3, cache = forward_propagation(X, parameters)
# Loss
cost = compute_loss(a3, Y)
# Backward propagation.
grads = backward_propagation(X, Y, cache)
# Update parameters.
parameters = update_parameters(parameters, grads, learning_rate)
# Print the loss every 1000 iterations
if print_cost and i % 1000 == 0:
print("Cost after iteration {}: {}".format(i, cost))
costs.append(cost)
# plot the loss
plt.plot(costs)
plt.ylabel('cost')
    plt.xlabel('iterations (per thousands)')
plt.title("Learning rate =" + str(learning_rate))
plt.show()
return parameters
# ## 2 - Zero initialization
#
# There are two types of parameters to initialize in a neural network:
# - the weight matrices $(W^{[1]}, W^{[2]}, W^{[3]}, ..., W^{[L-1]}, W^{[L]})$
# - the bias vectors $(b^{[1]}, b^{[2]}, b^{[3]}, ..., b^{[L-1]}, b^{[L]})$
#
# **Exercise**: Implement the following function to initialize all parameters to zeros. You'll see later that this does not work well since it fails to "break symmetry", but let's try it anyway and see what happens. Use np.zeros((..,..)) with the correct shapes.
# +
# GRADED FUNCTION: initialize_parameters_zeros
def initialize_parameters_zeros(layers_dims):
"""
Arguments:
layer_dims -- python array (list) containing the size of each layer.
Returns:
parameters -- python dictionary containing your parameters "W1", "b1", ..., "WL", "bL":
W1 -- weight matrix of shape (layers_dims[1], layers_dims[0])
b1 -- bias vector of shape (layers_dims[1], 1)
...
WL -- weight matrix of shape (layers_dims[L], layers_dims[L-1])
bL -- bias vector of shape (layers_dims[L], 1)
"""
parameters = {}
L = len(layers_dims) # number of layers in the network
for l in range(1, L):
### START CODE HERE ### (≈ 2 lines of code)
parameters['W' + str(l)] = np.zeros((layers_dims[l],layers_dims[l-1]))
        parameters['b' + str(l)] = np.zeros((layers_dims[l], 1))
### END CODE HERE ###
return parameters
# -
parameters = initialize_parameters_zeros([3,2,1])
print("W1 = " + str(parameters["W1"]))
print("b1 = " + str(parameters["b1"]))
print("W2 = " + str(parameters["W2"]))
print("b2 = " + str(parameters["b2"]))
# **Expected Output**:
#
# <table>
# <tr>
# <td>
# **W1**
# </td>
# <td>
# [[ 0. 0. 0.]
# [ 0. 0. 0.]]
# </td>
# </tr>
# <tr>
# <td>
# **b1**
# </td>
# <td>
# [[ 0.]
# [ 0.]]
# </td>
# </tr>
# <tr>
# <td>
# **W2**
# </td>
# <td>
# [[ 0. 0.]]
# </td>
# </tr>
# <tr>
# <td>
# **b2**
# </td>
# <td>
# [[ 0.]]
# </td>
# </tr>
#
# </table>
# Run the following code to train your model on 15,000 iterations using zeros initialization.
parameters = model(train_X, train_Y, initialization = "zeros")
print ("On the train set:")
predictions_train = predict(train_X, train_Y, parameters)
print ("On the test set:")
predictions_test = predict(test_X, test_Y, parameters)
# The performance is really bad: the cost does not really decrease, and the algorithm performs no better than random guessing. Why? Let's look at the details of the predictions and the decision boundary:
print ("predictions_train = " + str(predictions_train))
print ("predictions_test = " + str(predictions_test))
plt.title("Model with Zeros initialization")
axes = plt.gca()
axes.set_xlim([-1.5,1.5])
axes.set_ylim([-1.5,1.5])
plot_decision_boundary(lambda x: predict_dec(parameters, x.T), train_X, train_Y)
# The model is predicting 0 for every example.
#
# In general, initializing all the weights to zero results in the network failing to break symmetry. This means that every neuron in each layer will learn the same thing, and you might as well be training a neural network with $n^{[l]}=1$ for every layer, and the network is no more powerful than a linear classifier such as logistic regression.
# <font color='blue'>
# **What you should remember**:
# - The weights $W^{[l]}$ should be initialized randomly to break symmetry.
# - It is however okay to initialize the biases $b^{[l]}$ to zeros. Symmetry is still broken so long as $W^{[l]}$ is initialized randomly.
#
# ## 3 - Random initialization
#
# To break symmetry, let's initialize the weights randomly. Following random initialization, each neuron can then proceed to learn a different function of its inputs. In this exercise, you will see what happens if the weights are initialized randomly, but to very large values.
#
# **Exercise**: Implement the following function to initialize your weights to large random values (scaled by \*10) and your biases to zeros. Use `np.random.randn(..,..) * 10` for weights and `np.zeros((.., ..))` for biases. We are using a fixed `np.random.seed(..)` to make sure your "random" weights match ours, so don't worry if running your code several times always gives you the same initial values for the parameters.
# +
# GRADED FUNCTION: initialize_parameters_random
def initialize_parameters_random(layers_dims):
"""
Arguments:
layer_dims -- python array (list) containing the size of each layer.
Returns:
parameters -- python dictionary containing your parameters "W1", "b1", ..., "WL", "bL":
W1 -- weight matrix of shape (layers_dims[1], layers_dims[0])
b1 -- bias vector of shape (layers_dims[1], 1)
...
WL -- weight matrix of shape (layers_dims[L], layers_dims[L-1])
bL -- bias vector of shape (layers_dims[L], 1)
"""
    np.random.seed(3)               # This seed makes sure your "random" numbers will be the same as ours
parameters = {}
L = len(layers_dims) # integer representing the number of layers
for l in range(1, L):
### START CODE HERE ### (≈ 2 lines of code)
parameters['W' + str(l)] = np.random.randn(layers_dims[l], layers_dims[l-1])*10
parameters['b' + str(l)] = np.zeros((layers_dims[l], 1))
### END CODE HERE ###
return parameters
# -
parameters = initialize_parameters_random([3, 2, 1])
print("W1 = " + str(parameters["W1"]))
print("b1 = " + str(parameters["b1"]))
print("W2 = " + str(parameters["W2"]))
print("b2 = " + str(parameters["b2"]))
# **Expected Output**:
#
# <table>
# <tr>
# <td>
# **W1**
# </td>
# <td>
# [[ 17.88628473 4.36509851 0.96497468]
# [-18.63492703 -2.77388203 -3.54758979]]
# </td>
# </tr>
# <tr>
# <td>
# **b1**
# </td>
# <td>
# [[ 0.]
# [ 0.]]
# </td>
# </tr>
# <tr>
# <td>
# **W2**
# </td>
# <td>
# [[-0.82741481 -6.27000677]]
# </td>
# </tr>
# <tr>
# <td>
# **b2**
# </td>
# <td>
# [[ 0.]]
# </td>
# </tr>
#
# </table>
# Run the following code to train your model on 15,000 iterations using random initialization.
parameters = model(train_X, train_Y, initialization = "random")
print ("On the train set:")
predictions_train = predict(train_X, train_Y, parameters)
print ("On the test set:")
predictions_test = predict(test_X, test_Y, parameters)
# If you see "inf" as the cost after the iteration 0, this is because of numerical roundoff; a more numerically sophisticated implementation would fix this. But this isn't worth worrying about for our purposes.
#
# Anyway, it looks like you have broken symmetry, and this gives better results than before. The model is no longer outputting all 0s.
print (predictions_train)
print (predictions_test)
plt.title("Model with large random initialization")
axes = plt.gca()
axes.set_xlim([-1.5,1.5])
axes.set_ylim([-1.5,1.5])
plot_decision_boundary(lambda x: predict_dec(parameters, x.T), train_X, train_Y)
# **Observations**:
# - The cost starts very high. This is because with large random-valued weights, the last activation (sigmoid) outputs results that are very close to 0 or 1 for some examples, and when it gets that example wrong it incurs a very high loss for that example. Indeed, when $\log(a^{[3]}) = \log(0)$, the loss goes to infinity.
# - Poor initialization can lead to vanishing/exploding gradients, which also slows down the optimization algorithm.
# - If you train this network longer you will see better results, but initializing with overly large random numbers slows down the optimization.
#
# <font color='blue'>
# **In summary**:
# - Initializing weights to very large random values does not work well.
# - Hopefully initializing with small random values does better. The important question is: how small should these random values be? Let's find out in the next part!
# ## 4 - He initialization
#
# Finally, try "He Initialization"; this is named for the first author of He et al., 2015. (If you have heard of "Xavier initialization", this is similar except Xavier initialization uses a scaling factor for the weights $W^{[l]}$ of `sqrt(1./layers_dims[l-1])` where He initialization would use `sqrt(2./layers_dims[l-1])`.)
#
# **Exercise**: Implement the following function to initialize your parameters with He initialization.
#
# **Hint**: This function is similar to the previous `initialize_parameters_random(...)`. The only difference is that instead of multiplying `np.random.randn(..,..)` by 10, you will multiply it by $\sqrt{\frac{2}{\text{dimension of the previous layer}}}$, which is what He initialization recommends for layers with a ReLU activation.
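# For comparison with the hint above, a Xavier-style initializer (mentioned in the
# introduction to this section) would differ only in the scaling factor. This is just
# an illustrative sketch, not one of the graded functions.
# +
def initialize_parameters_xavier(layers_dims, seed=3):
    """Sketch: Xavier initialization, scaling weights by sqrt(1/fan_in) instead of He's sqrt(2/fan_in)."""
    np.random.seed(seed)
    parameters = {}
    L = len(layers_dims) - 1
    for l in range(1, L + 1):
        parameters['W' + str(l)] = np.random.randn(layers_dims[l], layers_dims[l-1]) * np.sqrt(1. / layers_dims[l-1])
        parameters['b' + str(l)] = np.zeros((layers_dims[l], 1))
    return parameters
# -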
# +
# GRADED FUNCTION: initialize_parameters_he
def initialize_parameters_he(layers_dims):
"""
Arguments:
layer_dims -- python array (list) containing the size of each layer.
Returns:
parameters -- python dictionary containing your parameters "W1", "b1", ..., "WL", "bL":
W1 -- weight matrix of shape (layers_dims[1], layers_dims[0])
b1 -- bias vector of shape (layers_dims[1], 1)
...
WL -- weight matrix of shape (layers_dims[L], layers_dims[L-1])
bL -- bias vector of shape (layers_dims[L], 1)
"""
np.random.seed(3)
parameters = {}
L = len(layers_dims) - 1 # integer representing the number of layers
for l in range(1, L + 1):
### START CODE HERE ### (≈ 2 lines of code)
parameters['W' + str(l)] = np.random.randn(layers_dims[l], layers_dims[l-1])*np.sqrt(2/layers_dims[l-1])
parameters['b' + str(l)] = np.zeros((layers_dims[l], 1))
### END CODE HERE ###
return parameters
# -
parameters = initialize_parameters_he([2, 4, 1])
print("W1 = " + str(parameters["W1"]))
print("b1 = " + str(parameters["b1"]))
print("W2 = " + str(parameters["W2"]))
print("b2 = " + str(parameters["b2"]))
# **Expected Output**:
#
# <table>
# <tr>
# <td>
# **W1**
# </td>
# <td>
# [[ 1.78862847 0.43650985]
# [ 0.09649747 -1.8634927 ]
# [-0.2773882 -0.35475898]
# [-0.08274148 -0.62700068]]
# </td>
# </tr>
# <tr>
# <td>
# **b1**
# </td>
# <td>
# [[ 0.]
# [ 0.]
# [ 0.]
# [ 0.]]
# </td>
# </tr>
# <tr>
# <td>
# **W2**
# </td>
# <td>
# [[-0.03098412 -0.33744411 -0.92904268 0.62552248]]
# </td>
# </tr>
# <tr>
# <td>
# **b2**
# </td>
# <td>
# [[ 0.]]
# </td>
# </tr>
#
# </table>
# Run the following code to train your model on 15,000 iterations using He initialization.
parameters = model(train_X, train_Y, initialization = "he")
print ("On the train set:")
predictions_train = predict(train_X, train_Y, parameters)
print ("On the test set:")
predictions_test = predict(test_X, test_Y, parameters)
plt.title("Model with He initialization")
axes = plt.gca()
axes.set_xlim([-1.5,1.5])
axes.set_ylim([-1.5,1.5])
plot_decision_boundary(lambda x: predict_dec(parameters, x.T), train_X, train_Y)
# **Observations**:
# - The model with He initialization separates the blue and the red dots very well in a small number of iterations.
#
# ## 5 - Conclusions
# You have seen three different types of initializations. For the same number of iterations and same hyperparameters the comparison is:
#
# <table>
# <tr>
# <td>
# **Model**
# </td>
# <td>
# **Train accuracy**
# </td>
# <td>
# **Problem/Comment**
# </td>
#
# </tr>
# <td>
# 3-layer NN with zeros initialization
# </td>
# <td>
# 50%
# </td>
# <td>
# fails to break symmetry
# </td>
# <tr>
# <td>
# 3-layer NN with large random initialization
# </td>
# <td>
# 83%
# </td>
# <td>
# too large weights
# </td>
# </tr>
# <tr>
# <td>
# 3-layer NN with He initialization
# </td>
# <td>
# 99%
# </td>
# <td>
# recommended method
# </td>
# </tr>
# </table>
# <font color='blue'>
# **What you should remember from this notebook**:
# - Different initializations lead to different results
# - Random initialization is used to break symmetry and make sure different hidden units can learn different things
# - Don't initialize to values that are too large
# - He initialization works well for networks with ReLU activations.
eleration()
ptp_dots = self.get_euler_derivatives()
X_array = np.array([self.X[6],
self.X[7],
self.X[8],
ptp_dots[0],
ptp_dots[1],
ptp_dots[2],
xyz_dot_dots[0],
xyz_dot_dots[1],
xyz_dot_dots[2],
pqr_dots[0],
pqr_dots[1],
pqr_dots[2]
])
self.X = self.X + X_array * dt
return self.X
# return super(DroneIn3D, self).advance_state(dt)
# -
testing.test_exercise_3_2(DroneIn3D)
# Exercise 3 ends here! At this point you should press "Next" and continue with the lesson.
#
# -----
#
# -----
#
# -----
# # 3D controller
# Next, we will implement the controller for the drone, which will be able to control it in the 3D environment.
# From the lesson, you are already familiar with the architecture of the controller. It consists of an altitude controller, a position controller, and an attitude controller.
#
# <img src="control1_mod.png" width="800">
# The attitude controller breaks down into smaller controllers responsible for roll-pitch, yaw, and body rate.
# <img src="control2.png" width="600">
# Parameters which will be required to create the controller are:
# - Altitude controller: $k_{p-z}$, $k_{d-z}$
# - Position (lateral) controller: $k_{p-x}$, $k_{d-x}$, $k_{p-y}$, $k_{d-y}$
# - Roll-Pitch controller: $k_{p-roll}$, $k_{p-pitch}$
# - Yaw controller: $k_{p-yaw}$
# - Body rate controller: $k_{p-p}$, $k_{p-q}$, $k_{p-r}$
#
# Based on input parameters we also can calculate $\delta$ and $\omega_n$ for altitude $z$ and lateral controls $x$ and $y$.
class Controller(UDACITYController):
def __init__(self,
z_k_p=1.0,
z_k_d=1.0,
x_k_p=1.0,
x_k_d=1.0,
y_k_p=1.0,
y_k_d=1.0,
k_p_roll=1.0,
k_p_pitch=1.0,
k_p_yaw=1.0,
k_p_p=1.0,
k_p_q=1.0,
k_p_r=1.0):
self.z_k_p = z_k_p
self.z_k_d = z_k_d
self.x_k_p = x_k_p
self.x_k_d = x_k_d
self.y_k_p = y_k_p
self.y_k_d = y_k_d
self.k_p_roll = k_p_roll
self.k_p_pitch = k_p_pitch
self.k_p_yaw = k_p_yaw
self.k_p_p = k_p_p
self.k_p_q = k_p_q
self.k_p_r = k_p_r
print('x: delta = %5.3f'%(x_k_d/2/math.sqrt(x_k_p)), ' omega_n = %5.3f'%(math.sqrt(x_k_p)))
print('y: delta = %5.3f'%(y_k_d/2/math.sqrt(y_k_p)), ' omega_n = %5.3f'%(math.sqrt(y_k_p)))
print('z: delta = %5.3f'%(z_k_d/2/math.sqrt(z_k_p)), ' omega_n = %5.3f'%(math.sqrt(z_k_p)))
self.g= 9.81
# ## Exercise 4
# Note that we are using a slightly different control architecture than what is discussed in the lesson (and what you will implement in the final project).
#
# For now, the job of the lateral controller is to generate commanded values for the rotation matrix elements $\mathbf{R_{13}}$ (also referred to as $b^x$) and $\mathbf{R_{23}}$ (also referred to as $b^y$).
#
# ### 4.1 Lateral controller
# The lateral controller will use a PD controller to command target values for elements of the drone's rotation matrix. The drone generates lateral acceleration by changing the body orientation which results in non-zero thrust in the desired direction. This will translate into the commanded rotation matrix elements $b^x_c$ and $b^y_c$. The control equations have the following form:
#
# $$
# \begin{align}
# \ddot{x}_{\text{command}} &= c b^x_c \\
# \ddot{x}_{\text{command}} &= k^x_p(x_t-x_a) + k_d^x(\dot{x}_t - \dot{x}_a)+ \ddot{x}_t \\
# b^x_c &= \ddot{x}_{\text{command}}/c
# \end{align}
# $$
#
# for the $y$ direction the control equations will have the same form as above.
# +
# %%add_to Controller
# Exercise 4.1
def lateral_controller(self,
x_target,
x_dot_target,
x_dot_dot_target,
x_actual,
x_dot_actual,
y_target,
y_dot_target,
y_dot_dot_target,
y_actual,
y_dot_actual,
c):
# TODO replace with your own implementation
x_k_p = self.x_k_p * (x_target - x_actual)
x_k_d = self.x_k_d * (x_dot_target - x_dot_actual)
x_dot_dot_command = x_k_p + x_k_d + x_dot_dot_target
y_k_p = self.y_k_p * (y_target - y_actual)
y_k_d = self.y_k_d * (y_dot_target - y_dot_actual)
y_dot_dot_command = y_k_p + y_k_d + y_dot_dot_target
b_x_c = x_dot_dot_command / c
b_y_c = y_dot_dot_command / c
return b_x_c, b_y_c
# return super(Controller, self).lateral_controller(
# x_target,
# x_dot_target,
# x_dot_dot_target,
# x_actual,
# x_dot_actual,
# y_target,
# y_dot_target,
# y_dot_dot_target,
# y_actual,
# y_dot_actual,
# c)
# -
testing.test_exercise_4_1(Controller)
# ### 4.2 Roll-Pitch controller
#
# The roll-pitch controller is a P controller responsible for commanding the roll and pitch rates ($p_c$ and $q_c$) in the body frame. First, it sets the desired rate of change of the given matrix elements using a P controller.
#
# **Note** - subscript c means "commanded" and a means "actual"
#
# $\dot{b}^x_c = k_p(b^x_c - b^x_a)$
#
# $\dot{b}^y_c = k_p(b^y_c - b^y_a)$
#
# where $b^x_a = R_{13}$ and $b^y_a = R_{23}$. These values can be converted into angular velocities in the body frame by the following matrix multiplication.
#
# $$
# \begin{pmatrix} p_c \\ q_c \\ \end{pmatrix} = \frac{1}{R_{33}}\begin{pmatrix} R_{21} & -R_{11} \\ R_{22} & -R_{12} \end{pmatrix} \times \begin{pmatrix} \dot{b}^x_c \\ \dot{b}^y_c \end{pmatrix}
# $$
#
# +
# %%add_to Controller
# Exercise 4.2
def roll_pitch_controller(self,
b_x_c_target,
b_y_c_target,
rot_mat):
# TODO replace with your own implementation
b_x = rot_mat[0,2]
b_y = rot_mat[1,2]
b_x_commanded_dot = self.k_p_roll * (b_x_c_target - b_x)
b_y_commanded_dot = self.k_p_pitch * (b_y_c_target - b_y)
rot_mat1 = np.array([[rot_mat[1,0], -rot_mat[0,0]],[rot_mat[1,1], -rot_mat[0,1]]]) / rot_mat[2,2]
rot_rate = np.matmul(rot_mat1, np.array([b_x_commanded_dot, b_y_commanded_dot]))
p_c = rot_rate[0]
q_c = rot_rate[1]
return p_c, q_c
# return super(Controller, self).roll_pitch_controller(b_x_c_target,
# b_y_c_target,
# rot_mat)
# -
testing.test_exercise_4_2(Controller)
# Exercise 4 ends here! At this point you should press "Next" and continue with the lesson.
#
# ------
# ## Exercise 5
#
# ### 5.1 Body rate controller
# The commanded roll, pitch, and yaw rates are collected by the body rate controller, and they are translated into the desired rotational accelerations along the axes of the body frame.
#
# $p_{\text{error}} = p_c - p$
#
# $\bar{u}_p= k_{p-p} p_{\text{error}}$
#
# $q_{\text{error}} = q_c - q$
#
# $\bar{u}_q= k_{p-q} q_{\text{error}}$
#
# $r_{\text{error}} = r_c - r$
#
# $\bar{u}_r= k_{p-r} r_{\text{error}}$
#
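# A direct reading of the three P-control equations above, written as a standalone
# sketch (gains are passed explicitly, with defaults mirroring the values used in the
# flight setup further below; the graded Controller method is still left for you to implement):
# +
def body_rate_p_control(p_c, q_c, r_c, p, q, r, k_p_p=20.0, k_p_q=20.0, k_p_r=20.0):
    """Sketch: translate body-rate errors into commanded rotational accelerations."""
    u_bar_p = k_p_p * (p_c - p)
    u_bar_q = k_p_q * (q_c - q)
    u_bar_r = k_p_r * (r_c - r)
    return u_bar_p, u_bar_q, u_bar_r
# -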
# +
# %%add_to Controller
# Exercise 5.1
def body_rate_controller(self,
p_c,
q_c,
r_c,
p_actual,
q_actual,
r_actual):
# TODO replace with your own implementation
# return u_bar_p, u_bar_q, u_bar_r
return super(Controller, self).body_rate_controller(p_c,
q_c,
r_c,
p_actual,
q_actual,
r_actual)
# -
testing.test_exercise_5_1(Controller)
# ### 5.2 Yaw controller
#
# Control over yaw is decoupled from the other directions. A P controller is used to control the drone's yaw.
#
# $r_c = k_p (\psi_t - \psi_a)$
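# As a sketch, the yaw P controller above is a one-liner (gain passed explicitly, default
# mirroring the flight setup further below; the graded method is left for you to implement):
# +
def yaw_p_control(psi_target, psi_actual, k_p_yaw=8.0):
    """Sketch: commanded yaw rate r_c from the yaw error."""
    return k_p_yaw * (psi_target - psi_actual)
# -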
# +
# %%add_to Controller
# Exercise 5.2
def yaw_controller(self,
psi_target,
psi_actual):
# TODO replace with your own implementation
# return r_c
return super(Controller, self).yaw_controller(psi_target,
psi_actual)
# -
testing.test_exercise_5_2(Controller)
# ### 5.3 Altitude Controller
#
# Linear acceleration can be expressed by the following equation
# $$
# \begin{pmatrix} \ddot{x} \\ \ddot{y} \\ \ddot{z}\end{pmatrix} = \begin{pmatrix} 0 \\ 0 \\ g\end{pmatrix} + R \begin{pmatrix} 0 \\ 0 \\ c \end{pmatrix}
# $$
#
# where $R = R(\psi) \times R(\theta) \times R(\phi)$. The individual linear accelerations have the form
#
# $$
# \begin{align}
# \ddot{x} &= c b^x \\
# \ddot{y} &= c b^y \\
# \ddot{z} &= c b^z +g
# \end{align}
# $$
# where $b^x = R_{13}$, $b^y= R_{23}$ and $b^z = R_{33}$ are the elements of the last column of the rotation matrix.
#
# We are controlling the vertical acceleration:
#
# $$\bar{u}_1 = \ddot{z} = c b^z +g$$
#
# Therefore
#
# $$c = (\bar{u}_1-g)/b^z$$
#
#
# In this exercise a PD controller is used for the altitude which results in:
#
# $$\bar{u}_1 = k_{p-z}(z_{t} - z_{a}) + k_{d-z}(\dot{z}_{t} - \dot{z}_{a}) + \ddot{z}_t$$
#
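# A standalone sketch of the PD altitude law above (gains and g passed explicitly, with
# defaults mirroring the flight setup further below; the graded method is left for you to implement):
# +
def altitude_pd_control(z_target, z_dot_target, z_dot_dot_target,
                        z_actual, z_dot_actual, rot_mat,
                        z_k_p=2.0, z_k_d=1.0, g=9.81):
    """Sketch: collective thrust term c = (u_1_bar - g) / b_z, with b_z = R[2, 2]."""
    u_1_bar = (z_k_p * (z_target - z_actual)
               + z_k_d * (z_dot_target - z_dot_actual)
               + z_dot_dot_target)
    return (u_1_bar - g) / rot_mat[2, 2]
# -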
# +
# %%add_to Controller
# Exercise 5.3
def altitude_controller(self,
z_target,
z_dot_target,
z_dot_dot_target,
z_actual,
z_dot_actual,
rot_mat):
# TODO replace with your own implementation
# return c
return super(Controller, self).altitude_controller(z_target,
z_dot_target,
z_dot_dot_target,
z_actual,
z_dot_actual,
rot_mat)
# -
testing.test_exercise_5_3(Controller)
# ### Attitude controller (provided)
#
# For the purposes of this lesson, the attitude controller consists of the roll-pitch controller, yaw controller, and body rate controller. We've provided this code for you (but you should read through it).
# +
# %%add_to Controller
def attitude_controller(self,
b_x_c_target,
b_y_c_target,
psi_target,
psi_actual,
p_actual,
q_actual,
r_actual,
rot_mat):
p_c, q_c = self.roll_pitch_controller(b_x_c_target,
b_y_c_target,
rot_mat)
r_c = self.yaw_controller(psi_target,
psi_actual)
u_bar_p, u_bar_q, u_bar_r = self.body_rate_controller(
p_c,
q_c,
r_c,
p_actual,
q_actual,
r_actual)
return u_bar_p, u_bar_q, u_bar_r
# -
# # Flight planning
#
# In order to test the drone dynamics and the controller developed above, we will execute a simple three-dimensional flight with changing yaw angle.
#
# _Keep in mind that the desired flight path needs to have an analytical form (differentiable for the entirety of the path)._
#
# The selected test path is a figure 8 in three dimensions with yaw that keeps the drone always facing along the motion direction.
#
# $$
# \begin{align}
# x &= \sin{\omega_x t} \\
# y &= \cos{\omega_y t} \\
# z &= \cos{\omega_z t} \\
# \end{align}
# $$
# $\omega_z = \omega_y = \omega_x/2$.
# +
total_time = 20.0
dt = 0.01
t=np.linspace(0.0,total_time,int(total_time/dt))
omega_x = 0.8
omega_y = 0.4
omega_z = 0.4
a_x = 1.0
a_y = 1.0
a_z = 1.0
x_path= a_x * np.sin(omega_x * t)
x_dot_path= a_x * omega_x * np.cos(omega_x * t)
x_dot_dot_path= -a_x * omega_x**2 * np.sin(omega_x * t)
y_path= a_y * np.cos(omega_y * t)
y_dot_path= -a_y * omega_y * np.sin(omega_y * t)
y_dot_dot_path= -a_y * omega_y**2 * np.cos(omega_y * t)
z_path= a_z * np.cos(omega_z * t)
z_dot_path= -a_z * omega_z * np.sin(omega_z * t)
z_dot_dot_path= - a_z * omega_z**2 * np.cos(omega_z * t)
psi_path= np.arctan2(y_dot_path,x_dot_path)
fig = plt.figure()
ax = fig.gca(projection='3d')
ax.plot(x_path, y_path, z_path)
plt.title('Flight path').set_fontsize(20)
ax.set_xlabel('$x$ [$m$]').set_fontsize(20)
ax.set_ylabel('$y$ [$m$]').set_fontsize(20)
ax.set_zlabel('$z$ [$m$]').set_fontsize(20)
plt.legend(['Planned path'],fontsize = 14)
plt.figure(figsize=(10,10))
plt.show()
# -
# ### Plotting the drone's headings
# +
fig = plt.figure()
ax = fig.gca(projection='3d')
u = np.cos(psi_path)
v = np.sin(psi_path)
w = np.zeros(psi_path.shape)
for i in range(0,z_path.shape[0],20):
ax.quiver(x_path[i], y_path[i], z_path[i], u[i], v[i], w[i], length=0.2, normalize=True,color='green')
plt.title('Flight path').set_fontsize(20)
ax.set_xlabel('$x$ [$m$]').set_fontsize(20)
ax.set_ylabel('$y$ [$m$]').set_fontsize(20)
ax.set_zlabel('$z$ [$m$]').set_fontsize(20)
plt.legend(['Planned yaw',],fontsize = 14)
plt.show()
# -
# # Executing the flight
#
# In this section, we will set up the entire system and will connect the drone object with the controller object. Next, execute the flight and compare the desired path with the executed one.
# +
# how fast the inner loop (Attitude controller) performs calculations
# relative to the outer loops (altitude and position controllers).
inner_loop_relative_to_outer_loop = 10
# creating the drone object
drone = DroneIn3D()
# creating the control system object
control_system = Controller(z_k_p=2.0,
z_k_d=1.0,
x_k_p=6.0,
x_k_d=4.0,
y_k_p=6.0,
y_k_d=4.0,
k_p_roll=8.0,
k_p_pitch=8.0,
k_p_yaw=8.0,
k_p_p=20.0,
k_p_q=20.0,
k_p_r=20.0)
# setting the initial state of the drone to the start of the planned path
# (position, yaw, and velocity taken from the path at t = 0)
drone.X = np.array([x_path[0],
y_path[0],
z_path[0],
0.0,
0.0,
psi_path[0],
x_dot_path[0],
y_dot_path[0],
z_dot_path[0],
0.0,
0.0,
0.0])
# arrays for recording the state history,
# propeller angular velocities and linear accelerations
drone_state_history = drone.X
omega_history = drone.omega
accelerations = drone.linear_acceleration()
accelerations_history= accelerations
angular_vel_history = drone.get_euler_derivatives()
# executing the flight
for i in range(0,z_path.shape[0]):
rot_mat = drone.R()
c = control_system.altitude_controller(z_path[i],
z_dot_path[i],
z_dot_dot_path[i],
drone.X[2],
drone.X[8],
rot_mat)
b_x_c, b_y_c = control_system.lateral_controller(x_path[i],
x_dot_path[i],
x_dot_dot_path[i],
drone.X[0],
drone.X[6],
y_path[i],
y_dot_path[i],
y_dot_dot_path[i],
drone.X[1],
drone.X[7],
c)
for _ in range(inner_loop_relative_to_outer_loop):
rot_mat = drone.R()
angular_vel = drone.get_euler_derivatives()
u_bar_p, u_bar_q, u_bar_r = control_system.attitude_controller(
b_x_c,
b_y_c,
psi_path[i],
drone.psi,
drone.X[9],
drone.X[10],
drone.X[11],
rot_mat)
drone.set_propeller_angular_velocities(c, u_bar_p, u_bar_q, u_bar_r)
drone_state = drone.advance_state(dt/inner_loop_relative_to_outer_loop)
# generating a history of the state history, propeller angular velocities and linear accelerations
drone_state_history = np.vstack((drone_state_history, drone_state))
omega_history=np.vstack((omega_history,drone.omega))
accelerations = drone.linear_acceleration()
accelerations_history= np.vstack((accelerations_history,accelerations))
angular_vel_history = np.vstack((angular_vel_history,drone.get_euler_derivatives()))
fig = plt.figure()
ax = fig.gca(projection='3d')
ax.plot(x_path, y_path, z_path,linestyle='-',marker='.',color='red')
ax.plot(drone_state_history[:,0],
drone_state_history[:,1],
drone_state_history[:,2],
linestyle='-',color='blue')
plt.title('Flight path').set_fontsize(20)
ax.set_xlabel('$x$ [$m$]').set_fontsize(20)
ax.set_ylabel('$y$ [$m$]').set_fontsize(20)
ax.set_zlabel('$z$ [$m$]').set_fontsize(20)
plt.legend(['Planned path','Executed path'],fontsize = 14)
ax.set_xlim(-1, 1)
ax.set_ylim(-1, 1)
ax.set_zlim(-1, 1)
plt.show()
# -
# # Flight path comparison
#
# Comparing the desired heading and the actual heading (Yaw angle).
# +
fig = plt.figure()
ax = fig.gca(projection='3d')
u = np.cos(psi_path)
v = np.sin(psi_path)
w = np.zeros(psi_path.shape)
drone_u = np.cos(drone_state_history[:,5])
drone_v = np.sin(drone_state_history[:,5])
drone_w = np.zeros(psi_path.shape)
for i in range(0,z_path.shape[0],20):
ax.quiver(x_path[i], y_path[i], z_path[i], u[i], v[i], w[i], length=0.2, normalize=True,color='red')
ax.quiver(drone_state_history[i,0],
drone_state_history[i,1],
drone_state_history[i,2],
drone_u[i], drone_v[i], drone_w[i],
length=0.2, normalize=True,color='blue')
ax.set_ylim(-1, 1)
ax.set_zlim(-1, 1)
plt.title('Flight path').set_fontsize(20)
ax.set_xlabel('$x$ [$m$]').set_fontsize(20)
ax.set_ylabel('$y$ [$m$]').set_fontsize(20)
ax.set_zlabel('$z$ [$m$]').set_fontsize(20)
plt.legend(['Planned yaw','Executed yaw'],fontsize = 14)
plt.show()
# -
# Calculating the error in position $\epsilon^2(t) = \left(x_t(t)-x_a(t)\right)^2 + \left(y_t(t)-y_a(t)\right)^2+ \left(z_t(t)-z_a(t)\right)^2$.
# +
err = np.sqrt((x_path - drone_state_history[:-1, 0])**2
              + (y_path - drone_state_history[:-1, 1])**2
              + (z_path - drone_state_history[:-1, 2])**2)
plt.plot(t,err)
plt.title('Error in flight position').set_fontsize(20)
plt.xlabel('$t$ [$s$]').set_fontsize(20)
plt.ylabel('$e$ [$m$]').set_fontsize(20)
plt.show()
# -
# Plotting the angular velocities of the propellers in time.
# +
plt.plot(t,-omega_history[:-1,0],color='blue')
plt.plot(t,omega_history[:-1,1],color='red')
plt.plot(t,-omega_history[:-1,2],color='green')
plt.plot(t,omega_history[:-1,3],color='black')
plt.title('Angular velocities').set_fontsize(20)
plt.xlabel('$t$ [$s$]').set_fontsize(20)
plt.ylabel('$\omega$ [$rad/s$]').set_fontsize(20)
plt.legend(['P 1','P 2','P 3','P 4' ],fontsize = 14)
plt.grid()
plt.show()
# -
# Plotting the Yaw angle of the drone in time.
plt.plot(t,psi_path,marker='.')
plt.plot(t,drone_state_history[:-1,5])
plt.title('Yaw angle').set_fontsize(20)
plt.xlabel('$t$ [$s$]').set_fontsize(20)
plt.ylabel('$\psi$ [$rad$]').set_fontsize(20)
plt.legend(['Planned yaw','Executed yaw'],fontsize = 14)
plt.show()
# [Solution](/notebooks/solution.py)
| 38,272 |
/Business Problem2.ipynb | 91f729d1b45406aea436497ac73256b605f95a83 | [] | no_license | jacques591886/CapstoneProject | https://github.com/jacques591886/CapstoneProject | 0 | 0 | null | null | null | null | Jupyter Notebook | false | false | .py | 3,555 |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.15.2
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import numpy as np
num_inputs = 2
input_shape = (4, 5, 6)
output_dim = 3
input_size = num_inputs * np.prod(input_shape)
weight_size = output_dim * np.prod(input_shape)
x = np.linspace(-0.1, 0.5, num=input_size).reshape(num_inputs, *input_shape)
# -
x.shape
y = x[1,:,:,:].flatten()
y.shape
t = x.reshape(x.shape[0], -1)
t.shape
a = np.array([[1,2,3,4],[32,4,23,5],[2,4,5,6]])
a.shape
b = np.array([1,2,3,1])
a+b
b
np.max(0,b)  # note: this treats b as the axis argument and raises an error; the element-wise form is np.maximum(0, b)
# +
def maxx(x):
t = max(0,x)
return t
maxx(b)
# -
np.maximum(0,b)
b = np.array([-1,2,3,1])
np.maximum(0,b)
[b<=0]
np.zeros(3)
np.sum(b)
q = np.array([[1,2,3], [2,3,4]])
np.sum(q)**2
[{'mode': 'train'} for i in range(5 - 1)]
"arrasdy"+str(1)
import numpy as np
a = np.array([[1,2,3],[4,5,6,]])
a
mu = np.mean(a, axis =0 )
mu
# + active=""
#
# -
var = np.var(a, axis = 0)
var
a+mu
(a+mu)/var
| 1,128 |
/assignments/Assignment2_r2/.ipynb_checkpoints/Assignment2-checkpoint.ipynb | 22969c7353e5b802d03d7ba30cc14d31413fddab | [] | no_license | sarmilaupadhyaya/Statistical-NLP-2021-Summer | https://github.com/sarmilaupadhyaya/Statistical-NLP-2021-Summer | 1 | 1 | null | null | null | null | Jupyter Notebook | false | false | .py | 12,443 |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.15.2
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] id="QXSB0ZZUmrLd"
# # SNLP Assignment 2
#
# Name 1: <br/>
# Student id 1: <br/>
# Email 1: <br/>
#
#
# Name 2: <br/>
# Student id 2: <br/>
# Email 2: <br/>
#
# **Instructions:** Read each question carefully. <br/>
# Make sure you appropriately comment your code wherever required. Your final submission should contain the completed Notebook and the respective Python files for exercises 2 and 3. There is no need to submit the data files. <br/>
# Upload the zipped folder in Teams. Make sure to click on "Turn-in" after you upload your submission, otherwise the assignment will not be considered as submitted. Only one member of the group should make the submission.
#
# ---
#
# + [markdown] id="5IQh2t-LF1uz"
# ## Exercise 1 (1.5 + 1.5 = 3 points)
#
# The perplexity of a model can also be defined as $2^{-\frac{1}{n} \sum^n_1 \log p(w_i|w_{i-1})}$. For the following exercise, use the log probabilities given by this pretrained bigram language model. Tokenization is apparent from the tokens in the following table.
#
# |A|B|log p(B\|A)|
# |-|-|-|
# |`The`|`man`|-1.8|
# |`the`|`man`|-2.2|
# |`the`|`post`|-2.7|
# |`Man`|`the`|-5.1|
# |`man`|`the`|-3.7|
# |`man`|`shouted`|-2.9|
# |`shouted`|`"`|-3.1|
# |`post`|`!`|-3.1|
# |`"`|`Man`|-1.9|
# |`"`|`man`|-1.7|
# |`!`|`"`|-1.2|
# |`"`|`The`|-0.9|
# |`"`|`the`|-1.2|
#
# Assume probabilities not listed are $0^+$ (and the respective logarithm $-\infty$). For counting bigrams, consider your corpus as a circular structure i.e. include the bigram $(w_N, w_1)$ in your final counts. Therefore the weight of each bigram is $\frac{1}{|\text{words}|}$.
#
# ### 1.1 Lowercasing Input (1.5 points)
#
# Compute the perplexity of the following two sentences (and show the steps).
#
# ```
# The man shouted "Man the post!"
# the man shouted "man the post!"
# ```
#
# Is lowercasing the input always a good idea? What are the advantages and disadvantages?
#
# ### 1.2 Unknown Tokens (1.5 points)
#
# Compute the perplexity of the following two sentences.
#
# ```
# The man shouted "Man the stations!"
# The man shouted "Man the the!"
# ```
#
# 1. Elaborate on the computed results.
# 2. Do you consider both sentences to be equally probable?
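# A quick sketch of how such perplexities can be computed from the table above, assuming
# the listed log probabilities are base 2, the circular bigram convention described earlier,
# and a tokenization consistent with the table. The names `LOGP` and `bigram_perplexity`
# are illustrative only; this is not the answer key.
# +
# bigram log2-probabilities copied from the table above
LOGP = {('The', 'man'): -1.8, ('the', 'man'): -2.2, ('the', 'post'): -2.7,
        ('Man', 'the'): -5.1, ('man', 'the'): -3.7, ('man', 'shouted'): -2.9,
        ('shouted', '"'): -3.1, ('post', '!'): -3.1, ('"', 'Man'): -1.9,
        ('"', 'man'): -1.7, ('!', '"'): -1.2, ('"', 'The'): -0.9, ('"', 'the'): -1.2}

def bigram_perplexity(tokens):
    """2^(-1/n * sum of log2 p(w_i | w_{i-1})), with the corpus treated as circular."""
    n = len(tokens)
    total = sum(LOGP.get((tokens[i], tokens[(i + 1) % n]), float('-inf')) for i in range(n))
    return 2 ** (-total / n)

bigram_perplexity(['The', 'man', 'shouted', '"', 'Man', 'the', 'post', '!', '"'])
# -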
# + [markdown] id="Z3HUlC50F6uj"
# ## Exercise 2 (N-gram models) (1 + 2 = 3 points)
#
# ### 2.1
#
# Consider the formula on Page 28 in Chapter 2.
#
# $$P(w_2 | w_1) = \frac{P(w_1,w_2)}{P(w_1)}$$
#
# To actually estimate these n-gram probabilities over a text corpus, we use **Maximum Likelihood Estimation (MLE)**. The estimate for the parameters of the MLE is obtained by getting counts from the corpus and then normalising them so they lie between 0 and 1.
#
# Using this, state the empirical formula for finding the conditional probability of unigrams $P(w)$, bigrams $P(w_2|w_1)$, and trigrams $P(w_3|w_1,w_2)$ for a corpus of N words. We do not expect any mathematical proof here, but just the formula for finding the conditional probabilities from the words in the corpus using the shown equation as the starting point. (1 pt)
#
#
# ### 2.2
#
# Given the corpus `orient_express.txt`, find the unigram, bigram, and trigram probability distributions of the text using the formulae obtained in 2.1. Implement the function `find_ngram_probs` in the file `exercise_2.py`. For counting bigrams and trigrams, consider your corpus as a circular structure i.e. include the bigram $(w_N, w_1)$ and trigrams $(w_{N-1}, w_N, w_1)$ and $(w_{N}, w_1, w_2)$ in your final counts.
#
# Using the probabilities you obtain,
# 1. Plot the probabilities of the 20 most frequent unigrams
# 2. For the most frequent unigram, plot the 20 most frequent bigrams starting with that unigram
# 3. For the most frequent bigram, plot the 20 most frequent trigrams starting with that bigram
#
# Use the function `plot_most_frequent`. Briefly explain your observations (1-2 lines).
#
# NOTE: You must preprocess the text (remove punctuation, special characters, lowercase, tokenise) before you create your n-gram model. **You are NOT allowed to use nltk or any other tokeniser for this purpose**. Write your own function called `preprocess` in `exercise_2.py`. (2 points)
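# A count-based sketch of the MLE estimates described in 2.1, using a circular token list
# as required above. This is illustrative only; the graded implementation belongs in
# `exercise_2.py` and its function names may differ.
# +
from collections import Counter

def mle_bigram_probs(tokens):
    """Sketch: P(w2|w1) = N(w1, w2) / N(w1), with the wrap-around bigram (w_N, w_1) included."""
    n = len(tokens)
    unigram_counts = Counter(tokens)
    bigram_counts = Counter((tokens[i], tokens[(i + 1) % n]) for i in range(n))
    return {(w1, w2): c / unigram_counts[w1] for (w1, w2), c in bigram_counts.items()}
# -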
# + id="tOFieJxFn3lo"
from importlib import reload
import exercise_2
exercise_2 = reload(exercise_2)
file = open("data/orient_express.txt", "r")
text = file.read()
# TODO: Preprocess text
tokens = exercise_2.preprocess(text)
# TODO: Find conditional probabilities of unigrams, bigrams, trigrams
"""
Modify your function call based on how you have defined find_ngram_probs
in exercise_2.py
"""
unigrams = exercise_2.find_ngram_probs(tokens, model='unigram')
bigrams = exercise_2.find_ngram_probs(tokens, model='bigram')
trigrams = exercise_2.find_ngram_probs(tokens, model='trigram')
# TODO: Plot most frequent ngrams
"""
Modify the function signature as per your definition of plot_most_frequent
in exercise_2.py
"""
exercise_2.plot_most_frequent(unigrams)
exercise_2.plot_most_frequent(bigrams)
exercise_2.plot_most_frequent(trigrams)
# + [markdown] id="-vTjntEhF6UE"
# ## Exercise 3 (4 points)
#
# ### 3.1
#
# Read the corpus file again and apply the preprocessing steps from Exercise 2. Split the corpus into train and test sections; the size of the test section should be 10% of the corpus. Do this by implementing the `train_test_split` function in `exercise_3.py`. Then, train 3-, 2- and 1-gram language models with your implementation from Exercise 2 on the train section. You may change the parameters of the functions if you find it necessary, but the code should still be written in the .py file. (1 point)
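# A sketch of the 90/10 split described above (illustrative only; the graded
# `train_test_split` belongs in `exercise_3.py`):
# +
def simple_train_test_split(tokens, test_size=0.1):
    """Sketch: keep the first (1 - test_size) of the tokens for training, the rest for testing."""
    cut = int(len(tokens) * (1 - test_size))
    return tokens[:cut], tokens[cut:]
# -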
# + id="u09wtYPAB928"
from importlib import reload
from pathlib import Path
import exercise_3, exercise_2
exercise_3 = reload(exercise_3)
exercise_2 = reload(exercise_2)
file = Path("../data/orient_express.txt").open('r')
text = file.read()
# TODO: apply tokenizer from exercise 2
tokenized = exercise_2.preprocess(text)
# TODO: split the corpus into a train corpus and a test corpus, with test_size=10%
train, test = exercise_3.train_test_split(tokenized, 0.1)
# TODO: train unigram, bigram, trigram LM using the method defined in exercise_2
# call each method as per your function definition
unigram_lm = exercise_2.find_ngram_probs(train, model='unigram')
bigram_lm = exercise_2.find_ngram_probs(train, model='bigram')
trigram_lm = exercise_2.find_ngram_probs(train, model='trigram')
# + [markdown] id="zJTD84VFB_LS"
# ### 3.2
#
# Calculate relative frequencies for all three test corpora. Do this by implementing the function `relative_frequencies` in `exercise_3.py`. <br/>
# Relative frequency is calculated as follows: <br/>
# e. g. for bigrams, $ f(w_{i-1}, w_i) = \frac{N(w_{i-1}, w_i)}{N(\bullet,\bullet)}$, where $N( w_{i-1},w_i)$ is the count of the bigram and $N(\bullet,\bullet)$ is the total number of bigrams in the corpus. For consistency, you should include a bigram $(w_N, w_1)$, where $N$ is the length of the corpus (and likewise for trigrams) as you have done in 2.2. (0.5 points)
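#
# A minimal sketch of `relative_frequencies`, using the same circular n-gram convention as in 2.2 (the actual signature in `exercise_3.py` may differ):
# +
from collections import Counter

def relative_frequencies_sketch(tokens, model='unigram'):
    n = {'unigram': 1, 'bigram': 2, 'trigram': 3}[model]
    N = len(tokens)
    # circular n-grams, so there are exactly N of them
    ngrams = [tuple(tokens[(i + j) % N] for j in range(n)) for i in range(N)]
    counts = Counter(ngrams)
    if n == 1:
        return {ng[0]: c / N for ng, c in counts.items()}
    # f(ngram) = count(ngram) / total number of ngrams
    return {ng: c / N for ng, c in counts.items()}
# -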
# + id="b013xCFKB-VM"
# TODO: calculate unigram, bigram, trigram relative frequencies
unigram_rfs = exercise_3.relative_frequencies(test)
bigram_rfs = exercise_3.relative_frequencies(test, model='bigram')
trigram_rfs = exercise_3.relative_frequencies(test, model='trigram')
# + [markdown] id="nFxe81afB_7f"
# ### 3.3
#
# Implement the perplexity calculation for all 3 language models in the function `pp`, and perform the calculation on the test section of the corpus. You should use the perplexity formula from slide 21, chapter 3:
# \begin{equation}
# PP = 2^{-\sum_{w,h}f(w,h)\log_2 P(w|h)}
# \end{equation}
#
# * Can you simply apply the formula to the language model and the relative frequencies? What would happen if an ngram from the test set is absent in the train set?
#
# * Why is it possible to calculate perplexity with this formula? How does it differ from the formula in exercise 1 of this sheet?
#
# (1.5 points)
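#
# A minimal sketch of `pp`, assuming the language model and the relative frequencies use matching keys and that n-grams unseen in training have already been filtered out (as the code cell below does):
# +
import math

def pp_sketch(lm, rfs):
    # PP = 2 ** ( - sum over (w,h) of f(w,h) * log2 P(w|h) )
    exponent = -sum(f * math.log2(lm[ngram]) for ngram, f in rfs.items())
    return 2 ** exponent
# -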
# + id="aJPplv-zB-ak"
# "Smoothing"
unigram_rfs = {unigram:rf for unigram, rf in unigram_rfs.items() if unigram in unigram_lm}
bigram_rfs = {bigram:rf for bigram, rf in bigram_rfs.items() if bigram in bigram_lm}
trigram_rfs = {trigram:rf for trigram, rf in trigram_rfs.items() if trigram in trigram_lm}
# TODO: compute perplexity for each LM
unigram_pp = exercise_3.pp(unigram_lm, unigram_rfs)
bigram_pp = exercise_3.pp(bigram_lm, bigram_rfs)
trigram_pp = exercise_3.pp(trigram_lm, trigram_rfs)
# + [markdown] id="rgspP4f4CGqq"
# ### 3.4
#
# Plot perplexity scores for all 3 language models. Do so by implementing the `plot_pps` function.
# * Explain the differences between the language models.
# * Is it always a good idea to increase the history for n-gram based language models? What can happen if n is too large? (1 point)
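#
# A minimal sketch of `plot_pps` (a simple bar chart; any presentation that makes the three scores comparable is fine):
# +
import matplotlib.pyplot as plt

def plot_pps_sketch(pps):
    labels = ['unigram', 'bigram', 'trigram'][:len(pps)]
    plt.bar(labels, pps)
    plt.ylabel('Perplexity')
    plt.show()
# -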
# + id="NHFKLCM1CGyl"
# TODO: plot
pps = [unigram_pp, bigram_pp, trigram_pp]
exercise_3.plot_pps(pps)
# + [markdown] id="RtAvYxzMF608"
# ## Bonus (1.5 points)
#
# Revisit exercise 1.
#
# 1. Come up with another metric (not language model) as an alternative to perplexity that could measure language model capabilities.
# 2. What are the advantages and disadvantages of such a metric in comparison to perplexity?
# 3. Compute your metric with respect to the four sentences (in exercise 1) and the provided language model.
| 9,513 |
/assignment2_v2/FullyConnectedNets.ipynb
|
f04ad5efef83f0bbfa7152cf2cd423f278ca7baf
|
[] |
no_license
|
rohan6471/deep_learn
|
https://github.com/rohan6471/deep_learn
| 0 | 0 | null | null | null | null |
Jupyter Notebook
| false | false |
.py
| 383,334 |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.15.2
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import sklearn.datasets as datasets
import matplotlib.pyplot as plt
# %matplotlib inline
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
path='/home/carbon13/machine_learning_study_follow_yincheng-master/decision_tree/lenses.txt'
data=pd.read_table(path,header=None)
data.info()
dict1={'young':0,'pre':1,'presbyopic':2}
data[0]=data[0].map(dict1)
dict2={'myope':0, 'hyper':1}
data[1]=data[1].map(dict2)
data[1].unique()
dict3={'no':0, 'yes':1}
data[2]=data[2].map(dict3)
data[2].unique()
dict4={'reduced':0, 'normal':1}
data[3]=data[3].map(dict4)
data[3].unique()
dict5={'no lenses':0, 'soft':1, 'hard':2}
data[4]=data[4].map(dict5)
data[4].unique()
data
x_train,x_test,y_train,y_test=train_test_split(data.iloc[:,0:4],data[4])
x_test
x_train
y_train
tree=DecisionTreeClassifier()
tree.fit(x_train,y_train)
tree.score(x_test,y_test)
| 1,207 |
/scratchbook.ipynb
|
2a9061d8579094fd2d8d8bafadc94366fe1731f7
|
[] |
no_license
|
adam-gomez/regression-exercises
|
https://github.com/adam-gomez/regression-exercises
| 0 | 0 | null | null | null | null |
Jupyter Notebook
| false | false |
.py
| 23,710 |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.15.2
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
from env import user, password, host
import os
import pandas as pd
from wrangle import wrangle_telco
import split_scale
df = wrangle_telco()
df.head()
train, validate, test = split_scale.split_my_data(df)
train.head()
scaler, train_scaled, validate_scaled, test_scaled = split_scale.iqr_robust_scaler(train, validate, test)
scaler
train_scaled
train
train_unscaled, validate_unscaled, test_unscaled = split_scale.scale_inverse(scaler, train_scaled, validate_scaled, test_scaled)
train_unscaled
train
| 779 |
/sf-crime-data/classroom/3.5-intro-rdd.ipynb
|
03d089ada38479ddd99aea56fb0e5f16f2e8224e
|
[] |
no_license
|
fxrcode/Udacity-Data-Streaming
|
https://github.com/fxrcode/Udacity-Data-Streaming
| 1 | 0 | null | null | null | null |
Jupyter Notebook
| false | false |
.py
| 3,975 |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.15.2
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import pyspark
# -
sc = pyspark.SparkContext(appName="3.5 intro rdd")
rdd = sc.parallelize([('Richard', 22), ('Alfred', 23), ('Loki',4), ('Albert', 12), ('Alfred', 9)])
rdd
rdd.collect()
left = sc.parallelize([("Richard", 1), ("Alfred", 4)])
right = sc.parallelize([("Richard", 2), ("Alfred", 5)])
left
right
joined_rdd = left.join(right)
# + jupyter={"outputs_hidden": true}
joined_rdd
# -
collected_rdd = joined_rdd.collect()
collected_rdd
| 728 |
/core/utils/.ipynb_checkpoints/Pixel Formation Notebook-checkpoint.ipynb
|
b5899dae2ecf21c5cae7f899ef621c2e9fd96794
|
[] |
no_license
|
coreyjadams/pixlar-evd
|
https://github.com/coreyjadams/pixlar-evd
| 0 | 0 | null | null | null | null |
Jupyter Notebook
| false | false |
.py
| 839,215 |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.15.2
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
import ROOT
from ROOT import pixevd
import numpy
# Get an instance of the dqm file reader:
# +
geom = pixevd.GeoService.GetME()
# override the wire drawing process for lariat
process = pixevd.ReadDQMFile(geom.n_time_ticks())
process.initialize()
# -
_file = "/home/cadams/pixlar-evd/dqm_r014727_sr0002.root"
process.setInput(_file)
process.goToEvent(15)
# Plot the data to make sure we've got an event with some interesting stuff in it:
pixels = process.as_array(0)
pads = process.as_array(1)
from matplotlib import pyplot as plt
# %matplotlib inline
fig = plt.figure(figsize=(16,9))
plt.imshow(pixels.T, interpolation="none", aspect=240./3072)
plt.show()
fig = plt.figure(figsize=(16,9))
plt.imshow(-pads.T, interpolation="none", aspect=240./3072)
plt.show()
# ### Hit finding
# Draw a waveform with pixel signals, and try the hit finding algorithms:
pixel_signal = pixels[1]
fig = plt.figure(figsize=(20,5))
x = numpy.arange(3072)
plt.plot(x, pixel_signal)
plt.show()
# Convert this signal to a root vector just to pass it to the hit finding function (returns a sparse vector)
root_pix_signal = ROOT.vector(float)()
root_pix_signal.resize(3072)
for i, v in enumerate(pixel_signal):
root_pix_signal[i] = v
utils = pixevd.Utils.GetME()
sparse_pixel = utils.suppressed_waveform(root_pix_signal, 11, 2)
print sparse_pixel
for v in sparse_pixel.as_vector():
print v.time
print v.size()
# Plot them over the signal above:
fig = plt.figure(figsize=(20,5))
x = numpy.arange(3072)
plt.plot(x, pixel_signal)
for v in sparse_pixel.as_vector():
print "({}, {})".format(v.time, v.time+ v.size())
if v.size() < 2:
continue
x_v = numpy.arange(v.size()) + v.time
y_v = numpy.zeros((v.size()))
for i, val in enumerate(v):
y_v[i] = val
# y_v = numpy.asarray(v)
plt.plot(x_v, y_v, lw=5, c='red')
plt.show()
# Let's do the same for a pad value:
pad_signal = pads[2]
fig = plt.figure(figsize=(20,5))
x = numpy.arange(3072)
plt.plot(x[300:500], pad_signal[300:500])
plt.plot(x[300:500], pixel_signal[300:500])
plt.show()
root_pad_signal = ROOT.vector(float)()
root_pad_signal.resize(3072)
for i, v in enumerate(pad_signal):
root_pad_signal[i] = v
sparse_pad = utils.suppressed_waveform(root_pad_signal, -10, 3)
fig = plt.figure(figsize=(20,5))
x = numpy.arange(3072)
plt.plot(x, pad_signal)
for v in sparse_pad.as_vector():
print "({}, {})".format(v.time, v.time + v.size())
if v.size() < 2:
continue
x_v = numpy.arange(v.size()) + v.time
y_v = numpy.zeros((v.size()))
for i, val in enumerate(v):
y_v[i] = val
# y_v = numpy.asarray(v)
plt.plot(x_v, y_v, lw=5, c='red')
plt.show()
# ## Pixel/Pad Matching
# Hit finding looks good. So now try to find all hits and match across pixels/pads. Creating a voxel set on the fly to store the output.
pixel_hits = []
pad_hits = []
temp_signal = ROOT.vector(float)()
temp_signal.resize(3072)
for pixel_waveform in pixels:
for i, v in enumerate(pixel_waveform):
temp_signal[i] = v
pixel_hits.append(utils.suppressed_waveform(temp_signal, 11, 2))
for pad_waveform in pads:
for i, v in enumerate(pad_waveform):
temp_signal[i] = v
pad_hits.append(utils.suppressed_waveform(temp_signal, -10, 3))
print len(pixel_hits)
print len(pad_hits)
# +
# Now we make matches. First set up a voxel meta and voxel storage
# -
voxel_meta = pixevd.Voxel3DMeta()
xmin = 0.
xmax = 47.
ymin = -20.
ymax = 20.
zmin = 0.
zmax = 90.
xnum = int((xmax - xmin) / 0.3)
ynum = int((ymax - ymin) / 0.3)
znum = int((zmax - zmin) / 0.2)
voxel_meta.set(xmin, ymin, zmin, xmax, ymax, zmax, xnum, ynum, znum)
voxel_set = pixevd.SparseTensor3D()
voxel_set.meta(voxel_meta)
# +
# Now, loop over every sparse waveform and compare hits to compatible pads.
# If the Intersection over union of the waveforms is above threshold, make voxels out of the two waveforms.
# Assuming the drift velocity is 1.51 mm/microsecond, and 128 ns/timetick, that means 0.19328 mm /timetick
# For the display, we're gonna call it 0.2 mm/timetick
# -
IoU_threshold = 0.3
prespill_ticks = 320
for i_pixel, pixel_hit in enumerate(pixel_hits):
for i_pad, pad_hit in enumerate(pad_hits):
# Check for compatibility
if i_pixel == 1 and i_pad == 2:
print "This one"
print pixel_hit.as_vector().size()
print pad_hit.as_vector().size()
print geom.compatible(i_pixel, i_pad)
if not geom.compatible(i_pixel, i_pad):
# print "NC"
continue
# Now check every chunk of each sparse vector for overlap
for pixel_region in pixel_hit.as_vector():
for pad_region in pad_hit.as_vector():
intersection = min(pixel_region.time + pixel_region.size(),
pad_region.time + pad_region.size()) - max(pixel_region.time,
pad_region.time)
if i_pixel == 1 and i_pad == 2:
print "I: {}".format(intersection)
print "U: {}".format(pixel_region.size() + pad_region.size() - intersection)
print "IoU: {}".format(1.*intersection / (pixel_region.size() + pad_region.size() - intersection))
if intersection <= 0:
continue
if 1.0*intersection / (pixel_region.size() + pad_region.size() - intersection) > IoU_threshold:
print "Making a voxel"
# Make a voxel:
# Get the XY coordinates of this pixel/pad combo:
xy = geom.xy(i_pixel, i_pad)
print geom.pad_top_left(i_pad)
# Use just the pixel timing for creation of voxels
for offset in xrange(pixel_region.size()):
print "Putting a voxel at ({}, {}, {})".format(0.02*(pixel_region.time + offset), xy.x, xy.y)
voxel_set.emplace(0.02* (pixel_region.time + offset), xy.y, xy.x, pixel_region[offset])
print voxel_set.as_vector().size()
fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
x = numpy.zeros((voxel_set.as_vector().size()))
y = numpy.zeros((voxel_set.as_vector().size()))
z = numpy.zeros((voxel_set.as_vector().size()))
for i, vox in enumerate(voxel_set.as_vector()):
x[i] = voxel_meta.pos_x(vox.id())
]) - 2)
else:
flag += chr(ord(f2[i]) - 1)
print(flag)
# -
# # Java / Android Programs
# ## Java Reverse Engineering and Decryption
#
# Challenge URL: https://buuoj.cn/challenges#Java%E9%80%86%E5%90%91%E8%A7%A3%E5%AF%86
#
#
# Open the Reverse.class file directly with jd-gui
#
# to recover the source code:
#
# ```Java
# import java.util.ArrayList;
# import java.util.Scanner;
#
# public class Reverse {
# public static void main(String[] args) {
# Scanner s = new Scanner(System.in);
# System.out.println("Please input the flag );
# String str = s.next();
# System.out.println("Your input is );
# System.out.println(str);
# char[] stringArr = str.toCharArray();
# Encrypt(stringArr);
# }
#
# public static void Encrypt(char[] arr) {
# ArrayList<Integer> Resultlist = new ArrayList<>();
# for (int i = 0; i < arr.length; i++) {
# int result = arr[i] + 64 ^ 0x20;
# Resultlist.add(Integer.valueOf(result));
# }
# int[] KEY = {
# 180, 136, 137, 147, 191, 137, 147, 191, 148, 136,
# 133, 191, 134, 140, 129, 135, 191, 65 };
# ArrayList<Integer> KEYList = new ArrayList<>();
# for (int j = 0; j < KEY.length; j++)
# KEYList.add(Integer.valueOf(KEY[j]));
# System.out.println("Result:");
#         if (Resultlist.equals(KEYList)) {
#             System.out.println("Congratulations");
#         } else {
#             System.err.println("Error");
#         }
# }
# }
#
# ```
#
# Thinking forward: compute the encrypted value of every letter and build a rainbow table.
#
# ```Java
# private static HashMap<Integer, String> map = new HashMap<>();
#
# public static void rainbowWatch() {
# for (int i = 0; i <= 122; i++) {
# int result = i + 64 ^ 0x20;
# //System.out.println(result);
# map.put(result, String.valueOf((char) i));
# }
# }
# ```
#
# Then simply loop over the key to look up the original flag.
#
# The complete code is as follows:
#
# ```Java
# import java.util.ArrayList;
# import java.util.HashMap;
#
# public class Answer {
# private static HashMap<Integer, String> map = new HashMap<>();
#
# public static void main(String[] args) {
# rainbowWatch();
# int[] KEY = { 180, 136, 137, 147, 191, 137, 147, 191, 148, 136, 133, 191, 134, 140, 129, 135, 191, 65 };
# String flag = "";
#
# for (int i : KEY) {
# flag += map.get(i);
# }
#
# System.out.println(flag);
# }
#
#
# public static void rainbowWatch() {
# for (int i = 0; i <= 122; i++) {
# int result = i + 64 ^ 0x20;
# //System.out.println(result);
# map.put(result, String.valueOf((char) i));
# }
# }
# }
#
# ```
#
# The equivalent Python code:
# +
rainbowWatch = {}
for i in range(0, 123):
result = i + 64 ^ 0x20
rainbowWatch[result] = chr(i)
KEY = [ 180, 136, 137, 147, 191, 137, 147, 191, 148, 136, 133, 191, 134, 140, 129, 135, 191, 65 ]
for i in KEY:
print(rainbowWatch[i], end="")
# -
# ## Bilibili 2021 "1024" Programmers' Day, Challenge 5
#
# Android programmer Xiao Ming learned a new development language and excitedly wrote a demo app.
#
# https://security.bilibili.com/sec1024/q/r5.html
#
#
# Decompiling test.apk, we find the following in MainActivity:
#
# ```java
# public void onClick(View view) {
# String obj = ((EditText) MainActivity.this.findViewById(R.id.TextAccount)).getText().toString();
# String obj2 = ((EditText) MainActivity.this.findViewById(R.id.TextPassword)).getText().toString();
# byte[] b = Encrypt.b(Encrypt.a(obj.getBytes(), 3));
# byte[] b2 = Encrypt.b(Encrypt.a(obj2.getBytes(), 3));
# byte[] bArr = {89, 87, 66, 108, 79, 109, 90, 110, 78, 106, 65, 117, 79, 109, 74, 109, 78, 122, 65, 120, 79, 50, 89, 61};
# if (!Arrays.equals(b, new byte[]{78, 106, 73, 49, 79, 122, 65, 51, 89, 71, 65, 117, 78, 106, 78, 109, 78, 122, 99, 55, 89, 109, 85, 61}) || !Arrays.equals(b2, bArr)) {
# Toast.makeText(MainActivity.this, "还差一点点~~~", 1).show();
# } else {
# Toast.makeText(MainActivity.this, "bilibili- ( ゜- ゜)つロ 乾杯~", 1).show();
# }
# }
#
# ```
#
# Then inspect the Encrypt class:
#
# ```java
# package com.example.test;
#
# import android.util.Base64;
#
# public class Encrypt {
# public static byte[] a(byte[] bArr, int i) {
# if (bArr == null || bArr.length == 0) {
# return null;
# }
# int length = bArr.length;
# for (int i2 = 0; i2 < length; i2++) {
# bArr[i2] = (byte) (bArr[i2] ^ i);
# }
# return bArr;
# }
#
# public static byte[] b(byte[] bArr) {
# return Base64.encode(bArr, 2);
# }
# }
# ```
#
# The code XORs the user-supplied account and password with 3, Base64-encodes the result, and compares it against the hard-coded byte arrays.
#
# So we write code that reverses this to recover the account and password values, then concatenate them to obtain the flag.
#
# ```java
# import android.util.Base64;
# import java.util.HashMap;
#
# /**
# * @author Pu Zhiwei {@literal [email protected]}
# * create 2021-10-24 14:58
# */
# public class Main5 {
# public static HashMap<Byte, Byte> A_MAP = new HashMap<>();
#
#
# public static void main(String[] args) {
# byte[] username = new byte[]{78, 106, 73, 49, 79, 122, 65, 51, 89, 71, 65, 117, 78, 106, 78, 109, 78, 122, 99, 55, 89, 109, 85, 61};
# byte[] password = {89, 87, 66, 108, 79, 109, 90, 110, 78, 106, 65, 117, 79, 109, 74, 109, 78, 122, 65, 120, 79, 50, 89, 61};
#
# byte[] array1 = Base64.decode(username, 2);
# byte[] array2 = Base64.decode(password, 2);
#
# byte[] array3 = new byte[array1.length];
# byte[] array4 = new byte[array2.length];
#
# for (int i = 0; i < array1.length; i++) {
# array3[i] = encryptANoList(array1[i], 3);
# }
#
# for (int i = 0; i < array2.length; i++) {
# array4[i] = encryptANoList(array2[i], 3);
# }
#
# System.out.println("username" + new String(array3) + " password: " + new String(array4));
# System.out.println("flag: " + new String(array3) + "-" + new String(array4));
# }
#
# public static byte[] encryptA(byte[] bArr, int i) {
# if (bArr == null || bArr.length == 0) {
# return null;
# }
# int length = bArr.length;
# for (int i2 = 0; i2 < length; i2++) {
# bArr[i2] = (byte) (bArr[i2] ^ i);
# }
# return bArr;
# }
#
# public static byte encryptANoList(byte bArr, int i) {
# bArr = (byte) (bArr ^ i);
# return bArr;
# }
#
# public static byte[] encryptB(byte[] bArr) {
#
# return Base64.encode(bArr, 2);
# }
#
# }
#
# ```
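#
# The same decoding can also be sketched in Python (standard-library base64 plus XOR with 3; the byte arrays are taken verbatim from the decompiled code):
# +
import base64

# hard-coded Base64 byte arrays from MainActivity
username_b64 = bytes([78, 106, 73, 49, 79, 122, 65, 51, 89, 71, 65, 117, 78, 106, 78, 109, 78, 122, 99, 55, 89, 109, 85, 61])
password_b64 = bytes([89, 87, 66, 108, 79, 109, 90, 110, 78, 106, 65, 117, 79, 109, 74, 109, 78, 122, 65, 120, 79, 50, 89, 61])

# reverse Encrypt.b (Base64 encode) and then Encrypt.a (XOR with 3)
username = "".join(chr(b ^ 3) for b in base64.b64decode(username_b64))
password = "".join(chr(b ^ 3) for b in base64.b64decode(password_b64))
print(username + "-" + password)
# -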
# # Front End
# ## [FlareOn4]login
#
#
# Open the html file and analyse the JavaScript code inside.
#
# Thinking forward again: copy the original encryption algorithm, compute the encrypted value of every character, save the results, and build a rainbow table.
#
# ```JavaScript
# function mathValue(key) {
# return key.replace(/[a-zA-Z]/g, function (c) {
# return String.fromCharCode((c <= "Z" ? 90 : 122) >= (c = c.charCodeAt(0) + 13) ? c : c - 26);
# });
# }
#
# var str = "[email protected]"
# var map = {}
# for (var i = 0; i <= 122; i++) {
# var key = String.fromCharCode(i)
# var value = mathValue(key)
# map[value] = key
# //console.log(key, value)
# }
# ```
#
# 之后循环求解即可
#
# ```JavaScript
#
# var flag = ""
# for (var i = 0; i < str.length; i++) {
# flag += map[str[i]]
# }
# console.log(flag)
#
# ```
#
#
#
#
# ## findit: Android Decompilation
#
#
# Challenge URL: https://buuoj.cn/challenges#findit
#
# Open the app directly with jadx-gui
#
# and analyse the MainActivity file.
#
# Copy the relevant logic out directly into standalone Java code:
#
# ```Java
# public class Main {
# public static void main(String[] args) {
# char[] a = {'T', 'h', 'i', 's', 'I', 's', 'T', 'h', 'e', 'F', 'l', 'a', 'g', 'H', 'o', 'm', 'e'};
# char[] b = {'p', 'v', 'k', 'q', '{', 'm', '1', '6', '4', '6', '7', '5', '2', '6', '2', '0', '3', '3', 'l', '4', 'm', '4', '9', 'l', 'n', 'p', '7', 'p', '9', 'm', 'n', 'k', '2', '8', 'k', '7', '5', '}'};
#
# char[] x = new char[17];
# char[] y = new char[38];
#
# for (int i = 0; i < 17; i++) {
# if ((a[i] < 'I' && a[i] >= 'A') || (a[i] < 'i' && a[i] >= 'a')) {
# x[i] = (char) (a[i] + 18);
# } else if ((a[i] < 'A' || a[i] > 'Z') && (a[i] < 'a' || a[i] > 'z')) {
# x[i] = a[i];
# } else {
# x[i] = (char) (a[i] - '\b');
# }
# }
# System.out.println(String.valueOf(x));
#
# if (String.valueOf(x).equals("LzakAkLzwXdsyZgew")) {
# for (int i2 = 0; i2 < 38; i2++) {
# if ((b[i2] < 'A' || b[i2] > 'Z') && (b[i2] < 'a' || b[i2] > 'z')) {
# y[i2] = b[i2];
# } else {
# y[i2] = (char) (b[i2] + 16);
# if ((y[i2] > 'Z' && y[i2] < 'a') || y[i2] >= 'z') {
# y[i2] = (char) (y[i2] - 26);
# }
# }
# }
# System.out.println(String.valueOf(y));
# }
# }
# }
# ```
#
# Run it once to compute the value of `edit.getText().toString()`, which is `LzakAkLzwXdsyZgew`; then run it again to obtain y, which yields the flag.
#
#
# Initialize the random eraser (cutout) with erase probability 0.5, erased-area ratio between 0.02 and 0.4,
# aspect ratio between 0.3 and 1/0.3, min and max values for erased areas as 0.0 and 1.0
eraser = get_random_eraser(p=0.5, s_l=0.02, s_h=0.4, r_1=0.3, r_2=1/0.3,
                   v_l=0.0, v_h=1.0, pixel_level=False)
# Initialize ImageDataGenerator for image normalization for training
datagen = ImageDataGenerator(featurewise_center=True, featurewise_std_normalization=True, zoom_range=0.0,
horizontal_flip=False,preprocessing_function=eraser)
# Learn stats of training data eg. mean, std
datagen.fit(train_features)
# Define batch size
BS=128
# Generate an iterator for training. It will provide nomalized image data of size BS for training
train_iterator = datagen.flow(train_features, train_labels, batch_size = BS)
# Since we will train the model on normalized image data we must used normalized images for validation as well
# Initialize ImageDataGenerator for image normalization for testing
datagenTest = ImageDataGenerator(featurewise_center=True, featurewise_std_normalization=True, zoom_range=0.0,
horizontal_flip=False)
# Learn stats of testing data eg. mean, std
datagenTest.fit(test_features)
# Generate an iterator for training. It will provide nomalized image data of size BS for testing
test_iterator = datagenTest.flow(test_features, test_labels, batch_size = BS)
# + colab_type="code" outputId="b5a1a875-d915-4eec-fe74-e92815f53cf4" id="D1Z7tHRVJDK7" colab={"base_uri": "https://localhost:8080/", "height": 1000}
model2.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
start = time.time()
# Train the model
# argument 'steps_per_epoch' defines how many times train_iterator will be asked for batches of training data
# argument 'validation_steps' defines how many times test_iterator will be asked for batches of validation data
# here validation is being performed in batches instead of one shot to avoid using up too much memory while training
model_info = model2.fit_generator(train_iterator, steps_per_epoch = (len(train_features)//BS), epochs = 50,
validation_data = test_iterator, validation_steps = (len(test_features)//BS),verbose=1)
end = time.time()
print ("Model took %0.2f seconds to train"%(end - start))
# plot model history
plot_model_history(model_info)
# + colab_type="code" outputId="e178aaa0-56d9-464b-a43f-d0c22d6803d7" id="g4v8EsjBlcXr" colab={"base_uri": "https://localhost:8080/", "height": 124}
from google.colab import drive
# mount google drive to save model
drive.mount('/content/gdrive')
fileNameWholeModelCutout = '/content/gdrive/My Drive/TSAI/Session9/FinalModelCutout.h5'
# + colab_type="code" id="YOz39g_uJDLQ" colab={}
model2.save(fileNameWholeModelCutout,overwrite=True)
# + colab_type="code" outputId="4a68c1fa-1f55-4821-d956-edd3ba4d2c75" id="LQWgvcj3lcX9" colab={"base_uri": "https://localhost:8080/", "height": 34}
from keras.models import load_model
model2 = load_model(fileNameWholeModelCutout)
# Since we have trained our model on Normalized input images, we must normalize the test data before predicting test accuracy
test_iterator = datagenTest.flow(test_features, test_labels, batch_size = test_features.shape[0],shuffle=False)
# Obtain a batch of normalized test data
batchX, batchY = test_iterator.next()
# compute test accuracy of the cutout model
print ("Accuracy on test data is: %0.2f"%accuracy(batchX, batchY, model2))
# + [markdown] id="HSSKgjxfM8mZ" colab_type="text"
# At this point we have models without and with cutouts trained and saved. So now we proceed with implementation of GradCam algorithm. After implementing GradCam we will compare outputs of both the trained models side-by-side.
# + colab_type="code" id="mib_gUKeWbnk" colab={}
# Define function to apply gradCam algorithm. Defining function for this
# allows reusability.
def getGradCamImage(model,layerName,imageInputBatch,imgIndex,originalImageSet):
"""
Given cnn model,layer name, preprocessed image batch, original untouched imageset and
image id, this function applies gradCam algorithm on the model for given layer and
input image and returns the image with heatmap superimposed over it.
    Parameters:
    model - Trained cnn model for which gradCam is required
    layerName - Name of the model layer at which gradCam is to be applied
    imageInputBatch - Whole dataset with necessary preprocessing applied.
                It should be unshuffled so that image ids match the ids of images in
                originalImageSet. The model prediction will be applied on the image obtained from
                this set.
    imgIndex - Index of the image (in both imageInputBatch and originalImageSet) to explain
    originalImageSet - Original untouched image dataset. The GradCam heatmap will be superimposed
                on the image obtained from this set.
Returns: Image with gradCam heatmap superimposed on the image from original dataset
"""
x = np.expand_dims(imageInputBatch[imgIndex], axis=0)
preds = model.predict(x)
class_idx = np.argmax(preds[0])
class_output = model.output[:, class_idx]
last_conv_layer = model.get_layer(layerName)
grads = K.gradients(class_output, last_conv_layer.output)[0]
pooled_grads = K.mean(grads, axis=(0, 1, 2))
iterate = K.function([model.input], [pooled_grads, last_conv_layer.output[0]])
pooled_grads_value, conv_layer_output_value = iterate([x])
for i in range(last_conv_layer.output_shape[3]):
conv_layer_output_value[:, :, i] *= pooled_grads_value[i]
heatmap = np.mean(conv_layer_output_value, axis=-1)
heatmap = np.maximum(heatmap, 0)
heatmap /= np.max(heatmap)
img = originalImageSet[imgIndex].copy()
heatmap = cv2.resize(heatmap, (img.shape[1], img.shape[0]))
heatmap = np.uint8(255 * heatmap)
heatmap = cv2.applyColorMap(heatmap, cv2.COLORMAP_JET)
img = cv2.cvtColor(img, cv2.COLOR_RGB2BGR)
superimposed_img = cv2.addWeighted(img, 0.7, heatmap, 0.3, 0)
return cv2.cvtColor(superimposed_img, cv2.COLOR_BGR2RGB)
# + id="3zaoUGKyWkeS" colab_type="code" outputId="8047cad6-3db1-4b7b-f295-897169d4a77a" colab={"base_uri": "https://localhost:8080/", "height": 35}
# Prepare to call getGradCamImage function
# Load models, load original image dataset, generate preprocessed image batch
from google.colab import drive
from keras.models import load_model
# mount google drive to save model
drive.mount('/content/gdrive')
# Load model without cutout
fileNameWholeModel = '/content/gdrive/My Drive/TSAI/Session9/FinalModel.h5'
model1 = load_model(fileNameWholeModel)
# Load model with cutout
fileNameWholeModelCutout = '/content/gdrive/My Drive/TSAI/Session9/FinalModelCutout.h5'
model2 = load_model(fileNameWholeModelCutout)
# Use entire test dataset. make shuffle= False so that image id of preprocessed
# images and original image dataset matches.
test_iterator = datagenTest.flow(test_features, test_labels, batch_size = test_features.shape[0],shuffle=False)
# Obtain a batch of normalized test data
batchX, batchY = test_iterator.next()
# Load the original untouched dataset
(train_features2, train_labels2), (test_features2, test_labels2) = cifar10.load_data()
# + colab_type="code" outputId="3c8eec3a-1cc6-45d2-92e6-77911250c936" id="Snq14D4kRqPR" colab={"base_uri": "https://localhost:8080/", "height": 754}
class_names = ['Airplane','Automobile','Bird','Cat','Deer',
'Dog','Frog','Horse','Ship','Truck']
numImages = 4
# Generate a rows x cols sized image grid
f, axarr = plt.subplots(numImages,3)
for i in range(numImages):
imgIndex = np.random.randint(0,test_features2.shape[0])
actualClass = class_names[test_labels2[imgIndex][0]]
axarr[i][0].imshow(test_features2[imgIndex])
axarr[i][0].set_title('Actual:{}'.format(actualClass),fontsize=12)
axarr[i][0].axis('off')
x = np.expand_dims(batchX[imgIndex], axis=0)
preds = model1.predict(x)
class_idx = np.argmax(preds[0])
predictedClass1 = class_names[class_idx]
axarr[i][1].imshow(getGradCamImage(model1,'conv2d_9',batchX,imgIndex,test_features2))
axarr[i][1].set_title('No cutout- Predicted:{}'.format(predictedClass1),fontsize=12)
axarr[i][1].axis('off')
x = np.expand_dims(batchX[imgIndex], axis=0)
preds = model2.predict(x)
class_idx = np.argmax(preds[0])
predictedClass2 = class_names[class_idx]
axarr[i][2].imshow(getGradCamImage(model2,'conv2d_9',batchX,imgIndex,test_features2))
axarr[i][2].set_title('With cutout- Predicted:{}'.format(predictedClass2),fontsize=12)
axarr[i][2].axis('off')
f.subplots_adjust(hspace=0.2)
f.set_size_inches(13,13)
# + colab_type="code" outputId="911694e9-f1ff-431c-ba89-2c81e2f5c868" id="RZvA5fk3C3Pb" colab={"base_uri": "https://localhost:8080/", "height": 699}
class_names = ['Airplane','Automobile','Bird','Cat','Deer',
'Dog','Frog','Horse','Ship','Truck']
numImages = 4
# Generate a rows x cols sized image grid
f, axarr = plt.subplots(numImages,3)
for i in range(numImages):
imgIndex = np.random.randint(0,test_features2.shape[0])
actualClass = class_names[test_labels2[imgIndex][0]]
axarr[i][0].imshow(test_features2[imgIndex])
axarr[i][0].set_title('Actual:{}'.format(actualClass),fontsize=12)
axarr[i][0].axis('off')
x = np.expand_dims(batchX[imgIndex], axis=0)
preds = model1.predict(x)
class_idx = np.argmax(preds[0])
predictedClass1 = class_names[class_idx]
axarr[i][1].imshow(getGradCamImage(model1,'conv2d_9',batchX,imgIndex,test_features2))
axarr[i][1].set_title('No cutout- Predicted:{}'.format(predictedClass1),fontsize=12)
axarr[i][1].axis('off')
x = np.expand_dims(batchX[imgIndex], axis=0)
preds = model2.predict(x)
class_idx = np.argmax(preds[0])
predictedClass2 = class_names[class_idx]
axarr[i][2].imshow(getGradCamImage(model2,'conv2d_9',batchX,imgIndex,test_features2))
axarr[i][2].set_title('With cutout- Predicted:{}'.format(predictedClass2),fontsize=12)
axarr[i][2].axis('off')
f.subplots_adjust(hspace=0.3)
f.set_size_inches(12,12)
# + id="2CTY3wFemfGV" colab_type="code" outputId="02c37d31-2ff9-41fe-c01a-74a09814e621" colab={"base_uri": "https://localhost:8080/", "height": 754}
# Randomly select an image Id apply gradCam on this for both the models (with and without cutout)
# show original and superimposed images side-by-side. Repeat this "numImages" times
class_names = ['airplane','automobile','bird','cat','deer',
'dog','frog','horse','ship','truck']
numImages = 4
# Generate 'numImages'rows by 3 cols sized image grid
f, axarr = plt.subplots(numImages,3)
for i in range(numImages):
# get a random integer as image id
imgIndex = np.random.randint(0,test_features2.shape[0])
actualClass = class_names[test_labels2[imgIndex][0]]
axarr[i][0].imshow(test_features2[imgIndex])
axarr[i][0].set_title('Actual:{}'.format(actualClass),fontsize=12)
axarr[i][0].axis('off')
x = np.expand_dims(batchX[imgIndex], axis=0)
preds = model1.predict(x)
class_idx = np.argmax(preds[0])
predictedClass1 = class_names[class_idx]
axarr[i][1].imshow(getGradCamImage(model1,'conv2d_9',batchX,imgIndex,test_features2))
axarr[i][1].set_title('No cutout- Predicted:{}'.format(predictedClass1),fontsize=12)
axarr[i][1].axis('off')
x = np.expand_dims(batchX[imgIndex], axis=0)
preds = model2.predict(x)
class_idx = np.argmax(preds[0])
predictedClass2 = class_names[class_idx]
axarr[i][2].imshow(getGradCamImage(model2,'conv2d_9',batchX,imgIndex,test_features2))
axarr[i][2].set_title('With cutout- Predicted:{}'.format(predictedClass2),fontsize=12)
axarr[i][2].axis('off')
f.subplots_adjust(hspace=0.2)
f.set_size_inches(13,13)
# + colab_type="code" outputId="00fd7157-eb39-4d1e-ce55-a8c8f02c2442" id="0ahST06pZd9n" colab={"base_uri": "https://localhost:8080/", "height": 1000}
# Randomly select an image Id apply gradCam on this for both the models (with and without cutout)
# show original and superimposed images side-by-side. Repeat this "numImages" times
class_names = ['airplane','automobile','bird','cat','deer',
'dog','frog','horse','ship','truck']
numImages = 10
# Generate 'numImages'rows by 3 cols sized image grid
f, axarr = plt.subplots(numImages,3)
for i in range(numImages):
# get a random integer as image id
imgIndex = np.random.randint(0,test_features2.shape[0])
actualClass = class_names[test_labels2[imgIndex][0]]
axarr[i][0].imshow(test_features2[imgIndex])
axarr[i][0].set_title('Actual:{}'.format(actualClass),fontsize=12)
axarr[i][0].axis('off')
x = np.expand_dims(batchX[imgIndex], axis=0)
preds = model1.predict(x)
class_idx = np.argmax(preds[0])
predictedClass1 = class_names[class_idx]
axarr[i][1].imshow(getGradCamImage(model1,'conv2d_9',batchX,imgIndex,test_features2))
axarr[i][1].set_title('No cutout- Predicted:{}'.format(predictedClass1),fontsize=12)
axarr[i][1].axis('off')
x = np.expand_dims(batchX[imgIndex], axis=0)
preds = model2.predict(x)
class_idx = np.argmax(preds[0])
predictedClass2 = class_names[class_idx]
axarr[i][2].imshow(getGradCamImage(model2,'conv2d_9',batchX,imgIndex,test_features2))
axarr[i][2].set_title('With cutout- Predicted:{}'.format(predictedClass2),fontsize=12)
axarr[i][2].axis('off')
f.subplots_adjust(hspace=0.2)
f.set_size_inches(13,30)
| 29,093 |
/Task_3/Exploratory Data Analysis - Retail (Level - Beginner).ipynb
|
60c8894145ab410fa03510d9260e49882c72595e
|
[] |
no_license
|
ShubhamSalokhe/The_Sparks_Foundation_Tasks
|
https://github.com/ShubhamSalokhe/The_Sparks_Foundation_Tasks
| 0 | 0 | null | null | null | null |
Jupyter Notebook
| false | false |
.py
| 9,186,592 |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.15.2
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + active=""
# Project
# 1. Prepare the data
# 2. Build the model
# 3. Train the model
# 4. Analyze the model’s results
# + active=""
# ETL Process
# - Extract data from a data source.
# - Transform data into a desirable format.
# - Load data into a suitable structure.
# -
import torch
import torchvision
import torchvision.transforms as transforms
# ##### PyTorch Dataset Class
# root is the directory where the data is stored
train_set = torchvision.datasets.FashionMNIST(
root='./data'
,train=True
,download=True
,transform=transforms.Compose([
transforms.ToTensor()
])
)
# Check the size of the dataset
len(train_set)
# Check the labels of the dataset
train_set.targets
# Check how many samples there are of each label
train_set.targets.bincount()
# Process the data (take one sample)
sample = next(iter(train_set))
len(sample)
image, label = sample
type(image)
type(label)
image.shape
torch.tensor(label).shape
image.squeeze().shape
import matplotlib.pyplot as plt
import numpy as np
plt.imshow(image.squeeze(), cmap="gray")
torch.tensor(label)
# ##### PyTorch DataLoader Class
train_loader = torch.utils.data.DataLoader(train_set
,batch_size=1000
,shuffle=True
)
display_loader = torch.utils.data.DataLoader(
train_set, batch_size=10
)
batch = next(iter(display_loader))
print('len:', len(batch))
images, labels = batch
print('types:', type(images), type(labels))
print('shapes:', images.shape, labels.shape)
grid = torchvision.utils.make_grid(images, nrow=10)
plt.figure(figsize=(15,15))
plt.imshow(np.transpose(grid, (1,2,0)))
print('labels:', labels)
grid = torchvision.utils.make_grid(images, nrow=10)
plt.figure(figsize=(15,15))
plt.imshow(grid.permute(1,2,0))
print('labels:', labels)
# +
how_many_to_plot = 20
train_loader = torch.utils.data.DataLoader(
train_set, batch_size=1, shuffle=True
)
plt.figure(figsize=(50,50))
for i, batch in enumerate(train_loader, start=1):
image, label = batch
plt.subplot(10,10,i)
plt.imshow(image.reshape(28,28), cmap='gray')
plt.axis('off')
plt.title(train_set.classes[label.item()], fontsize=28)
if (i >= how_many_to_plot): break
plt.show()
# # Exploratory Data Analysis
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
import matplotlib as mlp
from plotnine import *
import warnings
warnings.filterwarnings('ignore')
import matplotlib.pyplot as plt
# ### Step:2 load the dataset
# importing data
data = pd.read_csv(r'E:\Data science Project\The_Spark_Foundation_Task\Task_3\SampleSuperstore.csv')
data.head()
data.shape
data.describe()
# The first thing to notice is that we have some negative values in our dataset. This could mean that these quantities (with their corresponding Profit values) were returned or cancelled.
data.columns
# ### Step:3 Explore our dataset first by getting more information about rows and columns.
data.info()
# Note: The number of NA customers is 0%, so that wouldn't impact our result.
#
# Now, let's have an idea about the quantitative data (Profit).
data.State.nunique()
data.nunique()
data.isnull().sum() #Checking the missing values
# ### Step:5 Now we are going to check for duplicate data
data.duplicated().sum()
data = data.drop_duplicates()  # keep the de-duplicated frame
# ### Step:6 Dropping irrelevant columns
data.drop(['Postal Code'],axis= 1,inplace= True)
data
plt.figure(figsize=(10,5))
correlation = data.corr()
sns.heatmap(correlation,annot=True)
# ### Step:7 Checking the relation between the various columns
data.corr() #Correlation between variables
data.cov() #Covariance between variables
# The figure above shows that Profit and Sales have a strong correlation, so we analyse both of these terms.
# ### Step:8 Data visualization
# Line Plot
mlp.rcParams['figure.dpi'] = 300
plt.figure(figsize = (30,15))
plt.plot(data['Profit']) #
plt.figure(figsize=(10,5))
sns.lineplot('Discount','Profit',data=data,)
# Distribution Plot
# +
#also, we can show that data by using
#    data.hist(bins=50, figsize = (20,15))
#    plt.show()
#but for better visualization we use seaborn
fig, axs = plt.subplots(ncols=2, nrows=2,figsize=(10,10))
sns.distplot(data['Sales'], color = 'Green', ax=axs[0][0])
sns.distplot(data['Profit'], color = 'Red', ax=axs[0][1])
sns.distplot(data['Quantity'], color = 'Blue', ax=axs[1][0])
sns.distplot(data['Discount'], color = 'Orange', ax=axs[1][1])
axs[0][0].set_title('Sales Distribution', fontsize = 20 )
axs[0][1].set_title('Profit Distribution', fontsize = 20 )
axs[1][0].set_title('Quantity Distribution', fontsize = 20 )
axs[1][1].set_title('Discount Distribution', fontsize = 20 )
plt.show()
# -
# So from these histograms we can see that the data is not normally distributed.
# ### State wise analysis of Profit,Discount,Sale
#Count the repeatable states
data['State'].value_counts()
data_state = data.groupby(['State'])[['Sales', 'Discount', 'Profit']].mean()
data_state
#visualizing the data
plt.figure(figsize=(15,15))
sns.countplot(x = data['State'])
plt.title('STATE')
plt.xticks(rotation=90)
plt.show()
# California has the highest number of entries
# +
profit_plot = (ggplot(data,aes( x='Category', y='Profit', fill= 'Sub-Category'))
+ geom_col()
+ coord_flip()
+ scale_fill_brewer(type = 'div',palette= 'Spectral')
+ theme_classic()
+ ggtitle('pie chart'))
display(profit_plot)
# -
# <b>The above pie chart shows the profit and loss of each category.</b>
# >From the graph we can see that the "Furniture" category has suffered the highest loss and also the highest profit amongst all categories (for now we can't say what the reason is; it may be because of the discounts given on the Binders sub-category).
# ><br><br>Next, the Technology category has gained the highest profit with little loss. There are other categories too that haven't faced any losses, but their profit margins are also low.
profit_plot = (ggplot(data, aes(x= 'Sub-Category', y='Profit', fill = 'Sub-Category'))
+ geom_col()
+ coord_flip()
+ scale_fill_brewer(type = 'div',palette = 'Spectral')
+ theme_classic()
+ ggtitle('Pie Chart'))
display(profit_plot)
# <b>The above pie chart shows the profit and loss of each sub-category.</b>
# >From the graph we can see that the "Binders" sub-category has suffered the highest loss and also the highest profit amongst all sub-categories (for now we can't say what the reason is; it may be because of the discounts given on the Binders sub-category).
# <br><br>Next, the "Copiers" sub-category has gained the highest profit with no loss. There are other sub-categories too that haven't faced any losses, but their profit margins are also low.
# <br><br>Next, the sub-category suffering the highest loss is Machines.
sns.set(style = "whitegrid")
plt.figure(2, figsize = (20,15))
sns.barplot(x = 'Sub-Category' , y = 'Profit' , data = data , palette = "Spectral")
plt.title("Pie consumption Ptterns in the US", fontsize = 30)
ggplot(data, aes(x='Ship Mode', fill ='Category')) + geom_bar(stat='count')
mlp.rcParams['figure.dpi'] = 300
sns.pairplot(data, hue='Sub-Category')
figsize =(15,10)
plt.show()
# From the above plot we can say that our <b>data is not normal and it has some outliers too</b>. Let's explore these outliers further using boxplots; first we'll check sales from every segment of the whole data.
flip_xlabels = theme(axis_text_x = element_text(angle=90, hjust=1),
figure_size=(10,5),
axis_ticks_length_major=10,
axis_ticks_length_minor=5)
(ggplot(data, aes(x='Sub-Category', fill='Sales'))
+ geom_bar()
+ facet_wrap(['Segment'])
+ flip_xlabels +theme(axis_text_x = element_text(size=12))
+ ggtitle("Sales From Every Segment Of United States of Whole Data"))
# From the above graph we can say that the <b>"Home Office" segment has purchased fewer sub-categories, and within it "Tables", "Supplies", "Machines", "Copiers" and "Bookcases" have the lowest sales</b>. "Consumer" has purchased more sub-categories compared to the other segments.
flip_xlabels = theme(axis_text_x = element_text(angle=90, hjust=1),
figure_size=(10,5),
axis_ticks_length_major=10,
axis_ticks_length_minor=5)
(ggplot(data, aes(x='Sub-Category', fill='Discount'))
+ geom_bar()
+ facet_wrap(['Segment'])
+ flip_xlabels
+ theme(axis_text_x = element_text(size=12))
+ ggtitle("Discount on Categories From Every Segment Of United States of Whole Data"))
flip_xlabels = theme(axis_text_x = element_text(angle=90, hjust=10),
figure_size=(10,10),
axis_ticks_length_major=50,
axis_ticks_length_minor=50)
(ggplot(data, aes(x='Category', fill='Sales'))
+ geom_bar()
+ theme(axis_text_x = element_text(size=10))
+ facet_wrap(['Region'])
+ flip_xlabels
+ ggtitle("Sales From Every Region Of United States of Whole Data"))
# The above graph shows the region-wise analysis of sales per category.
#line plot
plt.figure(figsize=(10,4))
sns.lineplot('Discount','Profit', data=data , color='y',label='Discount')
plt.legend()
plt.show()
import plotly.express as px
import plotly.graph_objects as go
from plotly.subplots import make_subplots
state_code = {'Alabama': 'AL','Alaska': 'AK','Arizona': 'AZ','Arkansas': 'AR',
'California': 'CA','Colorado': 'CO','Connecticut': 'CT','Delaware': 'DE','Florida': 'FL',
'Georgia': 'GA','Hawaii': 'HI','Idaho': 'ID','Illinois': 'IL',
'Indiana': 'IN','Iowa': 'IA','Kansas': 'KS','Kentucky': 'KY','Louisiana': 'LA','Maine': 'ME','Maryland': 'MD',
'Massachusetts': 'MA','Michigan': 'MI','Minnesota': 'MN','Mississippi': 'MS','Missouri': 'MO','Montana': 'MT',
'Nebraska': 'NE','Nevada': 'NV','New Hampshire': 'NH','New Jersey': 'NJ','New Mexico': 'NM',
'New York': 'NY','North Carolina': 'NC','North Dakota': 'ND','Ohio': 'OH','Oklahoma': 'OK',
'Oregon': 'OR','Pennsylvania': 'PA','Rhode Island': 'RI',
'South Carolina': 'SC','South Dakota': 'SD','Tennessee': 'TN',
'Texas': 'TX','Utah': 'UT','Vermont': 'VT','Virginia': 'VA',
'District of Columbia': 'WA','Washington': 'WA','West Virginia': 'WV','Wisconsin': 'WI','Wyoming': 'WY'}
data['state_code'] = data.State.apply(lambda x: state_code[x])
# +
state_data = data[['Sales', 'Profit', 'state_code']].groupby(['state_code']).sum()
fig = go.Figure(data=go.Choropleth(
locations=state_data.index,
z = state_data.Sales,
locationmode = 'USA-states',
colorscale = 'Reds',
colorbar_title = 'Sales in USD',
))
fig.update_layout(
title_text = 'Total State-Wise Sales',
geo_scope='usa',
height=800,
)
fig.show()
# -
# <b>Now, let us analyze the sales of a few random states from each profit bracket (high profit, medium profit, low profit, low loss and high loss) and try to observe some crucial trends which might help us in increasing the sales.</b>
#
# We have a few questions to answer here.
#
# 1. What products do the most profit making states buy?
#
# 2. What products do the loss bearing states buy?
#
# 3. What product segment needs to be improved in order to drive the profits higher?
def state_data_viewer(states):
"""Plots the turnover generated by different product categories and sub-categories for the list of given states.
Args:
states- List of all the states you want the plots for
Returns:
None
"""
product_data = data.groupby(['State'])
for state in states:
df = product_data.get_group(state).groupby(['Category'])
fig, ax = plt.subplots(1, 3, figsize = (28,5))
fig.suptitle(state, fontsize=14)
ax_index = 0
for cat in ['Furniture', 'Office Supplies', 'Technology']:
cat_data = df.get_group(cat).groupby(['Sub-Category']).sum()
sns.barplot(x = cat_data.Profit, y = cat_data.index, ax = ax[ax_index])
ax[ax_index].set_ylabel(cat)
ax_index +=1
fig.show()
mlp.rcParams['figure.dpi'] = 500
states = ['California', 'Washington', 'Mississippi', 'Arizona', 'Texas']
state_data_viewer(states)
# From the above data visualizations, we can see the states and categories where sales and profits are high or low. We can improve in those states by providing discounts in a preferred range so that both the company and the consumer profit. While the superstore incurs losses by providing discounts on its products, it can't stop doing so: most of the heavy discounts happen during festivals, end-of-season and clearance sales, which are necessary so that the store can make space in its warehouses for fresh stock. Also, by incurring small losses, the company gains in the future by attracting more long-term customers. Therefore, the small losses from discounts are an essential part of the company's business.
| 13,300 |
/算法练习/算法练习.ipynb
|
e2dddb0edcf56d25346a85fccaace017937a1c40
|
[] |
no_license
|
lqleon1214/data_analysis_test
|
https://github.com/lqleon1214/data_analysis_test
| 0 | 0 | null | null | null | null |
Jupyter Notebook
| false | false |
.py
| 1,679 |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.15.2
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Convolution operator
# For image processing, using only fully connected networks (dense networks) has several problems.
#
# 1. Do we really need to model every relationship between features? The top-left and bottom-right pixels of a photo may be completely unrelated, so these two inputs do not need weights feeding into the same hidden-layer node. Dense networks therefore produce a large number of redundant weights, which increases computational cost and training time.
# 2. Too many weights and biases can lead to overfitting.
#
# Therefore, each hidden-layer node only needs to connect to a few nodes (neighbouring pixels) of the previous layer. The most widely used architecture built on this idea is the Convolutional Neural Network (CNN).
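#
# A quick illustrative comparison of parameter counts (shapes chosen just for illustration):
# +
import torch.nn as nn

dense = nn.Linear(3 * 32 * 32, 100)       # fully connected layer on a flattened 32x32 RGB image
conv = nn.Conv2d(3, 100, kernel_size=3)   # 100 kernels of size 3x3 over the same image
count = lambda m: sum(p.numel() for p in m.parameters())
print(count(dense), count(conv))          # roughly 307k parameters vs 2.8k
# -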
# ## Implementing CNN in PyTorch
#
# There are two ways to do it:
# 1. Object-oriented: build a network class (using torch.nn) (left figure below)
# 2. Functional (using torch.nn.functional) (right figure below)
#
# 
# ### How the two styles differ in code
#
# **Using a single kernel**
# 
#
# **Using five kernels**
# 
#
# in_channels = 3 means the input image has three channels (R, G, B). <br/>
# out_channels = 1 means only one feature map is produced, i.e. only one kernel is used. <br/>
# out_channels = 5 means five feature maps are produced, i.e. five kernels are used. <br/>
# stride is the step size with which the kernel moves across the image. <br/>
# padding controls whether the kernel may extend past the image border, which keeps the resulting feature maps from shrinking.
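#
# A small sketch of both styles (input shape chosen just for illustration):
# +
import torch
import torch.nn as nn
import torch.nn.functional as F

im = torch.rand(16, 3, 32, 32)     # minibatch of 16 RGB images, 32 x 32 pixels

# object-oriented style: 5 kernels, stride 1, padding 1 keeps the 32 x 32 resolution
conv_layer = nn.Conv2d(in_channels=3, out_channels=5, kernel_size=3, stride=1, padding=1)
print(conv_layer(im).shape)        # torch.Size([16, 5, 32, 32])

# functional style: same operation, with an explicit weight tensor of shape (out, in, kH, kW)
filters = torch.rand(5, 3, 3, 3)
print(F.conv2d(im, filters, stride=1, padding=1).shape)   # torch.Size([16, 5, 32, 32])
# -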
# # Pooling operators
# 
#
# As shown above, given a set of 64 feature maps produced by the kernels, **pooling** reduces the resolution of each feature map (typically halving both height and width).
#
# Pooling makes training more efficient and also makes the model less prone to overfitting (fewer parameters).
#
# ---
#
# Two kinds of pooling are commonly used:
# 1. Max pooling
#
# 
#
# 2. Average pooling
#
# 
# ## Max-pooling in PyTorch
#
# Again there are two ways:
# 1. Object-oriented: build a network class (using torch.nn) (left figure below)
# 2. Functional (using torch.nn.functional) (right figure below)
#
# 
#
# The tensor is wrapped in four pairs of brackets, meaning the image has four dimensions (minibatch size, depth, height, width).
#
# **torch.nn.MaxPool2d(2)** and **F.max_pool2d(im, 2)** split the whole image into 2 * 2 cells and take the max of each cell to perform the pooling.
# ## Average-pooling in PyTorch
#
# Again there are two ways:
# 1. Object-oriented: build a network class (using torch.nn) (left figure below)
# 2. Functional (using torch.nn.functional) (right figure below)
#
# 
# # Convolutional Neural Networks
#
# **The AlexNet architecture**
#
# 
#
# CNNs are simply neural networks that contain many convolutional kernels, pooling layers, and a few dense layers. The network above has 5 convolutional layers, 3 max-pooling layers, 1 average-pooling layer, and finally 3 fully connected layers, and it can classify images into 1000 different classes.
# ## Building the AlexNet architecture in PyTorch
#
# 
#
# 256 * 6 * 6 means the input to the classifier has 256 channels, each with a resolution of 6 * 6.
#
# 
# ### Hands-on
#
# The number of channels in the convolutional layers is not fixed (it is a hyperparameter), so different values can be tried.
# +
# import
import torch
import torch.nn as nn
# define the CNN model
class Net(nn.Module):
def __init__(self, num_classes):
super(Net, self).__init__()
# Instantiate the ReLU nonlinearity
self.relu = nn.ReLU()
# Instantiate two convolutional layers
self.conv1 = nn.Conv2d(in_channels=1, out_channels=5, kernel_size=3, padding=1)
self.conv2 = nn.Conv2d(in_channels=5, out_channels=10, kernel_size=3, padding=1)
# Instantiate a max pooling layer
self.pool = nn.MaxPool2d(2, 2)
# Instantiate a fully connected layer
self.fc = nn.Linear(7 * 7 * 10, 10)
def forward(self, x):
# Apply conv followd by relu, then in next line pool
x = self.relu(self.conv1(x))
x = self.pool(x)
# Apply conv followd by relu, then in next line pool
x = self.relu(self.conv2(x))
x = self.pool(x)
# Prepare the image for the fully connected layer
x = x.view(-1, 7 * 7 * 10)
# Apply the fully connected layer and return the result
return self.fc(x)
# -
# # Training Convolutional Neural Networks (CIFAR-10)
# import
import torch
import torchvision # a package which deals with datasets and pre-trained neural networks
import torchvision.transforms as transforms
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
# ## Create training dataloader and testing dataloader
# +
# define a transformation of images to torch tensors using torchvision transforms
# when the data is downloaded, this predefined transform object converts it directly
transformCIFAR = transforms.Compose(
[transforms.ToTensor(), # define a transformation of images object
     transforms.Normalize((0.4914, 0.48216, 0.44653),    # pre-computed means of R, G, B, used to normalise the data
                          (0.24703, 0.24349, 0.26159))]  # pre-computed standard deviations of R, G, B, used to normalise the data
)
# download the CIFAR-10 datasets
training_setCIFAR = torchvision.datasets.CIFAR10(root = "./Datasets/CIFAR-10", train = True, download = True, transform = transformCIFAR)
testing_setCIFAR = torchvision.datasets.CIFAR10(root = "./Datasets/CIFAR-10", train = False, download = True, transform = transformCIFAR)
# load the CIFAR-10 datasets
trainloaderCIFAR = torch.utils.data.DataLoader(training_setCIFAR, batch_size = 32, shuffle = True, num_workers = 4)
testloaderCIFAR = torch.utils.data.DataLoader(testing_setCIFAR, batch_size = 32, shuffle = False, num_workers = 4)
# -
# ## Define the model
# define the CNN model
class CIFAR_Net(nn.Module):
def __init__(self, num_classes = 10):
super(CIFAR_Net, self).__init__()
# Instantiate the ReLU nonlinearity
self.relu = nn.ReLU()
# Instantiate two convolutional layers
self.conv1 = nn.Conv2d(in_channels=3, out_channels=32, kernel_size=3, padding=1)
self.conv2 = nn.Conv2d(in_channels=32, out_channels=64, kernel_size=3, padding=1)
self.conv3 = nn.Conv2d(in_channels=64, out_channels=128, kernel_size=3, padding=1)
# Instantiate a max pooling layer
        self.pool = nn.MaxPool2d(2, 2) # pooling is applied after every convolutional layer (so it is used 3 times)
# Instantiate a fully connected layer
        self.fc = nn.Linear(4 * 4 * 128, num_classes) # after pooling 3 times the resolution becomes 4 * 4 (32 / 2 --> 16 / 2 --> 8 / 2 --> 4)
def forward(self, x):
# Apply conv followd by relu, then in next line pool
x = self.relu(self.conv1(x))
x = self.pool(x)
# Apply conv followd by relu, then in next line pool
x = self.relu(self.conv2(x))
x = self.pool(x)
# Apply conv followd by relu, then in next line pool
x = self.relu(self.conv3(x))
x = self.pool(x)
# Prepare the image for the fully connected layer
x = x.view(-1, 4 * 4 * 128)
# Apply the fully connected layer and return the result
return self.fc(x)
# ## Define the optimizer and loss function
# +
# build the model
net = CIFAR_Net()
# instantiate the cross entropy loss function
criterion = nn.CrossEntropyLoss()
# set up the optimizer
optimizer = optim.Adam(net.parameters(), lr = 3e-4)
# -
# ## Training
# +
for epoch in range(10):  # the whole dataset is iterated over 10 times (10 epochs)
    for i, data in enumerate(trainloaderCIFAR, start = 0): # weights are updated after every batch (i is the batch number, data is the batch of samples)
        # get the inputs
        inputs, labels = data
        # zero the optimizer's gradients so the previous iteration's gradients have no influence; gradients for every weight update are computed from scratch
        optimizer.zero_grad()
        # Forward propagation (compute the output)
        outputs = net(inputs)
        # optimisation
        loss = criterion(outputs, labels)
        # Backpropagation
        loss.backward()   # compute the gradients of the loss with respect to each weight
        # update the weights using the computed gradients
        optimizer.step()
print("Finish Training!")
# -
# ## Evaluate the result
# +
correct, total = 0, 0
predictions = []
# put the net object into evaluation mode
net.eval()
for i, data in enumerate(testloaderCIFAR, start = 0):
    # get the testing data
    inputs, labels = data
    # predict (compute the output scores for each class)
    outputs = net(inputs)
    # convert the output scores into classes (the class with the highest score wins)
    _, predicted = torch.max(outputs.data, 1)
    # collect the raw outputs in a list
    predictions.append(outputs)
    # count the total number of samples
    total += labels.size(0)
    # count the number of correct predictions
    correct += (predicted == labels).sum().item()
# print the test accuracy
print("The CIFAR-10 testing set accuracy of the network is: %d %%" % (100 * correct / total))
# -
# ---
# # Training Convolutional Neural Networks (MNIST)
# import
import torch
import torchvision # a package which deals with datasets and pre-trained neural networks
import torchvision.transforms as transforms
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
# ## Create training dataloader and testing dataloader
# +
# Transform the data to torch tensors and normalize it
transformMNIST = transforms.Compose([transforms.ToTensor(),
                                transforms.Normalize((0.1307), ((0.3081)))]) # MNIST is grayscale, so unlike RGB there is only one mean and one standard deviation: mean 0.1307, std 0.3081
# Prepare training set and testing set
trainsetMNIST = torchvision.datasets.MNIST(root = "./Datasets/MNIST", train=True,
download=True, transform=transformMNIST)
testsetMNIST = torchvision.datasets.MNIST(root = "./Datasets/MNIST", train=False,
download=True, transform=transformMNIST)
# Prepare training loader and testing loader
trainloaderMNIST = torch.utils.data.DataLoader(trainsetMNIST, batch_size = 32,
shuffle = True, num_workers=8)
testloaderMNIST = torch.utils.data.DataLoader(testsetMNIST, batch_size = 32,
shuffle = False, num_workers=8)
# -
# ## Define the model
# define the CNN model
class MNIST_Net(nn.Module):
def __init__(self, num_classes = 10):
super(MNIST_Net, self).__init__()
# Instantiate the ReLU nonlinearity
self.relu = nn.ReLU()
# Instantiate two convolutional layers
self.conv1 = nn.Conv2d(in_channels=1, out_channels=32, kernel_size=3, padding=1)
self.conv2 = nn.Conv2d(in_channels=32, out_channels=64, kernel_size=3, padding=1)
# Instantiate a max pooling layer
        self.pool = nn.MaxPool2d(2, 2) # pooling is applied after every convolutional layer (so it is used 2 times)
        # Instantiate a fully connected layer
        self.fc = nn.Linear(7 * 7 * 64, num_classes) # after pooling 2 times the resolution becomes 7 * 7 (28 / 2 --> 14 / 2 --> 7)
def forward(self, x):
# Apply conv followd by relu, then in next line pool
x = self.relu(self.conv1(x))
x = self.pool(x)
# Apply conv followd by relu, then in next line pool
x = self.relu(self.conv2(x))
x = self.pool(x)
# Prepare the image for the fully connected layer
x = x.view(-1, 7 * 7 * 64)
# Apply the fully connected layer and return the result
return self.fc(x)
# ## Define the optimizer and loss function
# +
# build the model
net = MNIST_Net()
# instantiate the cross entropy loss function
criterion = nn.CrossEntropyLoss()
# set up the optimizer
optimizer = optim.Adam(net.parameters(), lr = 3e-4)
# -
# ## Training
# +
for epoch in range(10):  # the whole dataset is iterated over 10 times (10 epochs)
    for i, data in enumerate(trainloaderMNIST, start = 0): # weights are updated after every batch (i is the batch number, data is the batch of samples)
        # get the inputs
        inputs, labels = data
        # zero the optimizer's gradients so the previous iteration's gradients have no influence; gradients for every weight update are computed from scratch
        optimizer.zero_grad()
        # Forward propagation (compute the output)
        outputs = net(inputs)
        # optimisation
        loss = criterion(outputs, labels)
        # Backpropagation
        loss.backward()   # compute the gradients of the loss with respect to each weight
        # update the weights using the computed gradients
        optimizer.step()
print("Finish Training!")
# -
# ## Evaluate the result
# +
correct, total = 0, 0
predictions = []
# put the net object into evaluation mode
net.eval()
for i, data in enumerate(testloaderMNIST, start = 0):
    # get the testing data
    inputs, labels = data
    # predict (compute the output scores for each class)
    outputs = net(inputs)
    # convert the output scores into classes (the class with the highest score wins)
    _, predicted = torch.max(outputs.data, 1)
    # collect the raw outputs in a list
    predictions.append(outputs)
    # count the total number of samples
    total += labels.size(0)
    # count the number of correct predictions
    correct += (predicted == labels).sum().item()
# print the test accuracy
print("The MNIST testing set accuracy of the network is: %d %%" % (100 * correct / total))
# -
# The convolutional neural network's predictions are far more accurate than those of fully connected networks.
| 12,687 |
/.local/share/Trash/files/Untitled3 1.ipynb
|
c2c7325db1e7c7a0a0bfb3a538a38e6049f1835d
|
[] |
no_license
|
kaibawong/differential
|
https://github.com/kaibawong/differential
| 0 | 0 | null | null | null | null |
Jupyter Notebook
| false | false |
.py
| 9,053 |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.15.2
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from sklearn.ensemble import RandomForestRegressor
import ipywidgets as widgets
from ipywidgets import interact
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler
from sklearn import linear_model
import statsmodels.api as sm
from ipywidgets import interactive
combi = pd.read_table('Historical_combi2.csv',delimiter =';')
import datetime
datetimes = [datetime.datetime.strptime(d, '%d.%m.%Y') for d in combi["Date"]]
df = pd.DataFrame(datetimes, columns=['date'])
df.loc[:, 'date'] = pd.to_datetime(df['date'], format='%Y-%m-%d')
combi = combi.assign(df=df['date'].values)
def regressionlinear(X,Y):
regr = linear_model.LinearRegression()
regr.fit(X, Y)
#print('Intercept: \n', regr.intercept_)
#print('Coefficients: \n', regr.coef_)
X = sm.add_constant(X)
model = sm.OLS(Y, X).fit()
predictions = model.predict(X)
print_model = model.summary()
#print(print_model)
return regr.intercept_,regr.coef_
def randomforest(x_train,y_train,x_test,y_test):
plt.rcParams['figure.dpi'] = 100
regressor = RandomForestRegressor(n_estimators=200, max_depth=5)
clf=regressor.fit(x_train, y_train)
y_pred=regressor.predict(x_test)
y_pred=pd.DataFrame(y_pred)
plt_train=plt.scatter(x_train.iloc[:,0],y_train, color='grey')
plt_test=plt.scatter(x_test.iloc[:,0],y_test, color='green')
plt_pred=plt.scatter(x_test.iloc[:,0], y_pred, color='black')
plt.xlabel("Dated Brent")
plt.ylabel("Bonny light")
plt.legend((plt_train, plt_test,plt_pred),("train data", "test data","prediction"))
plt.show()
print("Mean squared error: %.2f" % np.mean((regressor.predict(x_train) - y_train) ** 2))
import seaborn as sns
importances=regressor.feature_importances_
indices=list(x_train)
print("Feature ranking:")
for f in range(x_train.shape[1]):
print("Feature %s (%f)" % (indices[f], importances[f]))
f, (ax1) = plt.subplots(1, 1, figsize=(6, 4), sharex=True)
sns.barplot(indices, importances, palette="BrBG", ax=ax1)
ax1.set_ylabel("Importance")
ax1.set_xticklabels(
ax1.get_xticklabels(),
rotation=45,
horizontalalignment='right'
);
return regressor.predict
def NPV_Calc(DB,FO35,FO1,model):
if model == 'Bonny_light':
        # widget = interactive(NPV_Calc, DB=(0, 200))  # building the widget inside the function would recurse; see the usage sketch after this cell
plt.rcParams['figure.dpi'] = 100
rma60 = combi["Bonny light"].rolling(window=60).mean()
ema60 = combi["Bonny light"].ewm(span=60, adjust=False).mean()
plt.plot(combi["df"],combi["Bonny light"])
plt.plot(combi["df"],rma60)
plt.plot(combi["df"],ema60)
plt.legend(("Monthly","rma","ema"))
plt.show()
#Non linear model
nonlinear_BL = 0.0164924882990988*(DB) + 4.43302177278368e-5*np.power(DB,2) - 0.157317431833725
#print("Non.linear predicted price =[", nonlinear_BL,"]")
#Linear model
Y = combi[['Bonny light']].dropna()
K=list(Y.index.values)
K[0]
X = combi[['Dated Brent']].iloc[K[0]:]
#print(X)
#print(Y)
intercept_, coef_ = regressionlinear(X,Y)
linear_BL=intercept_+ coef_[0]*DB
#print("Linear predicted price =", linear_BL)
#Random forest
All = pd.concat([X,Y],axis=1,sort=False)
train = All.iloc[:,:]
test =All.iloc[-100:,:]
x_train=train["Dated Brent"].to_frame()
y_train=train["Bonny light"]
x_test=test["Dated Brent"].to_frame()
y_test=test["Bonny light"].to_frame()
F=randomforest(x_train,y_train,x_test,y_test)
invar = {'Dated Brent':[DB]}
invar_df = pd.DataFrame(invar)
y_pred = F(invar_df)
print("Linear predicted price =", linear_BL)
print("Non.linear predicted price =[", nonlinear_BL,"]")
print("Random forest price =",y_pred)
elif model == 'Urals_NWE':
plt.rcParams['figure.dpi'] = 100
rma60 = combi["Urals NWE"].rolling(window=60).mean()
ema60 = combi["Urals NWE"].ewm(span=60, adjust=False).mean()
plt.plot(combi["df"],combi["Urals NWE"])
plt.plot(combi["df"],rma60)
plt.plot(combi["df"],ema60)
plt.legend(("Monthly","rma","ema"))
plt.show()
#Non linear model
nonlinear_UralN = 0.243310947652501*(FO35) + 0.0327070285007665*(DB) + 0.000931100809264595*np.power(FO1,3) + 3.01672677408283e-5*np.power(FO1,4) - 0.771156577782479 - 0.00241982760220774*(DB)*(FO1) - 0.000191940652210639*np.power(DB,2)
#Linear model
Y = combi[['Urals NWE']].dropna()
K=list(Y.index.values)
K[0]
X = combi[['Dated Brent','FO 3.5%','FO 1%']].iloc[K[0]:]
intercept_, coef_ = regressionlinear(X,Y)
coef_ = coef_.reshape(-1)
linear_UralN=intercept_+ coef_[0]*DB + coef_[1]*FO35 + coef_[2]*FO1
#Random forest
All = pd.concat([X,Y],axis=1,sort=False)
train = All.iloc[-100:,:]
test =All.iloc[:-100,:]
x_train=train[["Dated Brent","FO 3.5%","FO 1%"]]
y_train=train["Urals NWE"]
x_test=test[["Dated Brent","FO 3.5%","FO 1%"]]
y_test=test["Urals NWE"].to_frame()
F=randomforest(x_train,y_train,x_test,y_test)
invar = {'Dated Brent':[DB], 'FO 3.5%':[FO35], 'FO 1%':[FO1]}
invar_df = pd.DataFrame(invar)
y_pred = F(invar_df)
print("Linear predicted price =", linear_UralN)
print("Non.linear predicted price =[", nonlinear_UralN,"]")
print("Random forest price =",y_pred)
else:
print('choose something')
return
# -
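# One possible way to hook NPV_Calc up to interactive sliders (a minimal sketch; the slider
# ranges below are illustrative assumptions, not values taken from the data):
# +
npv_widget = interactive(NPV_Calc,
                         DB=(0, 200),      # Dated Brent, assumed range
                         FO35=(-40, 40),   # FO 3.5% differential, assumed range
                         FO1=(-40, 40),    # FO 1% differential, assumed range
                         model=['Bonny_light', 'Urals_NWE'])
npv_widget
# -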
| 6,308 |
/MultivariateLinearRegression/Multivariate_Linear_Regression.ipynb
|
e6433d4deb9ab019eb2dc006bd91aae92784d87d
|
[
"MIT"
] |
permissive
|
Harsh188/100_Days_of_ML
|
https://github.com/Harsh188/100_Days_of_ML
| 8 | 1 | null | null | null | null |
Jupyter Notebook
| false | false |
.py
| 311,427 |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.15.2
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Notebook 3b: Using Random Forests to classify phases in the Ising Model
# ## Source
#
# https://github.com/Emergent-Behaviors-in-Biology/mlreview_notebooks
#
# ## Must read !!
# Before diving into this tutorial, please read: https://towardsdatascience.com/understanding-random-forest-58381e0602d2
#
# ## Learning Goal
#
# The goal of this notebook is to show how one can employ ensemble methods such as Random Forests to classify the states of the 2D Ising model according to their phases. We discuss concepts like decision trees, extreme decision trees, and out-of-bag error. The notebook also introduces the powerful scikit-learn `Ensemble` class.
#
#
# ## Setting up the problem (review)
#
# The Hamiltonian for the classical Ising model is given by
#
# $$ H = -J\sum_{\langle ij\rangle}S_{i}S_j,\qquad \qquad S_j\in\{\pm 1\} $$
#
# where the lattice site indices $i,j$ run over all nearest neighbors of a 2D square lattice of side $L$, and $J$ is some arbitrary interaction energy scale. We adopt periodic boundary conditions. Onsager proved that this model undergoes a phase transition in the thermodynamic limit from an ordered ferromagnet with all spins aligned to a disordered phase at the critical temperature $T_c/J=1/\log(1+\sqrt{2})\approx 2.26$. For any finite system size, this critical point is expanded to a critical region around $T_c$.
#
# We will use the same basic idea as we did for logistic regression. An interesting question to ask is whether one can train a statistical model to distinguish between the two phases of the Ising model. In other words, given an Ising state, we would like to classify whether it belongs to the ordered or the disordered phase, without any additional information other than the spin configuration itself. This categorical machine learning problem is well suited for ensemble methods and in particular Random Forests.
#
# To this end, we consider the 2D Ising model on a $40\times 40$ square lattice, and use Monte-Carlo (MC) sampling to prepare $10^4$ states at every fixed temperature $T$ out of a pre-defined set. Using Onsager's criterion, we can assign a label to each state according to its phase: $0$ if the state is disordered, and $1$ if it is ordered.
#
# It is well-known that, near the critical temperature $T_c$, the ferromagnetic correlation length diverges which, among others, leads to a critical slowing down of the MC algorithm. Therefore, we expect identifying the phases to be harder in the critical region. With this in mind, consider the following three types of states: ordered ($T/J<2.0$), critical ($2.0\leq T/J\leq 2.5)$ and disordered ($T/J>2.5$). We use both ordered and disordered states to train the random forest and, once the supervised training procedure is complete, we shall evaluate the performance of our classifier on unseen ordered, disordered and critical states.
#
# A link to the Ising dataset can be found at [https://physics.bu.edu/~pankajm/MLnotebooks.html](https://physics.bu.edu/~pankajm/MLnotebooks.html).
# +
import numpy as np
np.random.seed() # shuffle random seed generator
# Ising model parameters
L=40 # linear system size
J=-1.0 # Ising interaction
T=np.linspace(0.25,4.0,16) # set of temperatures
T_c=2.26 # Onsager critical temperature in the TD limit
# +
import pickle, os
from urllib.request import urlopen
# path to data directory (for testing)
#path_to_data=os.path.expanduser('~')+'/Dropbox/MachineLearningReview/Datasets/isingMC/'
def read_t(root="./"):
data = pickle.load(open(root+'Ising2DFM_reSample_L40_T=All.pkl','rb'))
return np.unpackbits(data).astype(int).reshape(-1,1600)
data = read_t('IsingData/')
data[np.where(data==0)]=-1 # map 0 state to -1 (Ising variable can take values +/-1)
#LABELS (convention is 1 for ordered states and 0 for disordered states)
labels = pickle.load(open('IsingData/Ising2DFM_reSample_L40_T=All_labels.pkl','rb'))
# +
###### define ML parameters
from sklearn.model_selection import train_test_split
train_to_test_ratio=0.8 # training samples
# divide data into ordered, critical and disordered
X_ordered=data[:70000,:]
Y_ordered=labels[:70000]
X_critical=data[70000:100000,:]
Y_critical=labels[70000:100000]
X_disordered=data[100000:,:]
Y_disordered=labels[100000:]
del data,labels
# define training and test data sets
X=np.concatenate((X_ordered,X_disordered))
Y=np.concatenate((Y_ordered,Y_disordered))
# pick random data points from ordered and disordered states
# to create the training and test sets
X_train,X_test,Y_train,Y_test=train_test_split(X,Y,train_size=train_to_test_ratio,test_size=1.0-train_to_test_ratio)
print('X_train shape:', X_train.shape)
print('Y_train shape:', Y_train.shape)
print()
print(X_train.shape[0], 'train samples')
print(X_critical.shape[0], 'critical samples')
print(X_test.shape[0], 'test samples')
# +
##### plot a few Ising states
# %matplotlib inline
#import ml_style as style
import matplotlib as mpl
import matplotlib.pyplot as plt
#mpl.rcParams.update(style.style)
from mpl_toolkits.axes_grid1 import make_axes_locatable
# set colourbar map
cmap_args=dict(cmap='plasma_r')
# plot states
fig, axarr = plt.subplots(nrows=1, ncols=3)
axarr[0].imshow(X_ordered[20001].reshape(L,L),**cmap_args)
#axarr[0].set_title('$\\mathrm{ordered\\ phase}$',fontsize=16)
axarr[0].set_title('ordered phase',fontsize=16)
axarr[0].tick_params(labelsize=16)
axarr[1].imshow(X_critical[10001].reshape(L,L),**cmap_args)
#axarr[1].set_title('$\\mathrm{critical\\ region}$',fontsize=16)
axarr[1].set_title('critical region',fontsize=16)
axarr[1].tick_params(labelsize=16)
im=axarr[2].imshow(X_disordered[50001].reshape(L,L),**cmap_args)
#axarr[2].set_title('$\\mathrm{disordered\\ phase}$',fontsize=16)
axarr[2].set_title('disordered phase',fontsize=16)
axarr[2].tick_params(labelsize=16)
fig.subplots_adjust(right=2.0)
plt.show()
# -
# ## Random Forests
#
# **Hyperparameters**
#
# We start by training with Random Forests. As discussed in Sec. VIII of the review, Random Forests are ensemble models. Here we will use the sci-kit learn implementation of random forests. There are two main hyper-parameters that will be important in practice for the performance of the algorithm and the degree to which it overfits/underfits: the number of estimators in the ensemble and the depth of the trees used. The former is controlled by the parameter `n_estimators` whereas the latter (the complexity of the trees used) can be controlled in many distinct ways (`min_samples_split`, `min_samples_leaf`, `min_impurity_decrease`, etc). For our simple dataset, it does not really make much difference which one of these we use. We will just use the `min_samples_split` parameter that dictates how many samples need to be in each node of the classification tree. The bigger this number, the more coarse our trees and data partitioning.
#
# In the code below, we will just consider extremely fine trees (`min_samples_split=2`) or extremely coarse trees (`min_samples_split=10000`). As we will see, both of these tree complexities are sufficient to distinguish the ordered from the disordered samples. The reason for this is that the ordered and disordered phases are distinguished by the magnetization order parameter which is an equally weighted sum of all features. However, if we want to train deep in these simple phases, and then use our algorithm to distinguish critical samples it is crucial we use more complex trees even though the performance on the disordered and ordered phases is indistinguishable for coarse and complex trees.
#
# **Out of Bag (OOB) Estimates**
#
# For more complicated datasets, how can we choose the right hyperparameters? We can actually make use of one of the most important and interesting features of ensemble methods that employ Bagging: out-of-bag (OOB) estimates. Whenever we bag data, since we are drawing samples with replacement, we can ask how well our classifiers do on data points that are *not used* in the training. This is the out-of-bag prediction error and plays a similar role to cross-validation error in other ML methods. Since this is the best proxy for out-of-sample prediction, we choose hyperparameters to minimize the out-of-bag error.
# +
# Apply Random Forest
#This is the random forest classifier
from sklearn.ensemble import RandomForestClassifier
#This is the extreme randomized trees
from sklearn.ensemble import ExtraTreesClassifier
# import time to see how performance depends on run time
import time
import warnings
# Comment the next line out to turn warnings back on
warnings.filterwarnings("ignore")
# We will check a range of ensemble sizes
min_estimators = 10
max_estimators = 101
classifer = RandomForestClassifier # below we will swap this for the extremely randomized forest
n_estimator_range=np.arange(min_estimators, max_estimators, 10)
leaf_size_list=[2,10000]
m=len(n_estimator_range)
n=len(leaf_size_list)
#Allocate Arrays for various quantities
RFC_OOB_accuracy=np.zeros((n,m))
RFC_train_accuracy=np.zeros((n,m))
RFC_test_accuracy=np.zeros((n,m))
RFC_critical_accuracy=np.zeros((n,m))
run_time=np.zeros((n,m))
print_flag=True
for i, leaf_size in enumerate(leaf_size_list):
# Define Random Forest Classifier
myRF_clf = classifer(
n_estimators=min_estimators,
max_depth=None,
        min_samples_split=leaf_size, # minimum number of samples required to split an internal node
oob_score=True,
random_state=0,
warm_start=True # this ensures that you add estimators without retraining everything
)
for j, n_estimator in enumerate(n_estimator_range):
print('n_estimators: %i, leaf_size: %i'%(n_estimator,leaf_size))
start_time = time.time()
myRF_clf.set_params(n_estimators=n_estimator)
myRF_clf.fit(X_train, Y_train)
run_time[i,j] = time.time() - start_time
# check accuracy
RFC_train_accuracy[i,j]=myRF_clf.score(X_train,Y_train)
RFC_OOB_accuracy[i,j]=myRF_clf.oob_score_
RFC_test_accuracy[i,j]=myRF_clf.score(X_test,Y_test)
RFC_critical_accuracy[i,j]=myRF_clf.score(X_critical,Y_critical)
if print_flag:
result = (run_time[i,j], RFC_train_accuracy[i,j], RFC_OOB_accuracy[i,j], RFC_test_accuracy[i,j], RFC_critical_accuracy[i,j])
print('{0:<15}{1:<15}{2:<15}{3:<15}{4:<15}'.format("time (s)","train score", "OOB estimate","test score", "critical score"))
print('{0:<15.4f}{1:<15.4f}{2:<15.4f}{3:<15.4f}{4:<15.4f}'.format(*result))
# +
plt.figure()
plt.plot(n_estimator_range,RFC_train_accuracy[1],'--b^',label='Train (coarse)')
plt.plot(n_estimator_range,RFC_test_accuracy[1],'--r^',label='Test (coarse)')
plt.plot(n_estimator_range,RFC_critical_accuracy[1],'--g^',label='Critical (coarse)')
plt.plot(n_estimator_range,RFC_train_accuracy[0],'o-b',label='Train (fine)')
plt.plot(n_estimator_range,RFC_test_accuracy[0],'o-r',label='Test (fine)')
plt.plot(n_estimator_range,RFC_critical_accuracy[0],'o-g',label='Critical (fine)')
#plt.semilogx(lmbdas,train_accuracy_SGD,'*--b',label='SGD train')
plt.xlabel('$N_\mathrm{estimators}$')
plt.ylabel('Accuracy')
lgd=plt.legend(bbox_to_anchor=(1.05, 1), loc=2, borderaxespad=0.)
plt.savefig("Ising_RF.pdf",bbox_extra_artists=(lgd,), bbox_inches='tight')
plt.show()
plt.plot(n_estimator_range, run_time[1], '--k^',label='Coarse')
plt.plot(n_estimator_range, run_time[0], 'o-k',label='Fine')
plt.xlabel('$N_\mathrm{estimators}$')
plt.ylabel('Run time (s)')
plt.legend(loc=2)
#plt.savefig("Ising_RF_Runtime.pdf")
plt.show()
# -
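# Since the OOB estimate is our proxy for out-of-sample error, a simple way to pick the
# hyperparameters from the scan above is to take the (leaf size, ensemble size) pair that
# maximizes the OOB accuracy (a short sketch reusing the arrays filled in the previous cells):
# +
best_idx = np.unravel_index(np.argmax(RFC_OOB_accuracy), RFC_OOB_accuracy.shape)
print("best min_samples_split = %i, best n_estimators = %i, OOB accuracy = %.4f" % (
    leaf_size_list[best_idx[0]], n_estimator_range[best_idx[1]], RFC_OOB_accuracy[best_idx]))
# -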
# ## Extremely Randomized Trees##
#
# As discussed in the main text, the effectiveness of ensemble methods generally increases as the correlations between members of the ensemble decrease. This idea has been leveraged to make methods that introduce even more randomness into the ensemble by randomly choosing features to split on as well as randomly choosing thresholds to split on. See Section 4.3 of Louppe 2014 [arxiv:1407.7502](https://arxiv.org/pdf/1407.7502.pdf).
#
# Here we will make use of the scikit-learn class `ExtraTreesClassifier` and we will just rerun what we did above. Since there is extra randomization compared to random forests, one can imagine that the performance on the critical samples will be much worse. Indeed, this is the case.
# +
#This is the extreme randomized trees
from sklearn.ensemble import ExtraTreesClassifier
# import time to see how performance depends on run time
import time
import warnings
# Comment the next line out to turn warnings back on
warnings.filterwarnings("ignore")
# We will check a range of ensemble sizes
min_estimators = 10
max_estimators = 101
classifer = ExtraTreesClassifier # only changing this
n_estimator_range=np.arange(min_estimators, max_estimators, 10)
leaf_size_list=[2,10000]
m=len(n_estimator_range)
n=len(leaf_size_list)
#Allocate Arrays for various quantities
ETC_OOB_accuracy=np.zeros((n,m))
ETC_train_accuracy=np.zeros((n,m))
ETC_test_accuracy=np.zeros((n,m))
ETC_critical_accuracy=np.zeros((n,m))
run_time=np.zeros((n,m))
print_flag=True
for i, leaf_size in enumerate(leaf_size_list):
# Define Random Forest Classifier
myRF_clf = classifer(
n_estimators=min_estimators,
max_depth=None,
        min_samples_split=leaf_size, # minimum number of samples required to split an internal node
oob_score=True,
bootstrap=True,
random_state=0,
warm_start=True # this ensures that you add estimators without retraining everything
)
for j, n_estimator in enumerate(n_estimator_range):
print('n_estimators: %i, leaf_size: %i'%(n_estimator,leaf_size))
start_time = time.time()
myRF_clf.set_params(n_estimators=n_estimator)
myRF_clf.fit(X_train, Y_train)
run_time[i,j] = time.time() - start_time
# check accuracy
ETC_train_accuracy[i,j]=myRF_clf.score(X_train,Y_train)
ETC_OOB_accuracy[i,j]=myRF_clf.oob_score_
ETC_test_accuracy[i,j]=myRF_clf.score(X_test,Y_test)
ETC_critical_accuracy[i,j]=myRF_clf.score(X_critical,Y_critical)
if print_flag:
result = (run_time[i,j], ETC_train_accuracy[i,j], ETC_OOB_accuracy[i,j], ETC_test_accuracy[i,j], ETC_critical_accuracy[i,j])
print('{0:<15}{1:<15}{2:<15}{3:<15}{4:<15}'.format("time (s)","train score", "OOB estimate","test score", "critical score"))
print('{0:<15.4f}{1:<15.4f}{2:<15.4f}{3:<15.4f}{4:<15.4f}'.format(*result))
# +
plt.figure()
plt.plot(n_estimator_range,ETC_train_accuracy[1],'--b^',label='Train (coarse)')
plt.plot(n_estimator_range,ETC_test_accuracy[1],'--r^',label='Test (coarse)')
plt.plot(n_estimator_range,ETC_critical_accuracy[1],'--g^',label='Critical (coarse)')
plt.plot(n_estimator_range,ETC_train_accuracy[0],'o-b',label='Train (fine)')
plt.plot(n_estimator_range,ETC_test_accuracy[0],'o-r',label='Test (fine)')
plt.plot(n_estimator_range,ETC_critical_accuracy[0],'o-g',label='Critical (fine)')
#plt.semilogx(lmbdas,train_accuracy_SGD,'*--b',label='SGD train')
plt.xlabel('$N_\mathrm{estimators}$')
plt.ylabel('Accuracy')
lgd=plt.legend(bbox_to_anchor=(1.05, 1), loc=2, borderaxespad=0.)
plt.savefig("Ising_RF.pdf",bbox_extra_artists=(lgd,), bbox_inches='tight')
plt.show()
plt.plot(n_estimator_range, run_time[1], '--k^',label='Coarse')
plt.plot(n_estimator_range, run_time[0], 'o-k',label='Fine')
plt.xlabel('$N_\mathrm{estimators}$')
plt.ylabel('Run time (s)')
plt.legend(loc=2)
#plt.savefig("Ising_ETC_Runtime.pdf")
plt.show()
# -
# ### Exercises: ###
# <ul>
#
# <li> [Random Forest] Consider $B$ random variables, each with variance $\sigma^2$. Show that **1)** If these random variables are i.i.d., then their average has a variance $\frac{1}{B}\sigma^2$. **2)** If they are only i.d. (i.e. identically distributed but not necessarily independent) with positive pairwise correlation $\rho$, then the variance of their average is $\rho\sigma^2+\frac{1-\rho}{B}\sigma^2$. In this case, what does $B\gg 1$ imply? Justify that *by random selection of input features, random forest improves the variance reduction of bagging by reducing the correlation between the trees without dramatic increase of variance*.
#
# <li> [Random Forest] Consider a random forest $G$ consisting of $K$ binary classification trees, $\{g_k| k=1,\cdots K\}$, where $K$ is an odd integer. Suppose each tree evaluates classification error based on binary counts (i.e. 0/1 error) with $E_{out}(g_k)=e_k$, prove or disprove that $\frac{2}{K+1}\sum_{k=1}^K e_k$ upper bounds $E_{out}(G)$.
#
#
# <li> [OOB] For a data set with $N$ samples, what's the probability that one of the samples, say $(\boldsymbol{x}_n,y_n)$, is never sampled after bootstrapping $N'$ times? Show that if $N'=N$ and $N\gg 1$, this probability is approximately $e^{-1}$. (A short derivation sketch follows this list.)
#
# <li> [OOB] Following the previous question, if $N'=pN$ and assuming $N\gg 1$, argue that $Ne^{-p}$ examples in the data set will never be sampled at all.
#
# <li> [OOB- programming] We argued that OOB is a good proxy for out-of-sample prediction due to bootstrapping. However, in practice OOB tends to give an overly pessimistic estimate. To explore this, use cross-validation instead of OOB and redo all the analysis. You may find [this tutorial](http://scikit-learn.org/stable/modules/cross_validation.html) on scikit-learn useful.
# </ul>
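# A sketch of the bootstrap calculation behind the OOB questions above: a fixed sample $(\boldsymbol{x}_n,y_n)$ is missed by a single draw with probability $1-\tfrac{1}{N}$, so after $N'=N$ independent draws
#
# $$ P(\mathrm{never\ sampled}) = \left(1 - \frac{1}{N}\right)^{N} \longrightarrow e^{-1}\approx 0.37 \quad (N\gg 1). $$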
| 17,596 |
/notebooks/ajustes_modelo/samples/Ejercicio - KNN - kfold.ipynb
|
2532fa143cf13d65fb5fd0912ad08bbbdcc64689
|
[] |
no_license
|
DhyegoNieto/acamica
|
https://github.com/DhyegoNieto/acamica
| 1 | 1 | null | null | null | null |
Jupyter Notebook
| false | false |
.py
| 15,944 |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.15.2
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# %pylab inline
from scipy import stats
import os,pickle
np.set_printoptions(suppress=True)
set_printoptions(suppress=True)
from scipy.special import digamma
#from analysis import printCI
DPATH=os.getcwd()+os.path.sep+'standata'+os.path.sep
from matusplotlib import errorbar
import pandas as pd
#import statsmodels.api as st
#import statsmodels.formula.api as sf
# +
# %pylab inline
from analysis import printCI,getMetadata,LBLS,plotSample,getData,plotSmResults
info,Dres=getMetadata()
#plotSample(info,Dres)
yLT,xAll=getData(info,Dres)
# -
# # Lognormal markov model no-pooling
plotSmResults('LNNP',yLT,xAll,zlu=[[0,2.5],[0,2.5],[-2,4],[-4,5],[-1,1],[0,3],[0,3],[0,3]])
# # Lognormal markov model partial-pooling
plotSmResults('LNMP',yLT,xAll,
zlu=[[0,2.5],[0,2.5],[1,2.25],[-0.3,0.5],[-0.06,-0.04],[0,3],[0,3],[0,3]])
# # Lognormal hidden markov model partial-pooling
plotSmResults('LNHP',yLT,xAll,
zlu=[[0,3],[0,3],[1,2.25],[-0.3,0.5],[-0.09,-0.05],[0,3],[0,3],[0,3]])
# # Gamma distribution markov model partial-pooling
# +
import pystan
invgammafun='''functions{
vector invdigamma(vector x){
vector[num_elements(x)] y; vector[num_elements(x)] L;
for (i in 1:num_elements(x)){
if (x[i]==digamma(1)){
y[i]=1;
}else{ if (x[i]>=-2.22){
y[i]=(exp(x[i])+0.5);
}else{
y[i]=1/(x[i]-digamma(1));
}}}
L=digamma(y)-x;
while (min(L)>10^-12){
y=y-L ./trigamma(y);
L=digamma(y)-x;
}
return y;}
real invdigammaR(real x){
real y; real L; int K;
if (x==digamma(1)){
y=1;
}else{ if (x>=-2.22){
y=(exp(x)+0.5);
}else{
y=1/(x-digamma(1));
}}
L=digamma(y)-x;K=0;
while (L>10^-12 && K<10000){
y=y-L ./trigamma(y);
L=digamma(y)-x;
K=K+1;
}
return y;
}} '''
model = """
data {
int<lower=0> N; //nr subjects
real<lower=0> y[N,10];
int<lower=1,upper=10> xSC[N];
real xA[N];
int<lower=0,upper=4> xD[N];
}parameters {
real<lower=1,upper=20> sigma[N];
real z0[N];
real zd[N];
real zh[N];
real<lower=0,upper=20> zhs;
real<lower=0,upper=20> zds;
real<lower=0,upper=20> z0s;
real<lower=-100,upper=100> zhm;
real<lower=-100,upper=100> zdmd[3];
real<lower=-100,upper=100> zdt[4];
real<lower=-100,upper=100> z0m[3];
real<lower=-100,upper=100> z0a;
} transformed parameters {
real<lower=-100,upper=100> zdm[3];
zdm[1]=0;
zdm[2]=zdmd[2]-zdmd[1];
zdm[3]=zdmd[3]-zdmd[1];
}model {
for (n in 1:N){
sigma[n]~cauchy(0,2);
zh[n]~normal(zhm,zhs);
zd[n]~normal(zdmd[xD[n]] +zdt[xSC[n]/2],zds);
z0[n]~normal(z0m[xD[n]]+z0a*xA[n],z0s);
y[n,1]~gamma(invdigamma(z0[n]-sigma[n]),sigma[n]);
for (t in 2:10)
if (y[n,t]>0)
y[n,t]~gamma(invdigamma(log(y[n,t-1])+zh[n]+(xSC[n]==t)*zd[n]-sigma[n]),sigma[n]);
}}generated quantities {
real r[N,10];
for (n in 1:N){
r[n,1]=log(gamma_rng(invdigamma(z0[n]),sigma[n]))-log(y[n,1]);
for (t in 2:10)
if (y[n,t]>0)
r[n,t]=gamma_rng(log(y[n,t-1])+zh[n]+(xSC[n]==t)*zd[n],sigma[n])-log(y[n,t]);
else
r[n,t]=0;
}}
"""
#smGMP = pystan.StanModel(model_code=invgammafun+model)
# -
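# The Stan helper `invdigamma` above is just Newton's method applied to the digamma function.
# The same iteration can be checked quickly against scipy (a small sketch, not part of the
# original analysis; `invdigamma_check` is a name introduced here for illustration):
# +
from scipy.special import digamma, polygamma

def invdigamma_check(x, tol=1e-12, max_iter=10000):
    # same initial guess as the Stan code
    y = np.exp(x) + 0.5 if x >= -2.22 else 1.0 / (x - digamma(1.0))
    for _ in range(max_iter):
        err = digamma(y) - x
        if abs(err) < tol:
            break
        y -= err / polygamma(1, y)   # trigamma(y) = polygamma(1, y)
    return y

print(invdigamma_check(digamma(3.7)))   # should print approximately 3.7
# -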
model = """
data {
int<lower=0> N; //nr subjects
real<lower=0> K;
real<lower=0> y[N,10];
int<lower=1,upper=10> xSC[N];
}parameters{
real z0[N];
real zh[N];
real zd[N];
real z0m;
real zhm;
real zdm;
real sigma;
}model{
for (n in 1:N){
//z0[n]~normal(z0m,1);
zd[n]~normal(zdm,1);
zh[n]~normal(zhm,1);
//y[n,1]~gamma(invdigammaR(z0[n]-sigma),exp(sigma));
for (t in 2:10){
if (y[n,t]>0)
y[n,t]~gamma(invdigammaR(log(y[n,t-1])+zh[n]+(xSC[n]==t)*zd[n]-sigma),exp(sigma));
}}}
"""
smGamma = pystan.StanModel(model_code=invgammafun+model)
sel=yLT[:,7]>0
temp={'y':yLT[sel,:],'N':sel.sum(),'K':100,'xSC':np.int32(xAll[sel,0])+1}
fit=smGamma.sampling(data=temp,chains=6,n_jobs=6,seed=2,thin=10,iter=500,warmup=300,init=0)
print(fit)
sel=yLT[:,7]>0
temp={'y':yLT[sel,:],'N':sel.sum(),'xSC':np.int32(xAll[sel,0])+1,
'xA':(xAll[sel,2]-7*30)/30,'xD':np.int32(xAll[sel,1])+1}
with open(DPATH+f'GMP.sm', 'rb') as f: smGMP=pickle.load(f)
fit=smGMP.sampling(data=temp,chains=6,n_jobs=6,
seed=2,thin=1,iter=150,warmup=100)
with open(DPATH+f'GMP.stanfit','wb') as f: pickle.dump(fit,f,protocol=-1)
print(fit)
# # Model comparison
# +
class MyDst():
    def pdf(self, x):
        pass
N=7
for i in range(8):
plt.figure(figsize=(20,3))
plt.subplot(1,N,1)
temp=gm[:,0,:,i].flatten()
temp=temp[~np.isnan(temp)]
plt.hist(temp,np.linspace(0,30,16),normed=True);
x=np.linspace(0,30,101)
plt.ylim([0,0.18])
plt.subplot(1,N,2)
plt.hist(np.log(temp),np.linspace(-3,5,16),normed=True);
logx=np.linspace(-3,5,101)
plt.ylim([0,0.5])
k=0
for ag in [stats.lognorm,stats.gamma,stats.gengamma,stats.gamma]:
if k==3: pars=ag.fit(temp,floc=0,fscale=1)
else: pars=ag.fit(temp,floc=0)
dst=ag(*pars)
plt.subplot(1,N,1)
plt.plot(x,dst.pdf(x))
plt.subplot(1,N,2)
plt.plot(logx,dst.pdf(np.exp(logx))*np.exp(logx))
k+=1
plt.subplot(1,N,1)
if not i: plt.legend(['LN','G','GG'])
''' '''
plt.subplot(1,N,3)
pars=stats.norm.fit(np.log(temp),floc=0)
dst=stats.norm(*pars)
res=stats.probplot(np.log(temp),dist=dst,plot=plt);
plt.title('R^2 = %.3f'%res[-1][-1]**2);plt.ylabel('')
plt.subplot(1,N,4)
pars=stats.loggamma.fit(np.log(temp),floc=0)
dst=stats.loggamma(*pars)
res=stats.probplot(np.log(temp),dist=dst,plot=plt);
plt.title('R^2 = %.3f'%res[-1][-1]**2);plt.ylabel('')
plt.subplot(1,N,5)
pars=stats.gengamma.fit(temp,floc=0)
q=pars[1];b=pars[3]
dsttemp=stats.gengamma(*pars)
dst=MyDst()
dst.ppf=lambda x: np.log(dsttemp.ppf(x))
res=stats.probplot(np.log(temp),dist=dst,plot=plt);
plt.title('R^2 = %.3f'%res[-1][-1]**2);plt.ylabel('')
plt.subplot(1,N,6)
pars=stats.loggamma.fit(np.log(temp),floc=0,fscale=1)
dst=stats.loggamma(*pars)
res=stats.probplot(np.log(temp),dist=dst,plot=plt);
plt.title('R^2 = %.3f'%res[-1][-1]**2);plt.ylabel('')
#dst=stats.gengamma(pars[0],1,0,b**q)
#res=stats.probplot(np.power(temp,q),dist=dst,plot=plt);
#plt.title('R^2 = %.3f'%res[-1][-1]**2)
#plt.subplot(1,N,6)
#dst=stats.loggamma(pars[0],0,b**q)
#res=stats.probplot(np.log(np.power(temp,q)),dist=dst,plot=plt);
#plt.title('R^2 = %.3f'%res[-1][-1]**2)
# -
# ## Gamma Regression: Single link
import pystan
invgammafun='''functions{
vector invdigamma(vector x){
vector[num_elements(x)] y; vector[num_elements(x)] L;
for (i in 1:num_elements(x)){
if (x[i]==digamma(1)){
y[i]=1;
}else{ if (x[i]>=-2.22){
y[i]=(exp(x[i])+0.5);
}else{
y[i]=1/(x[i]-digamma(1));
}}}
L=digamma(y)-x;
while (min(L)>10^-12){
y=y-L ./trigamma(y);
L=digamma(y)-x;
}
return y;}
real invdigammaR(real x){
real y; real L;
if (x==digamma(1)){
y=1;
}else{ if (x>=-2.22){
y=(exp(x)+0.5);
}else{
y=1/(x-digamma(1));
}}
L=digamma(y)-x;
while (L>10^-12){
y=y-L ./trigamma(y);
L=digamma(y)-x;
}
return y;
}} '''
model = """
data {
int<lower=0> N; //nr subjects
real<lower=-1> yLT[N,10];
real<lower=0,upper=1> xIsTest[10];
int<lower=1,upper=4> xCond[N];
}parameters {
real<lower=-100,upper=100> bIsX, 3, [512, 512, 1024], stage=6, block='b')
X = Activation('relu')(X)
# AVGPOOL
X = AveragePooling2D((2, 2), name='avg_pool')(X)
# output layer
X = Flatten()(X)
#X = Dropout(0.25)(X)
X = Dense(classes, activation='softmax', name='fc' + str(classes), kernel_initializer = glorot_uniform(seed=0))(X)
# Create model
model = Model(inputs = X_input, outputs = X, name='DeepRes-ception')
return model
# + colab={} colab_type="code" id="5kH1Zha9KZ7E"
import os
os.environ['CUDA_VISIBLE_DEVICES'] = '0'
import cv2
import numpy as np
from keras import layers
from keras.layers import Input, Add, Dense, Activation, ZeroPadding2D, BatchNormalization, Flatten, Conv2D, AveragePooling2D, MaxPooling2D
from keras.models import Model, load_model
from keras.initializers import glorot_uniform
from keras.utils import plot_model
from IPython.display import SVG
from keras.utils.vis_utils import model_to_dot
import keras.backend as K
import tensorflow as tf
from keras.layers.merge import concatenate
# + colab={} colab_type="code" id="KkU4s8LAKcUf"
img_shape = (150, 150, 3)
model = DeepResNetInception(input_shape=img_shape, classes=6)
# + colab={"base_uri": "https://localhost:8080/", "height": 1000} colab_type="code" id="jH1FXWIJKfot" outputId="d4e17bb2-b7e7-4184-c2e4-ad676ebd6af9"
model.summary()
# + colab={"base_uri": "https://localhost:8080/", "height": 104} colab_type="code" id="jPrJoZsWKip1" outputId="ed37b30e-e57e-4f89-8fac-763972ba6a1d"
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
# + colab={"base_uri": "https://localhost:8080/", "height": 1000} colab_type="code" id="cikmUNwAKk7t" outputId="7e7c63ca-2946-4689-ab4e-2c5e3323d306"
# fits the model on batches with real-time data augmentation:
trained_model = model.fit_generator(
train_generator,
validation_data=test_generator,
epochs=50
)
# + colab={"base_uri": "https://localhost:8080/", "height": 573} colab_type="code" id="Gp2b81dzKozV" outputId="4968137e-d4e6-4fdd-bc2c-2c8c9b8bedca"
import matplotlib.pyplot as plt
# Plot training & validation accuracy values
plt.plot(trained_model.history['acc'])
plt.plot(trained_model.history['val_acc'])
plt.title('Model accuracy')
plt.ylabel('Accuracy')
plt.xlabel('Epoch')
plt.legend(['Train', 'Test'], loc='upper left')
plt.show()
# Plot training & validation loss values
plt.plot(trained_model.history['loss'])
plt.plot(trained_model.history['val_loss'])
plt.title('Model loss')
plt.ylabel('Loss')
plt.xlabel('Epoch')
plt.legend(['Train', 'Test'], loc='upper left')
plt.show()
# + colab={"base_uri": "https://localhost:8080/", "height": 34} colab_type="code" id="kuCf0Orz4Eaw" outputId="aaa0eb5d-8466-482d-ef4a-8e43c8bac53c"
model.evaluate_generator(test_generator)
# + colab={} colab_type="code" id="ejBKLW5lP5W5"
model.save_weights('Deep-Res-ception_with_no_BN_weights.h5')
# + colab={"base_uri": "https://localhost:8080/", "height": 1000} colab_type="code" id="iy-wp5MLUMN0" outputId="bd9a94c6-c435-4aec-f8f4-c13e255629a1"
plot_model(model, show_shapes=True, to_file='Deep-Res-ception_with_no_BN_weights.png')
# + colab={"base_uri": "https://localhost:8080/", "height": 1000} colab_type="code" id="ad1_Y6L9J4h8" outputId="361761fc-9fed-4419-930e-f0a14531ffb3"
trained_model2 = model.fit_generator(
train_generator,
validation_data=test_generator,
epochs=50
)
# + colab={} colab_type="code" id="VNSgHFsNXS4k"
model.save_weights('Deep-Res-ception_with_no_BN_weights_100epochs.h5')
# + colab={"base_uri": "https://localhost:8080/", "height": 573} colab_type="code" id="cgyVbmipKAGY" outputId="cbffae19-52b8-4755-c4cc-d88f60588d33"
# Plot training & validation accuracy values
plt.plot(trained_model2.history['acc'])
plt.plot(trained_model2.history['val_acc'])
plt.title('Model accuracy')
plt.ylabel('Accuracy')
plt.xlabel('Epoch')
plt.legend(['Train', 'Test'], loc='upper left')
plt.show()
# Plot training & validation loss values
plt.plot(trained_model2.history['loss'])
plt.plot(trained_model2.history['val_loss'])
plt.title('Model loss')
plt.ylabel('Loss')
plt.xlabel('Epoch')
plt.legend(['Train', 'Test'], loc='upper left')
plt.show()
# + colab={} colab_type="code" id="RCg0uzMfKE0x"
def appendHist(h1, h2):
if h1 == {}:
return h2
else:
dest = {}
for key, value in h1.items():
dest[key] = value + h2[key]
return dest
# + colab={} colab_type="code" id="Iy1uOORlPSEh"
all_hist = appendHist(trained_model.history, trained_model2.history)
# + colab={"base_uri": "https://localhost:8080/", "height": 573} colab_type="code" id="_5UO9oM_PcRA" outputId="e5e487c5-b59d-45c4-b591-4976082e841f"
# Plot training & validation accuracy values
plt.plot(all_hist['acc'])
plt.plot(all_hist['val_acc'])
plt.title('Model accuracy')
plt.ylabel('Accuracy')
plt.xlabel('Epoch')
plt.legend(['Train', 'Test'], loc='upper left')
plt.show()
# Plot training & validation loss values
plt.plot(all_hist['loss'])
plt.plot(all_hist['val_loss'])
plt.title('Model loss')
plt.ylabel('Loss')
plt.xlabel('Epoch')
plt.legend(['Train', 'Test'], loc='upper left')
plt.show()
# + colab={"base_uri": "https://localhost:8080/", "height": 857} colab_type="code" id="h1t6tlLfdX5i" outputId="10bcafb7-7de1-434b-acbb-f2d13d5382fd"
trained_model3 = model.fit_generator(
train_generator,
validation_data=test_generator,
epochs=25
)
# + colab={} colab_type="code" id="3l751Ehfjugd"
model.save_weights('Deep-Res-ception_with_no_BN_weights_125epochs.h5')
# + colab={"base_uri": "https://localhost:8080/", "height": 573} colab_type="code" id="LrC6mK9ejzEu" outputId="62f473e1-7a26-4435-a5be-63c5c097f564"
# Plot training & validation accuracy values
plt.plot(trained_model3.history['acc'])
plt.plot(trained_model3.history['val_acc'])
plt.title('Model accuracy')
plt.ylabel('Accuracy')
plt.xlabel('Epoch')
plt.legend(['Train', 'Test'], loc='upper left')
plt.show()
# Plot training & validation loss values
plt.plot(trained_model3.history['loss'])
plt.plot(trained_model3.history['val_loss'])
plt.title('Model loss')
plt.ylabel('Loss')
plt.xlabel('Epoch')
plt.legend(['Train', 'Test'], loc='upper left')
plt.show()
# + colab={} colab_type="code" id="6BxIFYU1j2T2"
all_hist = appendHist(all_hist, trained_model3.history)
# + colab={"base_uri": "https://localhost:8080/", "height": 573} colab_type="code" id="dxznkMu7j_Sn" outputId="037bf5c9-634b-45fd-c7c3-ca4092afe847"
# Plot training & validation accuracy values
plt.plot(all_hist['acc'])
plt.plot(all_hist['val_acc'])
plt.title('Model accuracy')
plt.ylabel('Accuracy')
plt.xlabel('Epoch')
plt.legend(['Train', 'Test'], loc='upper left')
plt.show()
# Plot training & validation loss values
plt.plot(all_hist['loss'])
plt.plot(all_hist['val_loss'])
plt.title('Model loss')
plt.ylabel('Loss')
plt.xlabel('Epoch')
plt.legend(['Train', 'Test'], loc='upper left')
plt.show()
# + colab={"base_uri": "https://localhost:8080/", "height": 857} colab_type="code" id="ZSJ82d3tmS7U" outputId="19e1970d-da45-4aa7-cb93-ba0d27ddbbe7"
trained_model4 = model.fit_generator(
train_generator,
validation_data=test_generator,
epochs=25
)
# + colab={} colab_type="code" id="U2kZG2dkwUfy"
model.save_weights('Deep-Res-ception_with_no_BN_weights_150epochs.h5')
# + colab={"base_uri": "https://localhost:8080/", "height": 573} colab_type="code" id="NfVKiAVWwaC1" outputId="f8d3cd15-5b84-4e33-c1e5-430ae1ac8de4"
# Plot training & validation accuracy values
plt.plot(trained_model4.history['acc'])
plt.plot(trained_model4.history['val_acc'])
plt.title('Model accuracy')
plt.ylabel('Accuracy')
plt.xlabel('Epoch')
plt.legend(['Train', 'Test'], loc='upper left')
plt.show()
# Plot training & validation loss values
plt.plot(trained_model4.history['loss'])
plt.plot(trained_model4.history['val_loss'])
plt.title('Model loss')
plt.ylabel('Loss')
plt.xlabel('Epoch')
plt.legend(['Train', 'Test'], loc='upper left')
plt.show()
# + colab={} colab_type="code" id="R_Ci4eduwe6d"
all_hist = appendHist(all_hist, trained_model4.history)
# + colab={"base_uri": "https://localhost:8080/", "height": 573} colab_type="code" id="tpRRunA2wh8G" outputId="344a8e64-0fa6-4ff0-d8b2-583a60a17226"
# Plot training & validation accuracy values
plt.plot(all_hist['acc'])
plt.plot(all_hist['val_acc'])
plt.title('Model accuracy')
plt.ylabel('Accuracy')
plt.xlabel('Epoch')
plt.legend(['Train', 'Test'], loc='upper left')
plt.show()
# Plot training & validation loss values
plt.plot(all_hist['loss'])
plt.plot(all_hist['val_loss'])
plt.title('Model loss')
plt.ylabel('Loss')
plt.xlabel('Epoch')
plt.legend(['Train', 'Test'], loc='upper left')
plt.show()
# + colab={"base_uri": "https://localhost:8080/", "height": 34} colab_type="code" id="39eFkfJwIDBj" outputId="c34c2733-94e7-4a09-f29b-b4af1adcaf4f"
model.evaluate_generator(test_generator)
| 20,480 |
/notebooks/06_raulpl_s2_s3_ejemplo.ipynb
|
40db5922bfbe4071cec1215e5a5df768246da7a6
|
[] |
no_license
|
mikesaurio/dataton_anticorrupcion
|
https://github.com/mikesaurio/dataton_anticorrupcion
| 0 | 0 | null | null | null | null |
Jupyter Notebook
| false | false |
.py
| 33,404 |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.15.2
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ### Binomial distribution
# 1) Definition: the number of successes when a Bernoulli trial with success probability $\theta$ is repeated $N$ times
# 2) Formula
# $$ \text{Bin}(x;N,\theta) = \binom N x \theta^x(1-\theta)^{N-x} $$
# 3) Meaning
# [1] Meaning of x
#
# $$ x = \sum_{i=1}^N y_i $$
# [2] Each trial is written as
#
# $$ Y \sim \text{Bern}(y;\theta) $$
# 4) Moments
# [1] Expected value
# $$ \text{E}[X] = N\theta $$
# (Proof)
#
# $$ \text{E}[X] = \text{E} \left[ \sum_{i=1}^N \text{Bern}_i \right] = \sum_{i=1}^N \text{E}[ \text{Bern}_i ] = N\theta $$
# [2] Variance
# $$ \text{Var}[X] = N\theta(1-\theta)$$
# (Proof, using independence of the trials)
#
# $$ \text{Var}[X] = \text{Var} \left[ \sum_{i=1}^N \text{Bern}_i \right] = \sum_{i=1}^N \text{Var}[ \text{Bern}_i ] = N\theta(1-\theta)$$
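# These two moments can be checked numerically with scipy (a quick sketch; N = 10 and
# theta = 0.6 are simply the example values used in the next cell):
# +
import scipy.stats
rv_check = scipy.stats.binom(10, 0.6)
print(rv_check.mean(), 10 * 0.6)               # E[X] = N*theta -> 6.0
print(rv_check.var(), 10 * 0.6 * (1 - 0.6))    # Var[X] = N*theta*(1-theta) -> 2.4
# -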
# Computing the binomial distribution with scipy
import numpy as np
import pandas as pd
import scipy as sp
import scipy.stats
import matplotlib as mpl
import matplotlib.pyplot as plt
import seaborn as sns
N = 10 # number of trials
theta = 0.6 # success probability
rv = sp.stats.binom(N, theta) # binomial distribution with the parameters above
# +
xx = np.arange(N + 1)
plt.bar(xx, rv.pmf(xx), align = "center")
plt.ylabel("P(x)")
plt.title("Binomial PMF")
plt.show()
# +
# Simulation
np.random.seed(0) # fix the random seed
x = rv.rvs(100) # draw 100 random samples
sns.countplot(x) # count plot showing how often each value occurs
plt.title("Binomial simulation")
plt.xlabel("sample")
plt.show()
# +
# Plot the theoretical PMF and the simulation together
y = np.bincount(x, minlength = N + 1)/float(len(x))
df = pd.DataFrame({"theory": rv.pmf(xx), "simaulation":y}).stack()
df = df.reset_index()
df.columns = ["sample", "type", "ratio"]
df.pivot("sample", "type", "ratio")
sns.barplot(x="sample", y="ratio", hue="type", data=df)
plt.show()
# -
# ### Exercises
# #### ``Exercise 1``
#
# For each of the binomial distributions with the parameters below, generate samples, compute the expected value and variance, and draw a count plot comparing the samples against the probability mass function, as in the example above.
#
# Repeat the calculation for sample sizes of 10 and of 1000.
#
# 1. $\theta = 0.5$, $N=5$
# 2. $\theta = 0.9$, $N=20$
# +
N = 5
theta = 0.5
rv = sp.stats.binom(N, theta)
xx = np.arange(N + 1)
np.random.seed(0)
x = rv.rvs(10)
y = np.bincount(x, minlength=N+1)/float(len(x))
df = pd.DataFrame({"theory": rv.pmf(xx), "simulation": y}).stack()
df = df.reset_index()
df.columns = ["sample", "type", "ratio"]
df.pivot("sample", "type", "ratio")
sns.barplot(x="sample", y="ratio", hue="type", data=df)
plt.show()
# +
N = 5
theta = 0.5
rv = sp.stats.binom(N, theta)
xx = np.arange(N + 1)
np.random.seed(0)
x = rv.rvs(1000)
y = np.bincount(x, minlength=N+1)/float(len(x))
df = pd.DataFrame({"theory": rv.pmf(xx), "simulation": y}).stack()
df = df.reset_index()
df.columns = ["sample", "type", "ratio"]
df.pivot("sample", "type", "ratio")
sns.barplot(x="sample", y="ratio", hue="type", data=df)
plt.show()
# +
N = 20
theta = 0.9
rv = sp.stats.binom(N, theta)
xx = np.arange(N + 1)
np.random.seed(0)
x = rv.rvs(10)
y = np.bincount(x, minlength=N+1)/float(len(x))
df = pd.DataFrame({"theory": rv.pmf(xx), "simulation": y}).stack()
df = df.reset_index()
df.columns = ["sample", "type", "ratio"]
df.pivot("sample", "type", "ratio")
sns.barplot(x="sample", y="ratio", hue="type", data=df)
plt.show()
# +
N = 20
theta = 0.9
rv = sp.stats.binom(N, theta)
xx = np.arange(N + 1)
np.random.seed(0)
x = rv.rvs(1000)
y = np.bincount(x, minlength=N+1)/float(len(x))
df = pd.DataFrame({"theory": rv.pmf(xx), "simulation": y}).stack()
df = df.reset_index()
df.columns = ["sample", "type", "ratio"]
df.pivot("sample", "type", "ratio")
sns.barplot(x="sample", y="ratio", hue="type", data=df)
plt.show()
| 3,544 |
/NiN/.ipynb_checkpoints/keras_network_in_network-checkpoint.ipynb
|
f9acbb7ce94e75d45c2053ecd755df057465df71
|
[
"MIT"
] |
permissive
|
laplacian-k/10x-ml
|
https://github.com/laplacian-k/10x-ml
| 0 | 0 | null | null | null | null |
Jupyter Notebook
| false | false |
.py
| 98,053 |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.15.2
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# <div style="text-align:center"><h1>PBH in Critical Higgs Inflation</h1></div>
# <h2 style="text-align:center">-Total Process</h2>
# <div style="text-align:right"><b>Tae Geun Kim</b></div>
# *****
# <h2>- Table of Contents</h2>
#
# > <a href="#main"><font size=4>1. Main Code</font></a>
# > <a href="#running"><font size=4>2. Running Coupling Constant</font></a>
# > <a href="#potential"><font size=4>3. Potential</font></a>
#
# ---------------------------
# <h2 class="main"><a name="main">1. Main Code (Cython Code)</a></h2>
# +
from IPython.display import HTML
HTML('''<script>
code_show=true;
function code_toggle() {
if (code_show){
$('div.input').hide();
} else {
$('div.input').show();
}
code_show = !code_show
}
$( document ).ready(code_toggle);
</script>
<form action="javascript:code_toggle()"><input type="submit" value="Click here to toggle on/off the raw code."></form>''')
# -
# %load_ext Cython
import numpy as np
import pylab as plt
import seaborn
# + language="cython"
# import numpy as np
# cimport numpy as np
# from libc.math cimport sqrt, exp, log, log10, pi
# #-------------------------------------------------------
# #--------------Main Class-------------------------------
# #-------------------------------------------------------
# cdef class RGE:
# cdef public double t, mt, xi, Mp, MpR, MW, MZ, MH, alphasMZ, lH, yt, g1, g2, g3, phi, LG
# cdef public double h, sh, Beta_lH1, Beta_g11, Beta_g21, Beta_g31, Beta_yt1, gamma1
# cdef public double Beta_lH2, Beta_g12, Beta_g22, Beta_g32, Beta_yt2, gamma2
# cdef public object Betas1, Betas2, Beta, Beta_real
#
# def __init__(self, mt_init, xi_init, tstart):
# self.t = tstart
# self.mt = mt_init
# self.xi = xi_init
# self.Mp = 1.221*10**(19)
# self.MpR = 2.4*10**(18) # Reduced Planck Mass
# self.MW = 80.385
# self.MZ = 91.1876
# self.MH = 125.09
# self.alphasMZ = 0.1182
# self.lH = 0.12604 + 0.00206 * (self.MH - 125.15) - 0.00004 * (self.mt - 173.34)
# self.yt = (
# 0.93690 + 0.00556 * (self.mt - 173.34) - 0.00003*(self.MH - 125.15)
# - 0.00042*(self.alphasMZ - 0.1184) / 0.0007
# )
# self.g3 = 1.1666 + 0.00314 * (self.alphasMZ - 0.1184)/0.007 - 0.00046 * (self.mt - 173.34)
# self.g2 = 0.64779 + 0.00004 * (self.mt - 173.34) + 0.00011 * (self.MW - 80.384) / 0.014
# self.g1 = 0.35830 + 0.00011 * (self.mt - 173.34) - 0.00020 * (self.MW - 80.384) / 0.014
# self.LG = 0
#
# def BETA(self): # To Initialize
# # 1-Loop Beta Functions
# self.h = sqrt(2)/self.yt * self.mt * exp(self.t)
# self.phi = self.h
# self.sh = (
# (1 + self.xi * self.h ** 2 / self.MpR**2)/(1+(1+6*self.xi)*self.xi*self.h**2/self.MpR**2)
# )
# self.Beta_lH1 = (
# 6*(1+3*self.sh**2)*self.lH**2 + 12*self.lH*self.yt**2 - 6*self.yt**4 - 3*self.lH*(3*self.g2**2 + self.g1**2)
# + 3/8*(2*self.g2**4 + (self.g1**2 + self.g2**2)**2)
# )
# self.Beta_g11 = (81 + self.sh)/12 * self.g1**3
# self.Beta_g21 = - (39 - self.sh) / 12 * self.g2 **3
# self.Beta_g31 = -7*self.g3**3
# self.Beta_yt1 = (
# self.yt * ((23/6 + 2/3*self.sh)*self.yt**2 - (8*self.g3**2 + 9/4*self.g2**2 + 17/12 * self.g1**2))
# )
# self.gamma1 = -(9/4*self.g2**2 + 3/4*self.g1**2 - 3*self.yt**2)
# self.Betas1 = np.array([self.Beta_lH1, self.Beta_g11, self.Beta_g21,
# self.Beta_g31, self.Beta_yt1, self.gamma1])
# # 2-Loop Beta Functions
# self.Beta_lH2 = (
# 1/48*((912+3*self.sh)*self.g2**6 - (290-self.sh)*self.g1**2*self.g2**4
# - (560-self.sh) * self.g1**4*self.g2**2 - (380-self.sh)*self.g1**6)
# + (38-8*self.sh)*self.yt**6 - self.yt**4 * (8/3*self.g1**2 + 32*self.g3**2 + (12-117*self.sh+108*self.sh**2)*self.lH)
# + self.lH * (-1/8*(181 + 54*self.sh - self.sh**2)*self.g2**4
# + 1/4*(3-18*self.sh + 54*self.sh**2)*self.g1**2*self.g2**2 + 1/24*(90+377*self.sh+162*self.sh**2)*self.g1**4
# + (27+54*self.sh+27*self.sh**2)*self.g2**2*self.lH + (9+18*self.sh+9*self.sh**2)*self.g1**2*self.lH
# - (48+288*self.sh - 324*self.sh**2 + 624*self.sh**3 - 324*self.sh**4)*self.lH**2)
# + self.yt**2*(-9/4*self.g2**4 + 21/2*self.g1**2*self.g2**2 - 19/4*self.g1**4
# + self.lH*(45/2*self.g2**2 + 85/6*self.g1**2 + 80*self.g3**2 - (36+108*self.sh**2)*self.lH))
# )
# self.Beta_g12 = (
# 199/18 * self.g1**5 + 9/2 * self.g1**3*self.g2**2 + 44/3*self.g1**3*self.g3**2
# - 17/6 * self.sh * self.g1**3 * self.yt**2
# )
# self.Beta_g22 = (
# 3/2 * self.g1**2 * self.g2**3 + 35/6*self.g2**5 + 12*self.g2**3*self.g3**2
# - 3/2 * self.sh * self.g2**3 * self.yt**2
# )
# self.Beta_g32 = (
# 11/6 * self.g1**2 * self.g3**3 + 9/2*self.g2**2*self.g3**3 - 26*self.g3**5
# - 2 * self.sh * self.g3**3 * self.yt**2
# )
# self.Beta_yt2 = (
# self.yt * (-23/4 * self.g2**4 - 3/4 * self.g1**2 * self.g2**2 + 1187/216*self.g1**4
# + 9*self.g2**2*self.g3**2 + 19/9*self.g1**2*self.g3**2 - 108*self.g3**4
# + (225/16*self.g2**2 + 131/16*self.g1**2 + 36*self.g3**2)*self.sh*self.yt**2
# + 6*(-2*self.sh**2*self.yt**4 - 2*self.sh**3*self.yt**2*self.lH + self.sh**2*self.lH**2)
# )
# )
# self.gamma2 = (
# -(271/32 * self.g2**4 - 9/16 * self.g1**2 * self.g2**2 - 431/96 * self.sh * self.g1**4
# - 5/2 * (9/4 * self.g2**2 + 17/12 * self.g1**2 + 8*self.g3**2) * self.yt**2 + 27/4 * self.sh*self.yt**4
# - 6 * self.sh**3 * self.lH**2)
# )
# self.Betas2 = np.array([self.Beta_lH2, self.Beta_g12, self.Beta_g22,
# self.Beta_g32, self.Beta_yt2, self.gamma2])
# # Total Beta Functions
# self.Beta = np.empty(len(self.Betas1))
# for i, (beta1, beta2) in enumerate(zip(self.Betas1, self.Betas2)):
# self.Beta[i] = 1/(16*pi**2)*beta1 + 1/(16*pi**2)**2*beta2
# self.Beta_real = np.array([beta/(1+self.Beta[5]) for beta in self.Beta])
#
# def Running(self, double h):
# # Running Coupling Constant
# self.lH += h * self.Beta_real[0]
# self.g1 += h * self.Beta_real[1]
# self.g2 += h * self.Beta_real[2]
# self.g3 += h * self.Beta_real[3]
# self.yt += h * self.Beta_real[4]
# self.LG += h * self.Beta_real[5]
# self.t += h
# #-------------------------------------------------------
# #--------------Running----------------------------------
# #-------------------------------------------------------
# cpdef tuple RCC(double Mt, double xi, int Nend, double h): # M_t, xi, End point, precision
# # Real Running
# cdef int i, k
# A = RGE(Mt, xi, 0)
# k = int(1/h * Nend)
# cdef np.ndarray[dtype=double, ndim=1] lH, yt, t, g1, g2, g3, phi, G
# lH = np.empty(k)
# yt = np.empty(k)
# t = np.empty(k)
# g1 = np.empty(k)
# g2 = np.empty(k)
# g3 = np.empty(k)
# phi = np.empty(k)
# G = np.empty(k)
# for i in range(k):
# A.BETA() # initialize by calling BETA
# lH[i] = A.lH
# g1[i] = A.g1
# g2[i] = A.g2
# g3[i] = A.g3
# yt[i] = A.yt
# t[i] = A.t
# phi[i] = A.phi
# G[i] = exp(-A.LG)
# A.Running(h)
# return (lH, g1, g2, g3, yt, t, phi, G)
#
# # More Faster
# cpdef tuple RCC_intxi(double Mt, int xi, int Nend, double h): # M_t, xi, End point, precision
# # Real Running
# cdef object A = RGE(Mt, xi, 0)
# cdef int i, k = int(1/h * Nend)
# cdef np.ndarray[dtype=double, ndim=1] lH, yt, t, g1, g2, g3, phi, G
# lH = np.empty(k)
# yt = np.empty(k)
# t = np.empty(k)
# g1 = np.empty(k)
# g2 = np.empty(k)
# g3 = np.empty(k)
# phi = np.empty(k)
# G = np.empty(k)
# for i in range(k):
# A.BETA() # initialize by calling BETA
# lH[i] = A.lH
# g1[i] = A.g1
# g2[i] = A.g2
# g3[i] = A.g3
# yt[i] = A.yt
# t[i] = A.t
# phi[i] = A.phi
# G[i] = exp(-A.LG)
# A.Running(h)
# return (lH, g1, g2, g3, yt, t, phi, G)
#
# #Parallelize
# cpdef tuple RCC_parallel(tuple Tup): # M_t, xi, End point, precision
# cdef double Mt=Tup[0], h=Tup[3]
# cdef int xi=Tup[1], Nend=Tup[2]
# # Real Running
# cdef object A = RGE(Mt, xi, 0)
# cdef int i, k = int(1/h * Nend)
# cdef np.ndarray[dtype=double, ndim=1] lH, yt, t, g1, g2, g3, phi, G, BlH
# lH = np.empty(k)
# yt = np.empty(k)
# t = np.empty(k)
# g1 = np.empty(k)
# g2 = np.empty(k)
# g3 = np.empty(k)
# phi = np.empty(k)
# G = np.empty(k)
# BlH = np.empty(k)
# for i in range(k):
# A.BETA() # initialize by calling BETA
# lH[i] = A.lH
# g1[i] = A.g1
# g2[i] = A.g2
# g3[i] = A.g3
# yt[i] = A.yt
# t[i] = A.t
# phi[i] = A.phi
# G[i] = exp(-A.LG)
# BlH[i] = A.Beta[0]
# A.Running(h)
# return (lH, g1, g2, g3, yt, t, phi, G, BlH)
# #-------------------------------------------------------
# #--------------Reading----------------------------------
# #-------------------------------------------------------
# cpdef tuple Reader(double mt, int xi): # Now consider only int xi
# cdef object in_temp, line, obj, elem
# cdef list TP
# cdef np.ndarray[dtype=double, ndim=1] lH, yt, t, g1, g2, g3, phi, G
# in_temp = open(str(mt)+'_'+str(xi)+'_lamb.csv', 'r')
# TP = [line.split(',') for line in in_temp]
# for obj in TP:
# del(obj[len(obj)-1])
# lH = np.array([float(elem) for elem in TP[0]])
# g1 = np.array([float(elem) for elem in TP[1]])
# g2 = np.array([float(elem) for elem in TP[2]])
# g3 = np.array([float(elem) for elem in TP[3]])
# yt = np.array([float(elem) for elem in TP[4]])
# t = np.array([float(elem) for elem in TP[5]])
# phi = np.array([float(elem) for elem in TP[6]])
# G = np.array([float(elem) for elem in TP[7]])
# return (lH, g1, g2, g3, yt, t, phi, G)
#
# # Parallelize
# cpdef tuple Reader_parallel(tuple Tup): # Now consider only int xi
# cdef double mt=Tup[0]
# cdef int xi=Tup[1]
# cdef object in_temp, line, obj, elem
# cdef list TP
# cdef np.ndarray[dtype=double, ndim=1] lH, yt, t, g1, g2, g3, phi, G, BlH
# in_temp = open(str(mt)+'_'+str(xi)+'_lamb.csv', 'r')
# TP = [line.split(',') for line in in_temp]
# for obj in TP:
# del(obj[len(obj)-1])
# lH = np.array([float(elem) for elem in TP[0]])
# g1 = np.array([float(elem) for elem in TP[1]])
# g2 = np.array([float(elem) for elem in TP[2]])
# g3 = np.array([float(elem) for elem in TP[3]])
# yt = np.array([float(elem) for elem in TP[4]])
# t = np.array([float(elem) for elem in TP[5]])
# phi = np.array([float(elem) for elem in TP[6]])
# G = np.array([float(elem) for elem in TP[7]])
# BlH = np.array([float(elem) for elem in TP[8]])
# return (lH, g1, g2, g3, yt, t, phi, G, BlH)
# #-------------------------------------------------------
# #--------------Saving-----------------------------------
# #-------------------------------------------------------
# cpdef Saver(double mt, int xi, tuple List): # Save Data
# cdef object wrt_temp = open(str(mt)+'_'+str(xi)+'_lamb.csv', 'w')
# cdef int i, k
# cdef double elem
# cdef np.ndarray[dtype=double, ndim=1] obj
# for k, obj in enumerate(List):
# for i, elem in enumerate(obj):
# if i < len(obj):
# wrt_temp.write(str(elem)+',')
# else:
# wrt_temp.write(str(elem))
# wrt_temp.write('\n')
# wrt_temp.close()
#
# # Parallelize
# cpdef Saver_parallel(tuple Tup): # Save Data
# cdef double mt=Tup[0]
# cdef int xi=Tup[1]
# cdef tuple List = Tup[2]
# cdef object wrt_temp = open(str(mt)+'_'+str(xi)+'_lamb.csv', 'w')
# cdef int i, k
# cdef double elem
# cdef np.ndarray[dtype=double, ndim=1] obj
# for k, obj in enumerate(List):
# for i, elem in enumerate(obj):
# if i < len(obj):
# wrt_temp.write(str(elem)+',')
# else:
# wrt_temp.write(str(elem))
# wrt_temp.write('\n')
# wrt_temp.close()
# #-------------------------------------------------------
# #--------------Process----------------------------------
# #-------------------------------------------------------
# # Potential
# cpdef np.ndarray[dtype=double, ndim=1] Pot_eff(int xi, tuple Dset):
# cdef int i
# cdef double lH, G, phi, t, MpR=2.4*10**(18)
# cdef np.ndarray[dtype=double, ndim=1] V=np.empty(len(Dset[0]))
# for i, (lH, t, phi, G) in enumerate(list(zip(Dset[0], Dset[5], Dset[6], Dset[7]))):
# V[i] = lH * G**4 * phi ** 4 / (4*(1 + xi * G**2 * phi**2/MpR**2)**2)
# return V
#
# # Normalize
# cpdef np.ndarray[dtype=double, ndim=1] Normalize(np.ndarray[dtype=double, ndim=1] Target, int order):
# cdef np.ndarray[dtype=double, ndim=1] Output
# cdef double elem, MpR=2.4*10**(18)
# Output = np.array([elem/(MpR**order) for elem in Target])
# return Output
# -
# <h2><a name="running">2. Running Coupling Constants</a></h2>
# ### 1) Data Processing
# #### - Running
# +
#mt_keys0 = np.array([170.846 + i/100 for i in range(6)])
#xi_0 = 59
#for mt in mt_keys0:
# Temp = RCC(mt, xi_0, 44, 1e-04)
# Saver(mt, xi_0, Temp)
# -
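# The `*_parallel` variants defined in the Cython cell take a single tuple argument, so the
# same scan can be distributed over cores with `multiprocessing` (a sketch, kept commented out
# like the serial version above; the mass/xi grid shown is the same illustrative one):
# +
#from multiprocessing import Pool
#jobs = [(170.846 + i/100, 59, 44, 1e-04) for i in range(6)]
#with Pool() as pool:
#    results = pool.map(RCC_parallel, jobs)
#for (mt, xi, Nend, h), res in zip(jobs, results):
#    Saver_parallel((mt, xi, res))
# -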
# #### - Import Data
# Setup Keys
mt_keys = np.array([170.0 + i/10 for i in range(11)])
xi_keys = np.array([50, 500, 5000])
# For loop
for mt in mt_keys:
for xi in xi_keys:
Tup = Reader(mt,xi)
locals()['lH_'+str(mt)+'_'+str(xi)] = Tup[0]
locals()['g1_'+str(mt)+'_'+str(xi)] = Tup[1]
locals()['g2_'+str(mt)+'_'+str(xi)] = Tup[2]
locals()['g3_'+str(mt)+'_'+str(xi)] = Tup[3]
locals()['yt_'+str(mt)+'_'+str(xi)] = Tup[4]
locals()['t_'+str(mt)+'_'+str(xi)] = Tup[5]
locals()['phi_'+str(mt)+'_'+str(xi)] = Tup[6]
locals()['G_'+str(mt)+'_'+str(xi)] = Tup[7]
locals()['V_'+str(mt)+'_'+str(xi)] = Pot_eff(xi, Tup)
# #### - Use LaTeX
plt.rc('text', usetex=True)
plt.rc('font', family='serif')
# <h3>2) Running $\lambda$</h3>
# <h4>- Various $M_t$</h4>
plt.figure(figsize=(10,6), dpi=300)
for mt in mt_keys:
plt.plot(locals()['t_'+str(mt)+'_50'], locals()['lH_'+str(mt)+'_50'], label=r'$M_t='+str(mt)+'$')
plt.title(r'$\xi=50$', fontsize=16)
plt.xlabel(r'$t=\log(\mu/M_t)$', fontsize=14)
plt.ylabel(r'$\lambda$', fontsize=14)
plt.axis([20, 40, 0, 0.015])
plt.legend(fontsize=12)
plt.show()
# #### - Various $\xi$
plt.figure(figsize=(10,6), dpi=300)
for xi in xi_keys:
plt.plot(locals()['t_170.0_'+str(xi)], locals()['lH_170.0_'+str(xi)], label=r'$\xi='+str(xi)+'$')
plt.title(r'$M_t=170GeV$', fontsize=16)
plt.xlabel(r'$t=\log(\mu/M_t)$', fontsize=14)
plt.ylabel(r'$\lambda$', fontsize=14)
plt.axis([30, 40, 0.0055, 0.0064])
plt.legend(fontsize=14)
plt.show()
# * You can see more graphs in <b>'Various xi'</b> and <b>'Various mt'</b>
# #### - $\lambda$ vs $\phi$
plt.figure(figsize=(10,6), dpi=300)
for mt in mt_keys:
plt.plot(locals()['phi_'+str(mt)+'_50'], locals()['lH_'+str(mt)+'_50'], label=r'$M_t='+str(mt)+'$')
plt.title(r'$\xi=50$', fontsize=16)
plt.xlabel(r'$\phi$', fontsize=14)
plt.ylabel(r'$\lambda$', fontsize=14)
plt.axis([0, 10**18, 0, 0.012])
plt.legend(fontsize=12)
plt.show()
plt.figure(figsize=(10,6), dpi=300)
for mt in mt_keys:
plt.plot(locals()['phi_'+str(mt)+'_5000'], locals()['lH_'+str(mt)+'_5000'], label=r'$M_t='+str(mt)+'$')
plt.title(r'$\xi=5000$', fontsize=16)
plt.xlabel(r'$\phi$', fontsize=14)
plt.ylabel(r'$\lambda$', fontsize=14)
plt.axis([0, 10**18, 0, 0.012])
plt.legend(fontsize=12)
plt.show()
# ### 3) Running Gauge Couplings
plt.figure(figsize=(10,6), dpi=300)
plt.plot(locals()['t_170.8_50'], locals()['g1_170.8_50'], label=r'$g_1$')
plt.plot(locals()['t_170.8_50'], locals()['g2_170.8_50'], label=r'$g_2$')
plt.plot(locals()['t_170.8_50'], locals()['g3_170.8_50'], label=r'$g_3$')
plt.title(r'$M_t=170.8GeV,\,\xi=50$', fontsize=16)
plt.xlabel(r'$t=\log(\mu/M_t)$', fontsize=14)
plt.ylabel(r'Gauge Couplings', fontsize=14)
plt.legend(fontsize=14)
plt.show()
# ### 4) Running $y_t$
plt.figure(figsize=(10,6), dpi=300)
plt.plot(locals()['t_170.8_50'], locals()['yt_170.8_50'], label=r'$y_t$')
plt.title(r'$M_t=170.8GeV,\,\xi=50$', fontsize=16)
plt.xlabel(r'$t=\log(\mu/M_t)$', fontsize=14)
plt.ylabel(r'$y_t$', fontsize=14)
plt.legend(fontsize=14)
plt.show()
# <h2><a name="#potential">3. Potential</a></h2>
# <h3>1) Normalize Data (in units of $M_p^n$)</h3>
for mt in mt_keys:
for xi in xi_keys:
locals()['Rphi_'+str(mt)+'_'+str(xi)] = Normalize(locals()['phi_'+str(mt)+'_'+str(xi)],1)
locals()['RV_'+str(mt)+'_'+str(xi)] = Normalize(locals()['V_'+str(mt)+'_'+str(xi)],4)
# <h3>2) Graph</h3>
plt.figure(figsize=(10,6), dpi=300)
for mt in mt_keys:
plt.plot(locals()['Rphi_'+str(mt)+'_50'], locals()['RV_'+str(mt)+'_50'], label=r'$M_t='+str(mt)+'$')
plt.title(r'$\xi=50$', fontsize=16)
plt.xlabel(r'$\phi$ (in units of $M_p$)', fontsize=14)
plt.ylabel(r'$V_{eff}$ (in units of $M_p^4$)', fontsize=14)
plt.legend(fontsize=12)
plt.axis([0,20, -5*10**(-8), 7*10**(-7)])
plt.show()
| 17,989 |
/Worth_Try_Again/865. Smallest Subtree with all the Deepest Nodes.ipynb
|
c41c9347040413712cf6cc017312904ac1f70cc9
|
[] |
no_license
|
Jason24-Zeng/Leetcode
|
https://github.com/Jason24-Zeng/Leetcode
| 0 | 0 | null | null | null | null |
Jupyter Notebook
| false | false |
.py
| 6,620 |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.15.2
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# Given the root of a binary tree, the depth of each node is the shortest distance to the root.
#
# Return the smallest subtree such that it contains all the deepest nodes in the original tree.
#
# A node is called the deepest if it has the largest depth possible among any node in the entire tree.
#
# The subtree of a node is the tree consisting of that node, plus the set of all descendants of that node.
#
# Note: This question is the same as 1123: https://leetcode.com/problems/lowest-common-ancestor-of-deepest-leaves/
#
#
#
# Example 1:
#
# ```
# Input: root = [3,5,1,6,2,0,8,null,null,7,4]
# Output: [2,7,4]
# Explanation: We return the node with value 2, colored in yellow in the diagram.
# The nodes coloured in blue are the deepest nodes of the tree.
# Notice that nodes 5, 3 and 2 contain the deepest nodes in the tree but node 2 is the smallest subtree among them, so we return it.
# ```
# Example 2:
# ```
# Input: root = [1]
# Output: [1]
# Explanation: The root is the deepest node in the tree.
# ```
# Example 3:
# ```
# Input: root = [0,1,3,null,2]
# Output: [2]
# Explanation: The deepest node in the tree is 2, the valid subtrees are the subtrees of nodes 2, 1 and 0 but the subtree of node 2 is the smallest.
# ```
#
# Constraints:
#
# - The number of nodes in the tree will be in the range [1, 500].
# - 0 <= Node.val <= 500
# - The values of the nodes in the tree are unique.
# Definition for a binary tree node (provided by LeetCode; defined here so the cell runs standalone).
class TreeNode:
    def __init__(self, val=0, left=None, right=None):
        self.val = val
        self.left = left
        self.right = right
class Solution:
def subtreeWithAllDeepest(self, root: TreeNode) -> TreeNode:
depth = {None:-1}
def dfs(node, parent = None):
if node:
depth[node] = depth[parent] + 1
dfs(node.left, node)
dfs(node.right, node)
dfs(root)
max_depth = max(depth.values())
def answer(node):
if not node or depth[node] == max_depth:
return node
L, R = answer(node.left), answer(node.right)
return node if L and R else L or R
return answer(root)
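# A quick sanity check (illustrative only, not part of the LeetCode submission): rebuild the tree from Example 1 by hand, using the `TreeNode` and `Solution` classes above, and confirm that the returned subtree is rooted at the node with value 2.
# +
root = TreeNode(3,
                TreeNode(5, TreeNode(6), TreeNode(2, TreeNode(7), TreeNode(4))),
                TreeNode(1, TreeNode(0), TreeNode(8)))
print(Solution().subtreeWithAllDeepest(root).val)  # expected output: 2
# -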
# The feature matrix is stored in the "Compressed Sparse Row Format" (CSR). You can read more about this format here: https://en.wikipedia.org/wiki/Sparse_matrix#Compressed_sparse_row_(CSR,_CRS_or_Yale_format)
sparse_feature_matrix = count_vectorizer.fit_transform(train_data.data)
dense_feature_matrix = sparse_feature_matrix.toarray()
print('Dense matrix shape', dense_feature_matrix.data.shape)
print('Sparse matrix shape', sparse_feature_matrix.shape)
# %%time
dense_feature_matrix.sum()
# %%time
sparse_feature_matrix.sum()
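# To make the CSR layout described above concrete, here is a tiny self-contained example (the values are arbitrary): scipy keeps only the non-zero entries plus their column indices and per-row offsets.
# +
from scipy import sparse
tiny = sparse.csr_matrix([[0, 0, 3],
                          [4, 0, 0],
                          [0, 5, 6]])
print(tiny.data)     # non-zero values: [3 4 5 6]
print(tiny.indices)  # column index of each stored value: [2 0 1 2]
print(tiny.indptr)   # where each row starts inside `data`: [0 1 2 4]
# -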
# ### Token-to-word dictionary
# Let's build the reverse dictionary, which maps each token (number) back to a word from the original vocabulary
num_2_words = {
v: k
for k, v in count_vectorizer.vocabulary_.items()
}
# ## Let's start training linear models
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import cross_val_score
# ### Train a logistic regression to predict the topic of a document
algo = LogisticRegression()
algo.fit(dense_feature_matrix, train_data.target)
# Words with the most negative weights are characteristic of class 0
#
# Words with the largest positive weights are characteristic of class 1
# +
W = algo.coef_.shape[1]
NUM_WORDS = 10
class_2_function = {'Baseball': heapq.nsmallest, 'Hockey': heapq.nlargest}
for category, function in class_2_function.items():
topic_words = [
num_2_words[w_num]
for w_num in function(NUM_WORDS, range(W), key=lambda w: algo.coef_[0, w])
]
print(category)
print(', '.join(topic_words))
# -
# ### Evaluate the quality
#
# Compare the quality on the cross-validation folds with the quality on the training set and on the held-out test set
algo = LogisticRegression()
arr = cross_val_score(algo, dense_feature_matrix, train_data.target, cv=5, scoring='accuracy')
print(arr)
print(np.mean(arr))
# Why is this cross-validation incorrect?
algo.fit(dense_feature_matrix, train_data.target)
print('Train accuracy', accuracy_score(algo.predict(dense_feature_matrix), train_data.target))
print('Test accuracy', accuracy_score(algo.predict(count_vectorizer.transform(test_data.data)), test_data.target))
# We can see overfitting. Why?
# ### Regularization
# Let's add an l1 regularizer with coefficient 0.1
algo = LogisticRegression(penalty='l1', C=0.1)
arr = cross_val_score(algo, dense_feature_matrix, train_data.target, cv=5, scoring='accuracy')
print(arr)
print(np.mean(arr))
algo.fit(sparse_feature_matrix, train_data.target)
print('Train accuracy', accuracy_score(algo.predict(dense_feature_matrix), train_data.target))
print('Test accuracy', accuracy_score(algo.predict(count_vectorizer.transform(test_data.data)), test_data.target))
# Adding the regularizer reduces the gap between train and test, but makes the quality worse.
# ## Pipeline
# To avoid doing vectorization and training as separate steps, there is the Pipeline class. It lets you chain a sequence of operations together.
from sklearn.pipeline import Pipeline
pipeline = Pipeline([("vectorizer", CountVectorizer(min_df=3, ngram_range=(1, 2))),
("algo", LogisticRegression())])
pipeline.fit(train_data.data, train_data.target)
print('Train accuracy', accuracy_score(pipeline.predict(train_data.data), train_data.target))
print('Test accuracy', accuracy_score(pipeline.predict(test_data.data), test_data.target))
# The values are roughly the same as we obtained earlier when doing the steps separately.
from sklearn.pipeline import make_pipeline
# During cross-validation, the CountVectorizer must not be fitted on the test fold. Pipeline makes this easy to guarantee.
pipeline = make_pipeline(CountVectorizer(min_df=3, ngram_range=(1, 2)),
LogisticRegression())
arr = cross_val_score(pipeline, train_data.data, train_data.target, cv=5, scoring='accuracy')
print(arr)
print(np.mean(arr))
pipeline = make_pipeline(CountVectorizer(min_df=3, ngram_range=(1, 2)),
LogisticRegression())
arr = cross_val_score(pipeline, train_data.data, train_data.target, cv=3, scoring='accuracy')
print(arr)
print(np.mean(arr))
# New data preprocessing steps can be added to a Pipeline.
from sklearn.feature_extraction.text import TfidfTransformer
Image('pics/tfidf.png')
# You can read more about tf-idf here: https://ru.wikipedia.org/wiki/TF-IDF
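# As a quick reminder, the classic definition (sklearn's `TfidfTransformer` uses a smoothed variant of it) is
#
# $$\text{tf-idf}(t, d) = \text{tf}(t, d)\cdot \log\frac{N}{\text{df}(t)},$$
#
# where $\text{tf}(t, d)$ is the count of term $t$ in document $d$, $N$ is the number of documents and $\text{df}(t)$ is the number of documents containing $t$; with the default `smooth_idf=True`, sklearn computes the idf part as $\log\frac{1+N}{1+\text{df}(t)} + 1$.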
pipeline = make_pipeline(CountVectorizer(min_df=3, ngram_range=(1, 2)),
TfidfTransformer(),
LogisticRegression())
arr = cross_val_score(pipeline, train_data.data, train_data.target, cv=5, scoring='accuracy')
print(arr)
print(np.mean(arr))
pipeline.fit(train_data.data, train_data.target)
print('Train accuracy', accuracy_score(pipeline.predict(train_data.data), train_data.target))
print('Test accuracy', accuracy_score(pipeline.predict(test_data.data), test_data.target))
| 7,270 |
/13 Selenium/Selenium Abfrage (Zefix).ipynb
|
76ceebe0a19019690dbfe8c16737871717476bc4
|
[] |
no_license
|
jojonini/casdj19
|
https://github.com/jojonini/casdj19
| 0 | 2 | null | 2023-04-12T10:15:44 | 2020-01-12T17:36:35 |
Jupyter Notebook
|
Jupyter Notebook
| false | false |
.py
| 12,655 |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.15.2
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import numpy as np
import random
import math
import matplotlib.pyplot as plt
# +
# Search interval [a, b]
a = -1
b = 2
# Additional data
epsilon = 1e-6
populationSize = 100
# Iterations
t_max_i = 20
t_max = 200
# Probabilities
probCrossover = 0.6
probMutation = 0.02
# -
def f(x):
return x * math.sin(10 * math.pi * x) + 1
def getBitArraySize(n):
bitArraySize = 0
while n > 0:
n >>= 1
bitArraySize += 1
return bitArraySize
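# Quick sanity check (illustrative): discretising [a, b] = [-1, 2] with step epsilon = 1e-6 gives
# roughly 3e6 values, which need ceil(log2(3e6)) = 22 bits; the helper above agrees with
# Python's built-in int.bit_length for this case.
print(getBitArraySize(3_000_000), (3_000_000).bit_length())  # both print 22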
# +
def bitToFloat(bitArray):
return a + ((b - a)) / (2**bitArraySize - 1) * int(bitArray, 2)
def fitness(individual):
return f(bitToFloat(individual))
# +
def tournamentMethod(population):
result = random.sample(population, 2)
firstFitnessFunction = fitness(result[0])
secondFitnessFunction = fitness(result[1])
if firstFitnessFunction > secondFitnessFunction:
best = result[0]
else:
best = result[1]
return best
def mutation(individual):
randomProbability = np.random.random_sample()
if randomProbability <= probMutation:
charIndexToMutate = np.random.randint(low = 0, high = bitArraySize)
charToMutate = individual[charIndexToMutate]
if charToMutate == '0':
individual = individual[:charIndexToMutate] + "1" + individual[charIndexToMutate + 1:]
else:
individual = individual[:charIndexToMutate] + "0" + individual[charIndexToMutate + 1:]
return individual
def crossover(population, propability):
    # high is exclusive in np.random.randint, so use populationSize to make every index selectable
    individualOne = population[np.random.randint(low = 0, high = populationSize)]
    individualTwo = population[np.random.randint(low = 0, high = populationSize)]
probabilityForCrossover = np.random.random_sample()
if probabilityForCrossover <= propability:
randomIndex = np.random.randint(low = 0, high = bitArraySize)
firstHalf_IndividualOne = individualOne[:randomIndex]
secondHalf_IndividualOne = individualOne[randomIndex:]
firstHalf_IndividualTwo = individualTwo[:randomIndex]
secondHalf_IndividualTwo = individualTwo[randomIndex:]
        # only one offspring is returned per call: the second parent's head joined to the first parent's tail
        crossovered = firstHalf_IndividualTwo + secondHalf_IndividualOne
    else:
        # no crossover happened: pass one parent through unchanged
        crossovered = individualTwo
return crossovered
# +
def applyTournamentForPopulation(population):
newPopulation = []
while len(newPopulation) < populationSize:
best = tournamentMethod(population)
newPopulation.append(best)
return newPopulation
def mutatePopulation(population):
newPopulation = []
for individual in population:
mutatedIndividual = mutation(individual)
newPopulation.append(mutatedIndividual)
return newPopulation
def crossoverPopulation(population, propability):
newPopulation = []
while len(newPopulation) < populationSize:
crossoveredIndividuals = crossover(population, propability)
newPopulation.append(crossoveredIndividuals)
return newPopulation
# -
# +
initialPopulation = []
population = []
n = (int)((b - a) / epsilon)
bitArraySize = getBitArraySize(n)
for i in range(populationSize):
initialPopulation += [np.random.randint(2, size = (bitArraySize,))]
for i in range(populationSize):
population.append("".join(str(x) for x in initialPopulation[i]))
q = []
e = []
best = []
bestX = []
i = 0
max_i = 0  # counts consecutive generations whose best fitness improves by less than epsilon
while i <= t_max:
population = applyTournamentForPopulation(population)
population = mutatePopulation(population)
population = crossoverPopulation(population, probCrossover)
fitnesses = []
for individual in population:
populationFitness = fitness(individual)
bestX.append(individual)
fitnesses.append(populationFitness)
best.append(max(fitnesses))
if i > 1:
#print('\nf_max = ', best[i])
        if best[i] - best[i - 1] < epsilon:
            max_i += 1
            if max_i > t_max_i:  # stop after t_max_i consecutive stagnant generations
                break
q.append(i)
e.append(best[i])
i += 1
print(bitToFloat(population[fitnesses.index(max(fitnesses))]))  # decode the best individual of the final generation
plt.plot(q,e)
# +
probs = []
res = []
for p in np.arange(0, 1, 0.1):
#print('\n-----', p, '-----')
best = []
i = 0
max_i = 0
while i <= t_max:
population = applyTournamentForPopulation(population)
population = mutatePopulation(population)
population = crossoverPopulation(population, p)
fitnesses = []
for individual in population:
populationFitness = fitness(individual)
fitnesses.append(populationFitness)
best.append(max(fitnesses))
if i > 1:
#print('\nf_max = ', best[i])
if best[i] - best[i - 1] < epsilon:
max_i += 1
if max_i > t_max_i:
break
i += 1
probs.append(p)
res.append(best[i - 1])
#print('p = ', p)
#print('f = ', best[i - 1])
plt.plot(probs, res)
print(res)
print(probs)
# -
| 5,292 |
/Transformed_Data/tuition_enrollment_info.ipynb
|
74bae152f80afba6b9135cb6cb9eb99bf495d50b
|
[
"MIT"
] |
permissive
|
pmaxlauve/univ-economics
|
https://github.com/pmaxlauve/univ-economics
| 0 | 0 |
MIT
| 2021-03-03T00:28:05 | 2021-03-02T19:35:13 |
Jupyter Notebook
|
Jupyter Notebook
| false | false |
.py
| 195,445 |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.15.2
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
#dependencies
import scipy
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
#read 4 year tuition csv
year_4_tuition_df = pd.read_csv("//users/rebeccatast/desktop/year_4_tuition.csv")
year_4_tuition_df.head()
#read 4 year enrollment csv
year_4_enrollment_df = pd.read_csv("//users/rebeccatast/desktop/year_4_enrollment.csv")
year_4_enrollment_df.head()
#Merge datasets and print
complete_data_4_df = pd.merge(year_4_tuition_df, year_4_enrollment_df,how = 'left', on=["4_id","4_id"])
print (complete_data_4_df)
complete_data_4_df.head()
#Group by 4 year public universities and tuition
public_group_four_year = complete_data_4_df.set_index("public_tuition\n").groupby(["public_enrollment"])
#Public tuition and enrollment
public_four_by_tuition = complete_data_4_df.set_index("public_tuition\n")["public_enrollment"]
print(public_four_by_tuition)
#pulling variables from df for plotting
pub_tuition = complete_data_4_df["public_tuition\n"]
pri_tuition = complete_data_4_df["private_tuition\n"]
pub_enrollment = complete_data_4_df["public_enrollment"]
pri_enrollment = complete_data_4_df["private_enrollment"]
# +
#Scatter Plot 4 year public university tuition vs. enrollment
plt.scatter(pub_enrollment,
pub_tuition,
edgecolor="black", linewidths=1, marker="o",alpha=0.8)
plt.ylabel("Tuition (dollars)")
plt.xlabel("Enrollment (million)")
plt.title("Enrollment vs. Tuition at 4-year public universities 2009-2019")
plt.grid(True)
plt.savefig("/users/rebeccatast/desktop/Fig1.png")
plt.show()
# -
#correlation between 4 year public tuition and public enrollment
print(complete_data_4_df[["public_tuition\n",'public_enrollment']].corr())
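# Note: `DataFrame.corr()` reports the Pearson correlation coefficient by default,
#
# $$r = \frac{\sum_i (x_i-\bar x)(y_i-\bar y)}{\sqrt{\sum_i (x_i-\bar x)^2}\,\sqrt{\sum_i (y_i-\bar y)^2}},$$
#
# so values close to $\pm 1$ indicate a strong linear relationship between tuition and enrollment, while values near 0 indicate little linear association.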
#Group by private 4 year universities and tuition
private_group_four_year = complete_data_4_df.set_index("private_tuition\n").groupby(["private_enrollment"])
#Private tuition and enrollment
private_four_by_tuition = complete_data_4_df.set_index("private_tuition\n")["private_enrollment"]
print(private_four_by_tuition)
# +
#Scatter Plot 4 year private university tuition vs. enrollment
plt.scatter(pri_enrollment,
pri_tuition,
edgecolor="black", linewidths=1, marker="o",alpha=0.8)
plt.ylabel("Tuition (dollars)")
plt.xlabel("Enrollment (million)")
plt.title("Enrollment vs. Tuition at 4-year private universities 2009-2019")
plt.grid(True)
plt.savefig("/users/rebeccatast/desktop/Fig2.png")
plt.show()
# -
#correlation between 4 year private tuition and private enrollment
print(complete_data_4_df[["private_tuition\n",'private_enrollment']].corr())
#read 2 year tuition csv
year_2_tuition_df = pd.read_csv("//users/rebeccatast/desktop/year_2_tuition.csv")
year_2_tuition_df.head()
#read 2 year enrollment csv
year_2_enrollment_df = pd.read_csv("//users/rebeccatast/desktop/year_2_enrollment.csv")
year_2_enrollment_df.head()
#Merge datasets and print
complete_data_2_df = pd.merge(year_2_tuition_df, year_2_enrollment_df,how = 'left', on=["2_id","2_id"])
print (complete_data_2_df)
complete_data_2_df.head()
#Group by 2 year public universities and tuition
public_group_two_year = complete_data_2_df.set_index("public_tuition").groupby(["public_enrollment"])
#Public 2 year universities tuition and enrollment
public_two_by_tuition = complete_data_2_df.set_index("public_tuition")["public_enrollment"]
print(public_two_by_tuition)
#pulling variables from df for plotting
pub2_tuition = complete_data_2_df["public_tuition"]
pri2_tuition = complete_data_2_df["private_tuition"]
pub2_enrollment = complete_data_2_df["public_enrollment"]
pri2_enrollment = complete_data_2_df["private_enrollment"]
# +
#Scatter Plot 2 year public university tuition vs. enrollment
plt.scatter(pub2_enrollment,
pub2_tuition,
edgecolor="black", linewidths=1, marker="o",alpha=0.8)
plt.ylabel("Tuition (dollars)")
plt.xlabel("Enrollment (million)")
plt.title("Enrollment vs. Tuition at 2-year public universities 2009-2019")
plt.grid(True)
plt.savefig("/users/rebeccatast/desktop/Fig3.png")
plt.show()
# -
#correlation between tuition and enrollment public 2-year universities
print(complete_data_2_df[["public_tuition",'public_enrollment']].corr())
#Private 2 year universities tuition and enrollment
private_two_by_tuition = complete_data_2_df.set_index("private_tuition")["private_enrollment"]
print(private_two_by_tuition)
# +
#Scatter Plot 2 year private university tuition vs. enrollment
plt.scatter(pri2_enrollment,
pri2_tuition,
edgecolor="black", linewidths=1, marker="o",alpha=0.8)
plt.ylabel("Tuition (10,000 dollars)")
plt.xlabel("Enrollment")
plt.title("Enrollment vs. Tuition at 2-year private universities 2009-2019")
plt.grid(True)
plt.savefig("/users/rebeccatast/desktop/Fig4.png")
plt.show()
# -
#correlation between tuition and enrollment private 2-year universities
print(complete_data_2_df[["private_tuition",'private_enrollment']].corr())
#read 33 year data csv for 2 year university tuition and enrollment
year_2_numbers_df = pd.read_csv("//users/rebeccatast/desktop/2_year_tui_enroll.csv")
year_2_numbers_df.head()
#pulling variables from df for plotting
public2_tuition = year_2_numbers_df["public2_tuition"]
private2_tuition = year_2_numbers_df["private2_tuition"]
public2_enrollment = year_2_numbers_df["public2_enrollment"]
private2_enrollment = year_2_numbers_df["private2_enrollment"]
#Group by 2 year public universities and tuition 1986-2019
public_group_two_year_full = year_2_numbers_df.set_index("public2_tuition").groupby(["public2_enrollment"])
#Public 2 year universities tuition and enrollment 1986-2019
public_two_by_tuition_full = year_2_numbers_df.set_index("public2_tuition")["public2_enrollment"]
print(public_two_by_tuition_full)
# +
#Scatter Plot 2 year public university tuition vs. enrollment 1986-2019
plt.scatter(public2_enrollment,
public2_tuition,
edgecolor="black", linewidths=1, marker="o",alpha=0.8)
plt.ylabel("Tuition")
plt.xlabel("Enrollment (millions)")
plt.title("Enrollment vs. Tuition at 2-year public universities 1986-2019")
plt.grid(True)
plt.savefig("/users/rebeccatast/desktop/Fig5.png")
plt.show()
# -
#correlation between tuition and enrollment public 2-year universities 1986-2019
print(year_2_numbers_df[["public2_tuition","public2_enrollment"]].corr())
#correlation between tuition and enrollment private 2-year universities 1986-2019
print(year_2_numbers_df[["private2_tuition","private2_enrollment"]].corr())
#Private 2 year universities tuition and enrollment 1986-2019
private_two_by_tuition_full = year_2_numbers_df.set_index("private2_tuition")["private2_enrollment"]
print(private_two_by_tuition_full)
# +
#Scatter Plot 2 year private university tuition vs. enrollment 1986-2019
plt.scatter(private2_enrollment,
private2_tuition,
edgecolor="black", linewidths=1, marker="o",alpha=0.8)
plt.ylabel("Tuition")
plt.xlabel("Enrollment")
plt.title("Enrollment vs. Tuition at 2-year private universities 1986-2019")
plt.grid(True)
plt.savefig("/users/rebeccatast/desktop/Fig6.png")
plt.show()
# -
#read 33 year data csv for 4 year university tuition and enrollment
year_4_numbers_df = pd.read_csv("//users/rebeccatast/desktop/4_year_tui_enroll.csv")
year_4_numbers_df.head()
#pulling variables from df for plotting
public4_tuition = year_4_numbers_df["public4_tuition"]
private4_tuition = year_4_numbers_df["private4_tuition"]
public4_enrollment = year_4_numbers_df["public4_enrollment"]
private4_enrollment = year_4_numbers_df["private4_enrollment"]
#Group by 4 year public universities and tuition 1986-2019
public_group_four_year_full = year_4_numbers_df.set_index("public4_tuition").groupby(["public4_enrollment"])
#Public 4 year universities tuition and enrollment 1986-2019
public_four_by_tuition_full = year_4_numbers_df.set_index("public4_tuition")["public4_enrollment"]
print(public_four_by_tuition_full)
# +
#Scatter Plot 4 year public university tuition vs. enrollment 1986-2019
plt.scatter(public4_enrollment,
public4_tuition,
edgecolor="black", linewidths=1, marker="o",alpha=0.8)
plt.ylabel("Tuition")
plt.xlabel("Enrollment (million)")
plt.title("Enrollment vs. Tuition at 4-year public universities 1986-2019")
plt.grid(True)
plt.savefig("/users/rebeccatast/desktop/Fig7.png")
plt.show()
# -
#correlation between tuition and enrollment public 4-year universities 1986-2019
print(year_4_numbers_df[["public4_tuition","public4_enrollment"]].corr())
#Group by 4 year private universities and tuition 1986-2019
private_group_four_year_full = year_4_numbers_df.set_index("private4_tuition").groupby(["private4_enrollment"])
#Private 4 year universities tuition and enrollment 1986-2019
private_four_by_tuition_full = year_4_numbers_df.set_index("private4_tuition")["private4_enrollment"]
print(private_four_by_tuition_full)
# +
#Scatter Plot 4 year private university tuition vs. enrollment 1986-2019
plt.scatter(private4_enrollment,
private4_tuition,
edgecolor="black", linewidths=1, marker="o",alpha=0.8)
plt.ylabel("Tuition")
plt.xlabel("Enrollment (million)")
plt.title("Enrollment vs. Tuition at 4-year private universities 1986-2019")
plt.grid(True)
plt.savefig("/users/rebeccatast/desktop/Fig8.png")
plt.show()
# -
#correlation between tuition and enrollment private 4-year universities 1986-2019
print(year_4_numbers_df[["private4_tuition","private4_enrollment"]].corr())
| 9,701 |
/.ipynb_checkpoints/hw1.1addition-checkpoint.ipynb
|
dd580933d6fae3ab99f6440b4b6377568181feb2
|
[] |
no_license
|
AleksandrPanov/character_recognition
|
https://github.com/AleksandrPanov/character_recognition
| 1 | 1 | null | null | null | null |
Jupyter Notebook
| false | false |
.py
| 25,295 |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.15.2
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import pymc
import pandas as pd
# +
mroz = pd.read_stata("datasets//Mroz.dta")
mroz.head()
# -
mroz['wc'] = mroz.wc.map({'yes':1, 'no':0})
mroz['hc'] = mroz.hc.map({'yes':1, 'no':0})
mroz['lfp'] = mroz.lfp.map({'yes':1, 'no':0})
mroz.head()
# +
X = mroz
b0 = pymc.Normal('b0', mu=0, tau=1e-12, value= 0)
b_k5 = pymc.Normal('b_k5', mu=0, tau=1e-12, value= 0)
b_k618 = pymc.Normal('b_k618', mu=0, tau=1e-12, value= 0)
b_age = pymc.Normal('b_age', mu=0, tau=1e-12, value= 0)
b_wc = pymc.Normal('b_wc', mu=0, tau=1e-12, value= 0)
b_hc = pymc.Normal('b_hc', mu=0, tau=1e-12, value= 0)
b_lwg = pymc.Normal('b_lwg', mu=0, tau=1e-12, value= 0)
b_inc = pymc.Normal('b_inc', mu=0, tau=1e-12, value= 0)
p = pymc.InvLogit("p", b0 + b_k5 * X['k5'] + b_k618 * X['k618'] + b_age * X['age'] + b_wc * X['wc'] + b_hc * X['hc'] + b_lwg * X['lwg'] + b_inc * X['inc'])
y_obs = pymc.Bernoulli("y_obs", p=p, value=mroz.lfp, observed=True)
# -
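# Written out, the model above is a Bayesian logistic regression with very diffuse normal priors (tau = 1e-12) on the coefficients:
#
# $$\mathrm{lfp}_i \sim \mathrm{Bernoulli}(p_i), \qquad \mathrm{logit}(p_i) = \beta_0 + \beta_{k5}\,\mathrm{k5}_i + \beta_{k618}\,\mathrm{k618}_i + \beta_{age}\,\mathrm{age}_i + \beta_{wc}\,\mathrm{wc}_i + \beta_{hc}\,\mathrm{hc}_i + \beta_{lwg}\,\mathrm{lwg}_i + \beta_{inc}\,\mathrm{inc}_i$$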
model = pymc.Model([b0, b_k5, b_k618, b_age, b_wc, b_hc, b_lwg, b_inc, X, y_obs])
M = pymc.MCMC(model)
M.sample(iter=50000)
M.summary()
b_k5.stats()['mean']
| 1,357 |
/Inceptex_python_exercise1.ipynb
|
c9359e9ef84035ad1f2c675253dbaf6d96f0db5f
|
[] |
no_license
|
Ashima-6611/Inceptez_exercise
|
https://github.com/Ashima-6611/Inceptez_exercise
| 0 | 0 | null | null | null | null |
Jupyter Notebook
| false | false |
.py
| 5,819 |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.15.2
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# Exercise Question 1: Given an input list, remove the element at index 4, add it at the 2nd position, and also append it at the end of the list
# Original list [34, 54, 67, 89, 11, 43, 94]
# List After removing element at index 4 [34, 54, 67, 89, 43, 94]
# List after Adding element at index 2 [34, 54, 11, 67, 89, 43, 94]
# List after Adding element at last [34, 54, 11, 67, 89, 43, 94, 11]
lst = [34, 54, 67, 89, 11, 43, 94]
temp_lst = lst.pop(4)  # remove the element at index 4 (value 11) and keep it
lst.insert(2,temp_lst)
lst.append(temp_lst)
lst
# Exercise Question 2: Given a Python list, you should be able to display the list in the following (reversed) order
# The Original List is: [100, 200, 300, 400, 500]
# The Expected result is: [500, 400, 300, 200, 100]
lst1=[100, 200, 300, 400, 500]
lst1.reverse()
lst1
lst2=[100, 200, 300, 400, 500]
lst2.sort(reverse=True)
lst2
# Exercise Question 3: Concatenate (join) two lists in the following order
# The original list1: ['Hello ', 'take ']
# The original list2: ['Dear', 'Sir']
# The Expected result is: ['Hello ', 'take ', 'Dear', 'Sir']
# +
list1 = ['Hello ', 'take ']
list2 = ['Dear', 'Sir']
list1.extend(list2)
print(list1)
# -
# Exercise Question 4: Add item 7000 after 6000 in the following Python List
# The original list is: [10, 20, [300, 400, [5000, 6000], 500], 30, 40]
# The expected list is: [10, 20, [300, 400, [5000, 6000, 7000], 500], 30, 40]
lst= [10, 20, [300, 400, [5000, 6000], 500], 30, 40]
lst[2][2].append(7000)
lst
# Exercise Question 5: Given a nested list, extend it by adding the sub list ["h", "i", "j"] in such a way that it matches the expected result below
# The original List is: ['a', 'b', ['c', ['d', 'e', ['f', 'g'], 'k'], 'l'], 'm', 'n']
# The expected result is: ['a', 'b', ['c', ['d', 'e', ['f', 'g', 'h', 'i', 'j'], 'k'], 'l'], 'm', 'n']
lst = ['a', 'b', ['c', ['d', 'e', ['f', 'g'], 'k'], 'l'], 'm', 'n']
ext=("h","i","j")
lst[2][1][2].extend(ext)
lst
# Given a Python list, find value 20 in the list, and if it is present, replace it with 200. Only update the first occurrence of a value
lst = [5, 10, 15, 20, 25, 50, 20]
if 20 in lst:
    lst[lst.index(20)] = 200  # replace only the first occurrence of 20
lst
| 2,375 |
/docs/tutorials/random_noise_generation.ipynb
|
fbf0001bfac2725899d444f8f29f02401fd14851
|
[
"Apache-2.0"
] |
permissive
|
Sally-Er/federated
|
https://github.com/Sally-Er/federated
| 0 | 0 | null | null | null | null |
Jupyter Notebook
| false | false |
.py
| 22,413 |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.15.2
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# %run Prepare_Python.ipynb
# ## 0. Basic usage
# There are two required arguments:
#
# - the first argument is expected to be a dataframe with both group indicator variables and covariates,
# - the second argument specifies a list with names of column which indicate the group membership.
#
# Additional arguments can be provided, such as `name` (specifies `xlab()` for intersection matrix) or `width_ratio` (specifies how much space should be occupied by the set size panel). Other such arguments are discussed at length later in this document.
# + magic_args="-w 800 -h 300" language="R"
# upset(movies, genres, name='genre', width_ratio=0.1)
# -
# ### 0.1 Selecting intersections
# We will focus on the intersections with at least ten members `(min_size=10)` and on a few variables which are significantly different between the intersections (see 2. Running statistical tests).
#
# When using `min_size`, the empty groups will be skipped by default (e.g. *Short* movies would have no overlap with size of 10). To keep all groups pass `keep_empty_groups=TRUE`:
# + magic_args="-w 800 -h 300" language="R"
# (
# upset(movies, genres, name='genre', width_ratio=0.1, min_size=10, wrap=TRUE, set_sizes=FALSE)
# + ggtitle('Without empty groups (Short dropped)')
# + # adding plots is possible thanks to patchwork
# upset(movies, genres, name='genre', width_ratio=0.1, min_size=10, keep_empty_groups=TRUE, wrap=TRUE, set_sizes=FALSE)
# + ggtitle('With empty groups')
# )
# -
# When empty columns are detected a warning will be issued. To silence it, pass `warn_when_dropping_groups=FALSE`. The complementary `max_size` argument can be used in tandem.
# You can also select intersections by degree (`min_degree` and `max_degree`):
# + magic_args="-w 800 -h 300" language="R"
# upset(
# movies, genres, width_ratio=0.1,
# min_degree=3,
# )
# -
# Or request a constant number of intersections with `n_intersections`:
# + magic_args="-w 800 -h 300" language="R"
# upset(
# movies, genres, width_ratio=0.1,
# n_intersections=15
# )
# -
# ### 0.2 Region selection modes
# There are four modes defining the regions of interest on corresponding Venn diagram:
#
# - `exclusive_intersection` region: intersection elements that belong to the sets defining the intersection but not to any other set (alias: *distinct*), **default**
# - `inclusive_intersection` region: intersection elements that belong to the sets defining the intersection including overlaps with other sets (alias: *intersect*)
# - `exclusive_union` region: union elements that belong to the sets defining the union, *excluding* those overlapping with any other set
# - `inclusive_union` region: union elements that belong to the sets defining the union, *including* those overlapping with any other set (alias: *union*)
# Example: given three sets $A$, $B$ and $C$ with number of elements defined by the Venn diagram below
# + language="R"
# abc_data = create_upset_abc_example()
#
# abc_venn = (
# ggplot(arrange_venn(abc_data))
# + coord_fixed()
# + theme_void()
# + scale_color_venn_mix(abc_data)
# )
#
# (
# abc_venn
# + geom_venn_region(data=abc_data, alpha=0.05)
# + geom_point(aes(x=x, y=y, color=region), size=1)
# + geom_venn_circle(abc_data)
# + geom_venn_label_set(abc_data, aes(label=region))
# + geom_venn_label_region(
# abc_data, aes(label=size),
# outwards_adjust=1.75,
# position=position_nudge(y=0.2)
# )
# + scale_fill_venn_mix(abc_data, guide='none')
# )
# -
# For the above sets $A$ and $B$ the region selection modes correspond to region of Venn diagram defined as follows:
#
# - exclusive intersection: $(A \cap B) \setminus C$
# - inclusive intersection: $A \cap B$
# - exclusive union: $(A \cup B) \setminus C$
# - inclusive union: $A \cup B$
#
# and have the total number of elements as in the table below:
# | members \ mode | exclusive int. | inclusive int. | exclusive union | inclusive union |
# |------------------|----------|-----------|------------------|-----------------|
# | (A, B) | 10 | 11 | 110 | 123 |
# | (A, C) == (B, C) | 6 | 7 | 256 | 273 |
# | (A) == (B) | 50 | 67 | 50 | 67 |
# | (C) | 200 | 213 | 200 | 213 |
# | (A, B, C) | 1 | 1 | 323 | 323 |
# | () | 2 | 2 | 2 | 2 |
# + magic_args="-w 600 -h 650" language="R"
# simple_venn = (
# abc_venn
# + geom_venn_region(data=abc_data, alpha=0.3)
# + geom_point(aes(x=x, y=y), size=0.75, alpha=0.3)
# + geom_venn_circle(abc_data)
# + geom_venn_label_set(abc_data, aes(label=region), outwards_adjust=2.55)
# )
# highlight = function(regions) scale_fill_venn_mix(
# abc_data, guide='none', highlight=regions, inactive_color='NA'
# )
#
# (
# (
# simple_venn + highlight(c('A-B')) + labs(title='Exclusive intersection of A and B')
# | simple_venn + highlight(c('A-B', 'A-B-C')) + labs(title='Inclusive intersection of A and B')
# ) /
# (
# simple_venn + highlight(c('A-B', 'A', 'B')) + labs(title='Exclusive union of A and B')
# | simple_venn + highlight(c('A-B', 'A-B-C', 'A', 'B', 'A-C', 'B-C')) + labs(title='Inclusive union of A and B')
# )
# )
# -
# When customizing the `intersection_size()` it is important to adjust the mode accordingly, as it defaults to `exclusive_intersection` and cannot be automatically deduced when user customizations are being applied:
# + magic_args="-w 800 -h 450" language="R"
# abc_upset = function(mode) upset(
# abc_data, c('A', 'B', 'C'), mode=mode, set_sizes=FALSE,
# encode_sets=FALSE,
# queries=list(upset_query(intersect=c('A', 'B'), color='orange')),
# base_annotations=list(
# 'Size'=(
# intersection_size(
# mode=mode,
# mapping=aes(fill=exclusive_intersection),
# size=0,
# text=list(check_overlap=TRUE)
# ) + scale_fill_venn_mix(
# data=abc_data,
# guide='none',
# colors=c('A'='red', 'B'='blue', 'C'='green3')
# )
# )
# )
# )
#
# (
# (abc_upset('exclusive_intersection') | abc_upset('inclusive_intersection'))
# /
# (abc_upset('exclusive_union') | abc_upset('inclusive_union'))
# )
# -
# ### 0.3 Displaying all intersections
# To display all possible intersections (rather than only the observed ones) use `intersections='all'`.
#
# **Note 1**: it is usually desirable to filter all the possible intersections down with `max_degree` and/or `min_degree`, because generating all combinations can easily use up all available RAM when dealing with many sets (e.g. all human genes) due to the sheer number of possible combinations
#
# **Note 2**: using `intersections='all'` is only reasonable for mode different from the default *exclusive intersection*.
# + magic_args="-w 800 -h 300" language="R"
# upset(
# movies, genres,
# width_ratio=0.1,
# min_size=10,
# mode='inclusive_union',
# base_annotations=list('Size'=(intersection_size(counts=FALSE, mode='inclusive_union'))),
# intersections='all',
# max_degree=3
# )
# -
# ## 1. Adding components
# We can add multiple annotation components (also called panels) using one of the three methods demonstrated below:
# + magic_args="-w 800 -h 800" language="R"
#
# set.seed(0) # keep the same jitter for identical plots
#
# upset(
# movies,
# genres,
# annotations = list(
# # 1st method - passing list:
# 'Length'=list(
# aes=aes(x=intersection, y=length),
# # provide a list if you wish to add several geoms
# geom=geom_boxplot(na.rm=TRUE)
# ),
# # 2nd method - using ggplot
# 'Rating'=(
# # note that aes(x=intersection) is supplied by default and can be skipped
# ggplot(mapping=aes(y=rating))
# # checkout ggbeeswarm::geom_quasirandom for better results!
# + geom_jitter(aes(color=log10(votes)), na.rm=TRUE)
# + geom_violin(alpha=0.5, na.rm=TRUE)
# ),
# # 3rd method - using `upset_annotate` shorthand
# 'Budget'=upset_annotate('budget', geom_boxplot(na.rm=TRUE))
# ),
# min_size=10,
# width_ratio=0.1
# )
# -
# You can also use barplots to demonstrate differences in proportions of categorical variables:
# + magic_args="-w 800 -h 500" language="R"
#
# upset(
# movies,
# genres,
# annotations = list(
# 'MPAA Rating'=(
# ggplot(mapping=aes(fill=mpaa))
# + geom_bar(stat='count', position='fill')
# + scale_y_continuous(labels=scales::percent_format())
# + scale_fill_manual(values=c(
# 'R'='#E41A1C', 'PG'='#377EB8',
# 'PG-13'='#4DAF4A', 'NC-17'='#FF7F00'
# ))
# + ylab('MPAA Rating')
# )
# ),
# width_ratio=0.1
# )
# -
# ### 1.1. Changing modes in annotations
# Use `upset_mode` to change the mode of the annotation:
# + magic_args="-w 800 -h 800" language="R"
# set.seed(0)
# upset(
# movies,
# genres,
# mode='inclusive_intersection',
# annotations = list(
# # if not specified, the mode will follow the mode set in `upset()` call (here: `inclusive_intersection`)
# 'Length (inclusive intersection)'=(
# ggplot(mapping=aes(y=length))
# + geom_jitter(alpha=0.2, na.rm=TRUE)
# ),
# 'Length (exclusive intersection)'=(
# ggplot(mapping=aes(y=length))
# + geom_jitter(alpha=0.2, na.rm=TRUE)
# + upset_mode('exclusive_intersection')
# ),
# 'Length (inclusive union)'=(
# ggplot(mapping=aes(y=length))
# + geom_jitter(alpha=0.2, na.rm=TRUE)
# + upset_mode('inclusive_union')
# )
# ),
# min_size=10,
# width_ratio=0.1
# )
# -
# ## 2. Running statistical tests
# %R upset_test(movies, genres)
# `Kruskal-Wallis rank sum test` is not always the best choice.
#
# You can either change the test for:
#
# - all the variables (`test=your.test`), or
# - specific variables (using `tests=list(variable=some.test)` argument)
# The tests are called with `(formula=variable ~ intersection, data)` signature, such as accepted by `kruskal.test`. The result is expected to be a list with following members:
#
# - `p.value`
# - `statistic`
# - `method`
#
# It is easy to adapt tests which do not obey this signature/output convention; for example the Chi-squared test and anova can be wrapped with two-line functions as follows:
# + language="R"
# chisq_from_formula = function(formula, data) {
# chisq.test(
# ftable(formula, data)
# )
# }
#
# anova_single = function(formula, data) {
# result = summary(aov(formula, data))
# list(
# p.value=result[[1]][['Pr(>F)']][[1]],
# method='Analysis of variance Pr(>F)',
# statistic=result[[1]][['F value']][[1]]
# )
# }
#
# custom_tests = list(
# mpaa=chisq_from_formula,
# budget=anova_single
# )
# -
# %R head(upset_test(movies, genres, tests=custom_tests))
# Many tests will require at least two observations in each group. You can skip intersections with less than two members with `min_size=2`.
# + language="R"
# bartlett_results = suppressWarnings(upset_test(movies, genres, test=bartlett.test, min_size=2))
# tail(bartlett_results)
# -
# ### 2.1 Ignore specific variables
# You may want to exclude variables which are:
#
# - highly correlated and therefore interfering with the FDR calculation, or
# - simply irrelevant
#
# In the movies example, the title variable is not a reasonable thing to compare. We can ignore it using:
# + language="R"
# # note: title no longer present
# rownames(upset_test(movies, genres, ignore=c('title')))
# -
# ## 3. Adjusting "Intersection size"
# ### 3.1 Counts
# The counts over the bars can be disabled:
# + magic_args="-w 800 -h 300" language="R"
#
# upset(
# movies,
# genres,
# base_annotations=list(
# 'Intersection size'=intersection_size(counts=FALSE)
# ),
# min_size=10,
# width_ratio=0.1
# )
# -
# The colors can be changed, and additional annotations added:
# + magic_args="-w 800 -h 300" language="R"
#
# upset(
# movies,
# genres,
# base_annotations=list(
# 'Intersection size'=intersection_size(
# text_colors=c(
# on_background='brown', on_bar='yellow'
# )
# )
# + annotate(
# geom='text', x=Inf, y=Inf,
# label=paste('Total:', nrow(movies)),
# vjust=1, hjust=1
# )
# + ylab('Intersection size')
# ),
# min_size=10,
# width_ratio=0.1
# )
# -
# Any parameter supported by `geom_text` can be passed in `text` list:
# + magic_args="-w 800 -h 300" language="R"
#
# upset(
# movies,
# genres,
# base_annotations=list(
# 'Intersection size'=intersection_size(
# text=list(
# vjust=-0.1,
# hjust=-0.1,
# angle=45
# )
# )
# ),
# min_size=10,
# width_ratio=0.1
# )
# -
# ### 3.2 Fill the bars
# + magic_args="-w 800 -h 300" language="R"
#
# upset(
# movies,
# genres,
# base_annotations=list(
# 'Intersection size'=intersection_size(
# counts=FALSE,
# mapping=aes(fill=mpaa)
# )
# ),
# width_ratio=0.1
# )
# + magic_args="-w 800 -h 300" language="R"
# upset(
# movies,
# genres,
# base_annotations=list(
# 'Intersection size'=intersection_size(
# counts=FALSE,
# mapping=aes(fill=mpaa)
# ) + scale_fill_manual(values=c(
# 'R'='#E41A1C', 'PG'='#377EB8',
# 'PG-13'='#4DAF4A', 'NC-17'='#FF7F00'
# ))
# ),
# width_ratio=0.1
# )
# + magic_args="-w 800 -h 300" language="R"
#
# upset(
# movies,
# genres,
# base_annotations=list(
# 'Intersection size'=intersection_size(
# counts=FALSE,
# mapping=aes(fill='bars_color')
# ) + scale_fill_manual(values=c('bars_color'='blue'), guide='none')
# ),
# width_ratio=0.1
# )
# -
# ### 3.3 Adjusting the height ratio
# Setting `height_ratio=1` will cause the intersection matrix and the intersection size to have an equal height:
# + magic_args="-w 800 -h 300" language="R"
#
# upset(
# movies,
# genres,
# height_ratio=1,
# width_ratio=0.1
# )
# -
# ### 3.5 Hiding intersection size
# You can always disable the intersection size altogether:
# + magic_args="-w 800 -h 160" language="R"
# upset(
# movies,
# genres,
# base_annotations=list(),
# min_size=10,
# width_ratio=0.1
# )
# -
# ### 3.6 Showing intersection size/union size ratio
# It can be useful to visualise which intersections are larger than expected by chance (assuming equal probability of belonging to multiple sets); this can be achieved using the intersection size/union size ratio.
# + magic_args="-w 800 -h 600" language="R"
# upset(
# movies, genres, name='genre', width_ratio=0.1, min_size=10,
# base_annotations=list(
# 'Intersection size'=intersection_size(),
# 'Intersection ratio'=intersection_ratio()
# )
# )
# -
# The plot above tells us that the analysed documentary movies are almost always (in over 60% of cases) documentaries (and nothing more!), while comedies more often include elements of other genres (e.g. drama, romance) rather than being comedies alone (like stand-up shows).
# ### 3.7 Showing percentages
# `text_mapping` can be used to manipulate the aesthetics of the labels. Using the `intersection_size` and `union_size` one can calculate percentage of items in the intersection (relative to the potential size of the intersection). A `upset_text_percentage(digits=0, sep='')` shorthand is provided for convenience:
# + magic_args="-w 800 -h 600" language="R"
# upset(
# movies, genres, name='genre', width_ratio=0.1, min_size=10,
# base_annotations=list(
# # with manual aes specification:
# 'Intersection size'=intersection_size(text_mapping=aes(label=paste0(round(
# !!get_size_mode('exclusive_intersection')/!!get_size_mode('inclusive_union') * 100
# ), '%'))),
# # using shorthand:
# 'Intersection ratio'=intersection_ratio(text_mapping=aes(label=!!upset_text_percentage()))
# )
# )
# -
# Also see [10. Display percentages](#10-display-percentages).
# #### 3.7.1 Showing percentages and numbers together
# + magic_args="-w 800 -h 300" language="R"
# upset(
# movies, genres, name='genre', width_ratio=0.1, min_size=10,
# base_annotations=list(
# 'Intersection size'=intersection_size(
# text_mapping=aes(label=paste0(
# !!upset_text_percentage(),
# '\n',
# '(',
# !!get_size_mode('exclusive_intersection'),
# ')'
# )),
# bar_number_threshold=1,
# text=list(vjust=1.1)
# )
# )
# )
# -
# #### 3.8 Custom positioning on bars/background
# If adjusting `bar_number_threshold` is not sufficient, you can specify custom rules for placement of text on bars/background:
# + magic_args="-w 800 -h 300" language="R"
# size = get_size_mode('exclusive_intersection')
# upset(
# movies, genres, name='genre', width_ratio=0.1, min_size=10,
# base_annotations=list(
# 'Intersection size'=intersection_size(
# text_mapping=aes(
# label=paste0(
# !!upset_text_percentage(),
# '\n(', !!size, ')'
# ),
# colour=ifelse(!!size > 50, 'on_bar', 'on_background'),
# y=ifelse(!!size > 50, !!size - 100, !!size)
# )
# )
# )
# )
# -
# ### 3.8 Further adjustments using ggplot2 functions
# + magic_args=" -w 800 -h 300" language="R"
# upset(
# movies, genres, width_ratio=0.1,
# base_annotations = list(
# 'Intersection size'=(
# intersection_size()
# + ylim(c(0, 700))
# + theme(plot.background=element_rect(fill='#E5D3B3'))
# + ylab('# observations in intersection')
# )
# ),
# min_size=10
# )
# -
# ## 4. Adjusting "set size"
# When using thresholding or selection criteria (such as `min_size` or `n_intersections`), the resulting change in the number of elements in each set is not reflected in the set sizes plot by default. You can change this by providing `filter_intersections=TRUE` to `upset_set_size`.
# + magic_args="-w 800 -h 250" language="R"
# upset(
# movies, genres,
# min_size=200,
# set_sizes=upset_set_size()
# ) | upset(
# movies, genres,
# min_size=200,
# set_sizes=upset_set_size(filter_intersections=TRUE)
# )
# -
# ### 4.1 Rotate labels
# To rotate the labels modify corresponding theme:
# + magic_args="-w 400 -h 300" language="R"
# upset(
# movies, genres,
# min_size=100,
# width_ratio=0.15,
# set_sizes=(
# upset_set_size()
# + theme(axis.text.x=element_text(angle=90))
# )
# )
# -
# To display the ticks:
# + magic_args="-w 400 -h 300" language="R"
# upset(
# movies, genres, width_ratio=0.3, min_size=100,
# set_sizes=(
# upset_set_size()
# + theme(axis.ticks.x=element_line())
# )
# )
# -
# ### 4.2 Modify geoms and other layers
# Arguments of the `geom_bar` can be adjusted in `upset_set_size`; it can use a different geom, or be replaced with a custom list of layers altogether:
# + magic_args="-w 800 -h 300" language="R"
#
# (
# upset(
# movies, genres, width_ratio=0.5, max_size=100, min_size=15, wrap=TRUE,
# set_sizes=upset_set_size(
# geom=geom_bar(width=0.4)
# )
# )
# +
# upset(
# movies, genres, width_ratio=0.5, max_size=100, min_size=15, wrap=TRUE,
# set_sizes=upset_set_size(
# geom=geom_point(
# stat='count',
# color='blue'
# )
# )
# )
# +
# upset(
# movies, genres, width_ratio=0.5, max_size=100, min_size=15, wrap=TRUE,
# set_sizes=(
# upset_set_size(
# geom=geom_point(stat='count'),
# mapping=aes(y=..count../max(..count..)),
# )
# + ylab('Size relative to the largest')
# )
# )
# )
# -
# ### 4.3 Logarithmic scale
# In order to use a log scale we need to pass additional scale to in `layers` argument. However, as the bars are on flipped coordinates, we need a reversed log transformation. Appropriate function, `reverse_log_trans()` is provided:
# + magic_args="-w 500 -h 300" language="R"
#
# upset(
# movies, genres,
# width_ratio=0.1,
# min_size=10,
# set_sizes=(
# upset_set_size()
# + theme(axis.text.x=element_text(angle=90))
# + scale_y_continuous(trans=reverse_log_trans())
# ),
# queries=list(upset_query(set='Drama', fill='blue'))
# )
# -
# We can also modify the labels to display the logged values:
# + magic_args="-w 500 -h 300" language="R"
#
# upset(
# movies, genres,
# min_size=10,
# width_ratio=0.2,
# set_sizes=upset_set_size()
# + scale_y_continuous(
# trans=reverse_log_trans(),
# labels=log10
# )
# + ylab('log10(set size)')
# )
# -
# ### 4.4 Display counts
# To display the count add `geom_text()`:
# + magic_args="-w 500 -h 300" language="R"
#
# upset(
# movies, genres,
# min_size=10,
# width_ratio=0.3,
# encode_sets=FALSE, # for annotate() to select the set by name disable encoding
# set_sizes=(
# upset_set_size()
# + geom_text(aes(label=..count..), hjust=1.1, stat='count')
# # you can also add annotations on top of bars:
# + annotate(geom='text', label='@', x='Drama', y=850, color='white', size=3)
# + expand_limits(y=1100)
# + theme(axis.text.x=element_text(angle=90))
# )
# )
# -
# ### 4.5 Change position and add fill
# + magic_args="-w 500 -h 300" language="R"
#
# upset(
# movies, genres,
# min_size=10,
# width_ratio=0.3,
# set_sizes=(
# upset_set_size(
# geom=geom_bar(
# aes(fill=mpaa, x=group),
# width=0.8
# ),
# position='right'
# )
# ),
# # moves legends over the set sizes
# guides='over'
# )
# -
# ### 4.6 Hide the set sizes altogether
# + magic_args="-w 500 -h 300" language="R"
#
# upset(
# movies, genres,
# min_size=10,
# set_sizes=FALSE
# )
# -
# ### 4.7 Change the label
# For compatibility with older `ggplot2` versions, `upset_set_size` generates a plot with flipped coordinates, and therefore `ylab` needs to be used instead of `xlab` (and the aesthetic `x` is used in the examples above in place of `y`). This will change in a future major release of `ComplexUpset`.
# + magic_args="-w 400 -h 300" language="R"
# upset(
# movies, genres, width_ratio=0.3, min_size=100,
# set_sizes=(
# upset_set_size()
# + ylab('MY TITLE')
# )
# )
# -
# ## 5. Adjusting other aesthetics
# ### 5.1 Stripes
# Change the colors:
# + magic_args="-w 600 -h 400" language="R"
# upset(
# movies,
# genres,
# min_size=10,
# width_ratio=0.2,
# stripes=c('cornsilk1', 'deepskyblue1')
# )
# -
# You can use multiple colors:
# + magic_args="-w 600 -h 400" language="R"
# upset(
# movies,
# genres,
# min_size=10,
# width_ratio=0.2,
# stripes=c('cornsilk1', 'deepskyblue1', 'grey90')
# )
# -
# Or, set the color to white to effectively disable the stripes:
# + magic_args="-w 600 -h 400" language="R"
# upset(
# movies,
# genres,
# min_size=10,
# width_ratio=0.2,
# stripes='white'
# )
# -
# Advanced customization using `upset_stripes()`:
# + magic_args="-w 600 -h 400" language="R"
# upset(
# movies,
# genres,
# min_size=10,
# width_ratio=0.2,
# stripes=upset_stripes(
# geom=geom_segment(size=5),
# colors=c('cornsilk1', 'deepskyblue1', 'grey90')
# )
# )
# -
# Mapping stripes attributes to data using `upset_stripes()`:
# + magic_args="-w 600 -h 400" language="R"
# genre_metadata = data.frame(
# set=c('Action', 'Animation', 'Comedy', 'Drama', 'Documentary', 'Romance', 'Short'),
# shown_in_our_cinema=c('no', 'no', 'on weekends', 'yes', 'yes', 'on weekends', 'no')
# )
#
# upset(
# movies,
# genres,
# min_size=10,
# width_ratio=0.2,
# stripes=upset_stripes(
# mapping=aes(color=shown_in_our_cinema),
# colors=c(
# 'yes'='green',
# 'no'='red',
# 'on weekends'='orange'
# ),
# data=genre_metadata
# )
# )
# -
# ### 5.2 Adding title
# Adding title with `ggtitle` with add it to the intersection matrix:
# + magic_args="-w 600 -h 400" language="R"
# upset(movies, genres, min_size=10) + ggtitle('Intersection matrix title')
# -
# In order to add a title for the entire plot, you need to wrap the plot:
# + magic_args="-w 600 -h 400" language="R"
# upset(movies, genres, min_size=10, wrap=TRUE) + ggtitle('The overlap between genres')
# -
# ### 5.3 Making the plot transparent
# You need to set the plot background to transparent and adjust colors of stripes to your liking:
# + magic_args="-w 600 -h 400" language="R"
# (
# upset(
# movies, genres, name='genre', width_ratio=0.1, min_size=10,
# stripes=c(alpha('grey90', 0.45), alpha('white', 0.3))
# )
# & theme(plot.background=element_rect(fill='transparent', color=NA))
# )
# -
# Use `ggsave('upset.png', bg="transparent")` when exporting to PNG.
# ### 5.4 Adjusting the intersection matrix
# Use `intersection_matrix()` to modify the matrix parameters:
# + magic_args="-w 800 -h 400" language="R"
# upset(
# movies, genres, name='genre', min_size=10,
# encode_sets=FALSE, # for annotate() to select the set by name disable encoding
# matrix=(
# intersection_matrix(
# geom=geom_point(
# shape='square',
# size=3.5
# ),
# segment=geom_segment(
# linetype='dotted'
# ),
# outline_color=list(
# active='darkorange3',
# inactive='grey70'
# )
# )
# + scale_color_manual(
# values=c('TRUE'='orange', 'FALSE'='grey'),
# labels=c('TRUE'='yes', 'FALSE'='no'),
# breaks=c('TRUE', 'FALSE'),
# name='Is intersection member?'
# )
# + scale_y_discrete(
# position='right'
# )
# + annotate(
# geom='text',
# label='Look here →',
# x='Comedy-Drama',
# y='Drama',
# size=5,
# hjust=1
# )
# ),
# queries=list(
# upset_query(
# intersect=c('Drama', 'Comedy'),
# color='red',
# fill='red',
# only_components=c('intersections_matrix', 'Intersection size')
# )
# )
# )
# -
# ## 6. Themes
# The themes for specific components are defined in `upset_themes` list, which contains themes for:
# + language="R"
# names(upset_themes)
# -
# You can substitute this list for your own using `themes` argument. While you can specify a theme for every component, if you omit one or more components those will be taken from the element named `default`.
# ### 6.1 Substituting themes
# + magic_args="-w 800 -h 400" language="R"
# upset(movies, genres, min_size=10, themes=list(default=theme()))
# -
# You can also add themes for your custom panels/annotations:
# + magic_args="-w 800 -h 800" language="R"
#
# upset(
# movies,
# genres,
# annotations = list(
# 'Length'=list(
# aes=aes(x=intersection, y=length),
# geom=geom_boxplot(na.rm=TRUE)
# ),
# 'Rating'=list(
# aes=aes(x=intersection, y=rating),
# geom=list(
# geom_jitter(aes(color=log10(votes)), na.rm=TRUE),
# geom_violin(alpha=0.5, na.rm=TRUE)
# )
# )
# ),
# min_size=10,
# width_ratio=0.1,
# themes=modifyList(
# upset_themes,
# list(Rating=theme_void(), Length=theme())
# )
# )
# -
# ### 6.2 Adjusting the default themes
# Modify all the default themes as once with `upset_default_themes()`:
# + magic_args="-w 800 -h 400" language="R"
#
# upset(
# movies, genres, min_size=10, width_ratio=0.1,
# themes=upset_default_themes(text=element_text(color='red'))
# )
# -
# To modify only a subset of default themes use `upset_modify_themes()`:
# + magic_args="-w 800 -h 400" language="R"
#
# upset(
# movies, genres,
# base_annotations=list('Intersection size'=intersection_size(counts=FALSE)),
# min_size=100,
# width_ratio=0.1,
# themes=upset_modify_themes(
# list(
# 'intersections_matrix'=theme(text=element_text(size=20)),
# 'overall_sizes'=theme(axis.text.x=element_text(angle=90))
# )
# )
# )
# -
# ## 7. Highlighting (queries)
# Pass a list of lists generated with `upset_query()` utility to the optional `queries` argument to selectively modify aesthetics of specific intersections or sets.
#
# Use one of the arguments: `set` or `intersect` (not both) to specify what to highlight:
# - `set` will highlight the bar of the set size,
# - `intersect` will highlight an intersection on all components (by default), or on components chosen with `only_components`
# - all other parameters will be used to modify the geoms
# + magic_args="-w 800 -h 600" language="R"
#
# upset(
# movies, genres, name='genre', width_ratio=0.1, min_size=10,
# annotations = list(
# 'Length'=list(
# aes=aes(x=intersection, y=length),
# geom=geom_boxplot(na.rm=TRUE)
# )
# ),
# queries=list(
# upset_query(
# intersect=c('Drama', 'Comedy'),
# color='red',
# fill='red',
# only_components=c('intersections_matrix', 'Intersection size')
# ),
# upset_query(
# set='Drama',
# fill='blue'
# ),
# upset_query(
# intersect=c('Romance', 'Comedy'),
# fill='yellow',
# only_components=c('Length')
# )
# )
# )
# -
# ## 8. Sorting
# ### 8.1 Sorting intersections
# By degree:
# + magic_args="-w 800 -h 300" language="R"
# upset(movies, genres, width_ratio=0.1, sort_intersections_by='degree')
# -
# By ratio:
# + magic_args="-w 800 -h 400" language="R"
# upset(
# movies, genres, name='genre', width_ratio=0.1, min_size=10,
# sort_intersections_by='ratio',
# base_annotations=list(
# 'Intersection size'=intersection_size(text_mapping=aes(label=!!upset_text_percentage())),
# 'Intersection ratio'=intersection_ratio(text_mapping=aes(label=!!upset_text_percentage()))
# )
# )
# -
# The other way around:
# + magic_args="-w 800 -h 300" language="R"
# upset(movies, genres, width_ratio=0.1, sort_intersections='ascending')
# -
# Without any sorting:
# + magic_args="-w 800 -h 300" language="R"
# upset(movies, genres, width_ratio=0.1, sort_intersections=FALSE)
# -
# First by degree then by cardinality:
# + magic_args="-w 800 -h 300" language="R"
# upset(movies, genres, width_ratio=0.1, sort_intersections_by=c('degree', 'cardinality'))
# -
# User-specified order:
# + magic_args="-w 600 -h 300" language="R"
# upset(
# movies,
# genres,
# width_ratio=0.1,
# sort_intersections=FALSE,
# intersections=list(
# 'Comedy',
# 'Drama',
# c('Comedy', 'Romance'),
# c('Romance', 'Drama'),
# 'Outside of known sets',
# 'Action'
# )
# )
# -
# ### 8.2 Sorting sets
# Ascending:
# + magic_args="-w 800 -h 300" language="R"
# upset(movies, genres, width_ratio=0.1, sort_sets='ascending')
# -
# Without sorting - preserving the order as in genres:
genres
# + magic_args="-w 800 -h 300" language="R"
# upset(movies, genres, width_ratio=0.1, sort_sets=FALSE)
# -
# ## 9. Grouping
# ### 9.1 Grouping intersections
# Use `group_by='sets'` to group intersections by set. If needed, the intersections will be repeated so that they appear in each set group. Use `upset_query()` with `group` argument to color the intersection matrix accordingly.
# + magic_args="-w 800 -h 300" language="R"
#
# upset(
# movies, c("Action", "Comedy", "Drama"),
# width_ratio=0.2,
# group_by='sets',
# queries=list(
# upset_query(
# intersect=c('Drama', 'Comedy'),
# color='red',
# fill='red',
# only_components=c('intersections_matrix', 'Intersection size')
# ),
# upset_query(group='Drama', color='blue'),
# upset_query(group='Comedy', color='orange'),
# upset_query(group='Action', color='purple'),
# upset_query(set='Drama', fill='blue'),
# upset_query(set='Comedy', fill='orange'),
# upset_query(set='Action', fill='purple')
# )
# )
# -
# ## 10. Display percentages
# Use `aes_percentage()` utility preceded with `!!` syntax to easily display percentages. In the examples below only percentages for the movies with R rating are shown to avoid visual clutter.
# + language="R"
# rating_scale = scale_fill_manual(values=c(
# 'R'='#E41A1C', 'PG'='#377EB8',
# 'PG-13'='#4DAF4A', 'NC-17'='#FF7F00'
# ))
# show_hide_scale = scale_color_manual(values=c('show'='black', 'hide'='transparent'), guide='none')
# -
# ### 10.1 Within intersection
# + magic_args="-w 800 -h 500" language="R"
#
# upset(
# movies, genres, name='genre', width_ratio=0.1, min_size=100,
# annotations =list(
# 'MPAA Rating'=list(
# aes=aes(x=intersection, fill=mpaa),
# geom=list(
# geom_bar(stat='count', position='fill', na.rm=TRUE),
# geom_text(
# aes(
# label=!!aes_percentage(relative_to='intersection'),
# color=ifelse(mpaa == 'R', 'show', 'hide')
# ),
# stat='count',
# position=position_fill(vjust = .5)
# ),
# scale_y_continuous(labels=scales::percent_format()),
# show_hide_scale,
# rating_scale
# )
# )
# )
# )
# -
# ### 10.2 Relative to the group
# + magic_args="-w 800 -h 500" language="R"
#
# upset(
# movies, genres, name='genre', width_ratio=0.1, min_size=100,
# annotations =list(
# 'MPAA Rating'=list(
# aes=aes(x=intersection, fill=mpaa),
# geom=list(
# geom_bar(stat='count', position='fill', na.rm=TRUE),
# geom_text(
# aes(
# label=!!aes_percentage(relative_to='group'),
# group=mpaa,
# color=ifelse(mpaa == 'R', 'show', 'hide')
# ),
# stat='count',
# position=position_fill(vjust = .5)
# ),
# scale_y_continuous(labels=scales::percent_format()),
# show_hide_scale,
# rating_scale
# )
# )
# )
# )
#
# -
# ### 10.3 Relative to all observed values
# + magic_args="-w 800 -h 500" language="R"
#
# upset(
# movies, genres, name='genre', width_ratio=0.1, min_size=100,
# annotations =list(
# 'MPAA Rating'=list(
# aes=aes(x=intersection, fill=mpaa),
# geom=list(
# geom_bar(stat='count', position='fill', na.rm=TRUE),
# geom_text(
# aes(
# label=!!aes_percentage(relative_to='all'),
# color=ifelse(mpaa == 'R', 'show', 'hide')
# ),
# stat='count',
# position=position_fill(vjust = .5)
# ),
# scale_y_continuous(labels=scales::percent_format()),
# show_hide_scale,
# rating_scale
# )
# )
# )
# )
# -
# ## 11. Advanced usage examples
# ### 11.1 Display text on some bars only
# + magic_args="-w 800 -h 500" language="R"
#
# upset(
# movies, genres, name='genre', width_ratio=0.1, min_size=100,
# annotations =list(
# 'MPAA Rating'=list(
# aes=aes(x=intersection, fill=mpaa),
# geom=list(
# geom_bar(stat='count', position='fill', na.rm=TRUE),
# geom_text(
# aes(label=ifelse(mpaa == 'R', 'R', NA)),
# stat='count',
# position=position_fill(vjust = .5),
# na.rm=TRUE
# ),
# show_hide_scale,
# rating_scale
# )
# )
# )
# )
# -
# ### 11.2 Combine multiple plots together
# + magic_args="-w 800 -h 500" language="R"
# library(patchwork)
#
# annotations = list(
# 'MPAA Rating'=list(
# aes=aes(x=intersection, fill=mpaa),
# geom=list(
# geom_bar(stat='count', position='fill')
# )
# )
# )
# set.seed(0) # for replicable example only
#
# data_1 = movies[sample(nrow(movies), 100), ]
# data_2 = movies[sample(nrow(movies), 100), ]
#
# u1 = upset(data_1, genres, min_size=5, base_annotations=annotations)
# u2 = upset(data_2, genres, min_size=5, base_annotations=annotations)
#
# (u1 | u2) + plot_layout(guides='collect')
# -
# ### 11.3 Change height of the annotations
# + magic_args="-w 800 -h 350" language="R"
# upset(
# movies, genres, name='genre', width_ratio=0.1, min_size=100,
# annotations =list(
# 'MPAA Rating'=list(
# aes=aes(x=intersection, fill=mpaa),
# geom=list(
# geom_bar(stat='count', position='fill'),
# scale_y_continuous(labels=scales::percent_format())
# )
# )
# )
# ) + patchwork::plot_layout(heights=c(0.5, 1, 0.5))
# -
# ## 12. Venn diagrams
# A simple implementation of Venn diagrams is provided; it takes the same input format as `upset()` but supports at most three sets.
# + language="R"
# movies_subset = head(movies, 300)
# genres_subset = c('Comedy', 'Drama', 'Action')
#
# movies_subset$good_rating = movies_subset$rating > mean(movies_subset$rating)
# arranged = arrange_venn(movies_subset, sets=genres_subset)
# -
# ### 12.1 Highlight specific elements
# + magic_args="-w 800 -h 550" language="R"
# (
# ggplot(arranged)
# + theme_void()
# + coord_fixed()
# + geom_point(aes(x=x, y=y, color=region, shape=good_rating, fill=length), size=2.7)
# + geom_venn_circle(movies_subset, sets=genres_subset, size=1)
# + geom_venn_label_set(movies_subset, sets=genres_subset, aes(label=region), outwards_adjust=2.6)
# + geom_venn_label_region(movies_subset, sets=genres_subset, aes(label=size), position=position_nudge(y=0.15))
# + geom_curve(
# data=arranged[which.min(arranged$length), ],
# aes(xend=x+0.01, yend=y+0.01), x=1.5, y=2.5, curvature=.2,
# arrow = arrow(length = unit(0.015, "npc"))
# )
# + annotate(
# geom='text', x=1.9, y=2.6, size=6,
# label=paste(substr(arranged[which.min(arranged$length), ]$title, 0, 9), 'is the shortest')
# )
# + scale_color_venn_mix(movies, sets=genres_subset, guide='none')
# + scale_shape_manual(
# values=c(
# 'TRUE'='triangle filled',
# 'FALSE'='triangle down filled'
# ),
# labels=c(
# 'TRUE'='above average',
# 'FALSE'='below average'
# ),
# name='Rating'
# )
# + scale_fill_gradient(low='white', high='black', name='Length (minutes)')
#
# )
# -
# ### 12.2 Highlight all regions
# + magic_args="-w 800 -h 550" language="R"
# (
# ggplot(arranged)
# + theme_void()
# + coord_fixed()
# + geom_venn_region(movies_subset, sets=genres_subset, alpha=0.1)
# + geom_point(aes(x=x, y=y, color=region), size=2.5)
# + geom_venn_circle(movies_subset, sets=genres_subset, size=1.5)
# + geom_venn_label_set(movies_subset, sets=genres_subset, aes(label=region), outwards_adjust=2.6)
# + geom_venn_label_region(movies_subset, sets=genres_subset, aes(label=size), position=position_nudge(y=0.15))
# + scale_color_venn_mix(movies, sets=genres_subset, guide='none')
# + scale_fill_venn_mix(movies, sets=genres_subset, guide='none')
# )
# -
# ### 12.3 Highlight specific regions
# + magic_args="-w 800 -h 550" language="R"
#
# (
# ggplot(arranged)
# + theme_void()
# + coord_fixed()
# + geom_venn_region(movies_subset, sets=genres_subset, alpha=0.2)
# + geom_point(aes(x=x, y=y, color=region), size=1.5)
# + geom_venn_circle(movies_subset, sets=genres_subset, size=2)
# + geom_venn_label_set(movies_subset, sets=genres_subset, aes(label=region), outwards_adjust=2.6)
# + scale_color_venn_mix(movies, sets=genres_subset, guide='none')
# + scale_fill_venn_mix(movies, sets=genres_subset, guide='none', highlight=c('Comedy-Action', 'Drama'), inactive_color='white')
# )
# -
# ### 12.4 Two sets Venn
# The density of the point grid is chosen so that all the points from the set with the largest space restrictions fit into the available area. In the case of the diagram below, it's the observations that do not belong to any set that define the grid density:
# + magic_args="-w 600 -h 450" language="R"
# genres_subset = c('Action', 'Drama')
# (
# ggplot(arrange_venn(movies_subset, sets=genres_subset))
# + theme_void()
# + coord_fixed()
# + geom_point(aes(x=x, y=y, color=region), size=2)
# + geom_venn_circle(movies_subset, sets=genres_subset, size=2)
# + geom_venn_label_set(movies_subset, sets=genres_subset, aes(label=region), outwards_adjust=2.6)
# + scale_color_venn_mix(movies, sets=genres_subset, guide='none')
# )
| 42,985 |
/cheat_sheets/Facts.ipynb
|
330f7febc3e27789d12ad2bf227964a03afa3bbf
|
[] |
no_license
|
kraeml/hands-on-ansible
|
https://github.com/kraeml/hands-on-ansible
| 0 | 0 | null | 2017-04-18T08:03:17 | 2017-04-11T11:58:57 | null |
Jupyter Notebook
| false | false |
.sh
| 2,456 |
# -*- coding: utf-8 -*-
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .sh
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.15.2
# kernelspec:
# display_name: Bash
# language: bash
# name: bash
# ---
# # Facts
#
# Facts: information about hosts in your inventory that’s gathered by Ansible when it connects to a host.
#
# You can use these facts in your plays – they are represented as variables stored in a dictionary.
#
#
# Run the setup module to see facts.
#
# ansible 10.0.3.41 -m setup -u root
#
#
#
# ## Disable Fact Gathering
#
# In some cases you do *not* want Ansible to automatically gather facts when it connects to a target machine. For example, this is often true when you manage system facts outside of Ansible, or when you are preparing a host that doesn't have Python installed.
#
# You can disable fact gathering by setting ‘gather_facts: False’ at the playbook level (e.g. prepare_ansible_target.yml).
#
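# For example, a minimal play that skips fact gathering (an illustrative snippet, not from the original material; the `web` host group name is made up):
#
#     - hosts: web
#       gather_facts: False
#       tasks:
#         - name: Run a task without facts being collected first
#           ping:
#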
#
#
# ## Access Facts
#
# Gathered facts are added to the Host variable scope. You can access them through the 'hostvars' dictionary.
#
# {{ hostvars[host]['fact_name'] }}
#
#
#
# ## Fact-testing tasks
# Try running the following tasks on a host:
#
# - name: All the things
# debug: msg={{ hostvars[inventory_hostname] }}
#
# - name: However this machine is listed in the inventory
# debug: msg={{ inventory_hostname }}
#
# - name: Just the hostname
# debug: msg={{ ansible_hostname }}
#
# - name: Just the primary IPv4 address
# debug: msg={{ hostvars[inventory_hostname]['ansible_default_ipv4']['address'] }}
| 1,635 |
/Altri_algoritmi_di_ottimizzazione.ipynb
|
93033d9e0bb8c14bd92b0237a0e0b70f782eab05
|
[] |
no_license
|
zideric/colab
|
https://github.com/zideric/colab
| 0 | 0 | null | null | null | null |
Jupyter Notebook
| false | false |
.py
| 261,703 |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.15.2
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/zideric/colab/blob/main/Altri_algoritmi_di_ottimizzazione.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + [markdown] id="beSOhcL6n3s4"
# # Other optimization algorithms
#
# We use the same neural network architecture created in this [notebook](https://colab.research.google.com/drive/1XVnUq7_vaP9WQan11Di0JMneXVJyjule#scrollTo=pT-9CQx0BxoJ) in order to compare the different optimization algorithms.
#
# Import the modules
# + id="zk6DV3nvn0JQ"
import numpy as np
import matplotlib.pyplot as plt
import tensorflow
from keras.utils import to_categorical
from keras.models import Sequential
from keras.layers import Dense
from keras.callbacks import History
from keras import optimizers
from time import time
def set_seed(seed):
from os import environ
environ["PYTHONHASHSEED"] = '0'
environ["CUDA_VISIBLE_DEVICES"]='-1'
environ["TF_CUDNN_USE_AUTOTUNE"] ='0'
from numpy.random import seed as np_seed
np_seed(seed)
import random
random.seed(seed)
#from tensorflow import set_random_seed
#set_random_seed(seed)
    tensorflow.random.set_seed(seed) # to make the results reproducible
# + [markdown] id="X7Jp-y0bpOeH"
# ## Data preparation
#
# Load the Fashion MNIST dataset with Keras and process the data
# + colab={"base_uri": "https://localhost:8080/"} id="kSBhsKsYovm-" outputId="ad60b759-7e95-40d0-c10c-a9b5eb68d868"
from keras.datasets import fashion_mnist
labels = ["T-shirt/top","Trouser","Pullover","Dress","Coat","Sandal","Shirt","Sneaker","Bag","Ankle boot"]
(X_train, y_train),(X_test, y_test) = fashion_mnist.load_data()
#Flatten each image into a single row vector
X_train = X_train.reshape(X_train.shape[0],28*28)
X_test = X_test.reshape(X_test.shape[0],28*28)
#Normalize by simply dividing by 255
X_train = X_train/255
X_test = X_test/255
# Encode the target
num_classes = 10
y_train_dummy = to_categorical(y_train, num_classes)
y_test_dummy = to_categorical(y_test, num_classes)
# + [markdown] id="tqrvMcb2qYEe"
# Now each image is encoded as a vector containing the normalized pixel values laid out on a single row.
# The target is encoded as 10 dummy variables, one per class, where the variable at the position of the true class is 1 (True) and the others are 0 (False).
#
# ## Utility functions
#
# We create a few helper functions, since we will have to redefine and retrain the same network several times, changing only the optimization algorithm.
#
# This function defines the model architecture, setting a common seed for the random weight initialization, and returns the model
# + id="yjW-Q_QiqKIh"
def build_model():
set_seed(0)
model = Sequential()
model.add(Dense(512, activation='relu', input_dim=X_train.shape[1]))
model.add(Dense(256, activation='relu'))
model.add(Dense(128, activation='relu'))
model.add(Dense(num_classes, activation='softmax'))
return model
# + [markdown] id="Y0y2lU5CrNkp"
# This function plots the loss at each epoch
# + id="_jlBoH7ArMmy"
def plot_loss(optimizer, loss):
plt.figure(figsize=(14,10))
plt.title(optimizer)
    plt.xlabel("Epoch")
plt.ylabel("Log-Loss")
plt.plot(loss)
# + [markdown] id="teUKsRuQrvhe"
# This function starts the training of the network and times the run; finally it prints the accuracy and loss metrics and draws the plot using the function created in the previous step
# + id="sdoBKjOJrubV"
def train_and_time(model, optimizer,batch_size=512): # default of 512 for consistency with the exercise, but kept as a parameter for experimenting
history = History()
start_at = time()
model.fit(X_train, y_train_dummy, epochs=100, batch_size=batch_size,callbacks=[history], verbose=0)
train_time = time() - start_at
    metrics = model.evaluate(X_train, y_train_dummy, verbose=0) # use evaluate so we get both accuracy and loss
    print("Training time: %d minutes and %d seconds" % (train_time/60,train_time%60))
print("Accuracy = %.4f - Loss = %.4f" % (metrics[1],metrics[0]))
plot_loss(optimizer, history.history['loss'])
return model
# + [markdown] id="VUaqO-11tQYo"
# ## Using Momentum
#
# Momentum is a concept borrowed from physics; it lets Gradient Descent converge faster in the correct direction by accumulating the updates of the previous steps.
#
# $$
# \begin{align*}
# V_t = \gamma V_{t-1} + \eta \nabla J(W) \\
# W = W - V_t
# \end{align*}
# $$
#
# Let's start by building the model using the build_model function we defined earlier.
# + id="WczTu9uVtPEW"
model = build_model()
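# + [markdown]
# As a side note, the momentum update rule above can be sketched in a couple of lines of NumPy. This is a toy illustration of the formula, not the Keras implementation; `grad` is a hypothetical function returning the gradient of the loss.

# +
def momentum_step(w, v, grad, lr=0.01, gamma=0.9):
    # accumulate the previous updates in the velocity v, then move against it
    v = gamma * v + lr * grad(w)   # grad: hypothetical gradient function (assumption)
    return w - v, v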
# + [markdown] id="zsPLczmdt_81"
# To set the momentum we create a new Stochastic Gradient Descent optimizer and pass the value of the constant gamma to its momentum parameter; it ranges from 0 to 1, with a recommended value of 0.9
# + id="9Zn21YaGtsOe"
sgd = optimizers.SGD(momentum=0.9)
model.compile(loss='categorical_crossentropy', optimizer=sgd, metrics=['accuracy'])
# + [markdown] id="CUyodphuuhtp"
# Start the training phase using the train_and_time function defined earlier.
# + colab={"base_uri": "https://localhost:8080/", "height": 655} id="rjegU9OWug4q" outputId="e4d4da7e-e940-4040-c33f-890d329e160e"
model = train_and_time(model, "Momentum")
# + [markdown] id="iH7ntx6IvIXc"
# ## Nesterov Momentum
#
# Nesterov Momentum is an extension of momentum that tries to anticipate the next gradient descent step by estimating it with the update from the previous epoch.
#
# $$
# \begin{align*}
# V_t = \gamma V_{t-1} + \eta \nabla J(W-\gamma V_{t-1}) \\
# W = W - V_t
# \end{align*}
# $$
# To use Nesterov Momentum we only need to pass the parameter nesterov=True when defining the optimizer.
# + colab={"base_uri": "https://localhost:8080/", "height": 655} id="D_y05iDpusEh" outputId="92b224a3-a9b7-4c5f-fd1c-7c507655b009"
model = build_model() # rebuild the model
sgd = optimizers.SGD(momentum=0.9, nesterov=True) # enable Nesterov momentum
model.compile(loss='categorical_crossentropy', optimizer=sgd, metrics=['accuracy'])
model = train_and_time(model, "Nesterov Momentum")
# + [markdown] id="SijnHn_FyOEt"
# ## AdaGrad
#
# AdaGrad is an optimization algorithm that provides a dynamic learning rate, one that changes at every iteration and adapts to the features, taking larger values for the rarer features.
#
# $$
# \begin{align*}
# g_t = \nabla J(\theta) \\
# \theta = \theta - \frac{\eta}{\sqrt{G_t + \epsilon}}\cdot g_t
# \end{align*}
# $$
# Where $G_t$ is a diagonal matrix containing the sum of the squares of each partial derivative from the previous epochs, while $\epsilon$ is a very small constant ($\approx 10^{-8}$) used to avoid division by zero.
# AdaGrad also removes the need to select the learning rate manually; **in fact a standard value of 0.01 works well in most cases.**
# + colab={"base_uri": "https://localhost:8080/", "height": 655} id="SxRfbw72yUWf" outputId="4cde47d3-144d-4c6e-8df8-4f99a482a90a"
model = build_model()
adagrad = optimizers.Adagrad()
model.compile(loss='categorical_crossentropy', optimizer=adagrad, metrics=['accuracy'])
model = train_and_time(model,'Adagrad optimizer')
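# + [markdown]
# For intuition, the AdaGrad accumulation can be sketched in NumPy as follows (a toy illustration of the formula, not the Keras implementation; `grad` is again a hypothetical gradient function).

# +
def adagrad_step(w, g_acc, grad, lr=0.01, eps=1e-8):
    g = grad(w)                            # grad: hypothetical gradient function (assumption)
    g_acc = g_acc + g ** 2                 # running sum of squared gradients, per parameter
    w = w - lr * g / np.sqrt(g_acc + eps)  # rarely-updated parameters keep a larger effective step
    return w, g_acc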
# + [markdown] id="asZd5tEpzHzh"
# The main drawback of AdaGrad is that it is too aggressive, shrinking the learning rate excessively as the epochs go by.
#
#
# ## RMSprop
#
# The RMSprop optimization algorithm is a solution to AdaGrad's problem proposed by Geoff Hinton: rather than accumulating the gradients of all previous epochs, it keeps only those from a recent time window by using an exponential moving average.
#
# $$
# \begin{align*}
# g_t = \nabla J(W) \\
# E[g^2] = \gamma E[g^2]_{t-1} + (1-\gamma)g^2_t \\
# W = W - \frac{\eta}{\sqrt{E[g^2] + \epsilon}}\cdot g_t
# \end{align*}
# $$
# Here too the recommended value for $\gamma$ is 0.9, while for the learning rate it is 0.001.
# + colab={"base_uri": "https://localhost:8080/", "height": 655} id="eWSJKRK8zI-E" outputId="b452069a-66c4-45ad-c5e4-9a96b4becde8"
model = build_model()
rmsprop = optimizers.RMSprop()
model.compile(loss='categorical_crossentropy', optimizer=rmsprop, metrics=['accuracy'])
model = train_and_time(model, "RMSprop optimizer")
# + [markdown] id="LW9WrwGD1-Hv"
# ## Adadelta
#
# Adadelta is an extension of AdaGrad that uses an approach very similar to RMSprop to address the problem of the shrinking learning rate.
#
# For the mathematical details of Adadelta, have a look at the original paper on arXiv: ADADELTA: An Adaptive Learning Rate Method
# + colab={"base_uri": "https://localhost:8080/", "height": 655} id="ar5irVs22DJD" outputId="1d673b0a-75de-494e-93c9-f19715f9995c"
model = build_model()
adadelta = optimizers.Adadelta()
model.compile(loss='categorical_crossentropy', optimizer=adadelta, metrics=['accuracy'])
model = train_and_time(model, "Adadelta optimizer")
# + [markdown] id="whsE7G1i2dS8"
# ## Adam
#
# Adam is an optimization algorithm based on RMSprop that also makes use of Momentum
#
# $$
# \begin{align*}
# g_t = \nabla J(W) \\
# m_t = \beta_1 m_{t-1} + (1-\beta_1)g_t \\
# v_t = \beta_2 v_{t-1} + (1-\beta_2)g^2_t \\
# \hat{m}_t = \frac{m_t}{1-\beta^t_1} \\
# \hat{v}_t = \frac{v_t}{1-\beta^t_2} \\
# W = W - \frac{\eta}{\sqrt{\hat{v}_t + \epsilon}}\cdot \hat{m}_t
# \end{align*}
# $$
# The recommended value for $\beta_1$ is 0.9 and for $\beta_2$ 0.999. In practice Adam has been shown to be the best-performing optimizer in most cases; for more details have a look at the original arXiv paper [Adam: A Method for Stochastic Optimization.](https://arxiv.org/abs/1412.6980)
#
#
# + colab={"base_uri": "https://localhost:8080/", "height": 655} id="h8CNQ0pp2m5K" outputId="2dcd99ff-58f5-49ab-c387-029b06f8ef24"
model = build_model()
adam = optimizers.Adam()
model.compile(loss='categorical_crossentropy',optimizer=adam, metrics=['accuracy'])
model = train_and_time(model, "Adam optimizer")
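# + [markdown]
# The Adam update above can also be sketched in a few lines of NumPy (a toy illustration of the formulas, not the Keras implementation; `grad` is a hypothetical gradient function and `t` is the update counter starting from 1).

# +
def adam_step(w, m, v, t, grad, lr=0.001, beta1=0.9, beta2=0.999, eps=1e-8):
    g = grad(w)                               # grad: hypothetical gradient function (assumption)
    m = beta1 * m + (1 - beta1) * g           # first moment estimate
    v = beta2 * v + (1 - beta2) * g ** 2      # second moment estimate
    m_hat = m / (1 - beta1 ** t)              # bias correction
    v_hat = v / (1 - beta2 ** t)
    w = w - lr * m_hat / np.sqrt(v_hat + eps)
    return w, m, v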
# + [markdown] id="SCfufHZx3rnD"
# ## Adamax
#
# The Adamax optimization algorithm is an extension of Adam that makes it more robust in the case of sparse arrays.
#
# $$
# \begin{align*}
# \hat{u}_t = \max{(\beta_2 v_{t-1},|g_t|)} \\
# \theta = \theta - \frac{\eta}{\hat{u}_t}\cdot \hat{m}_t
# \end{align*}
# $$
# For more details, again refer to the paper [Adam: A Method for Stochastic Optimization.](https://arxiv.org/abs/1412.6980)
# + colab={"base_uri": "https://localhost:8080/", "height": 655} id="nr5sN-hJ33bD" outputId="8dcd50b8-7f5e-4c99-b47a-1a6e142cc0a0"
model = build_model()
adamax = optimizers.Adamax()
model.compile(loss='categorical_crossentropy',optimizer=adamax, metrics=['accuracy'])
model = train_and_time(model, "Adamax optimizer")
# + [markdown] id="7HwgbJoV4LoV"
# ## Nadam
#
# The Nadam optimization algorithm is an extension of Adam that makes use of Nesterov Momentum.
#
# $$
# \begin{align*}
# &\theta = \theta - \frac{\eta}{\sqrt{\hat{v}_t + \epsilon}}\cdot (\beta_1 \hat{m}_t + \frac{(1-\beta_1)g_t}{1-\beta_1^t})
# \end{align*}
# $$
#
# For the mathematical details, take a look at this paper: [Incorporating Nesterov Momentum into Adam](http://cs229.stanford.edu/proj2015/054_report.pdf)
# + colab={"base_uri": "https://localhost:8080/", "height": 655} id="V8PVQKsq4Um8" outputId="a27d96f0-3d44-4247-8520-5432d4592c15"
model = build_model()
nadam = optimizers.Nadam()
model.compile(loss='categorical_crossentropy',optimizer=nadam, metrics=['accuracy'])
model = train_and_time(model, "Nadam optimizer")
# + [markdown] id="fC78-8FX4pB7"
# ## Have we created a super neural network?
#
# On the [Fashion MNIST](https://github.com/zalandoresearch/fashion-mnist) repository we can see benchmarks of models built by researchers.
# Apparently the accuracy of our very simple model, obtained with several optimization algorithms, is higher than that of far more complex models created by expert researchers. Is that really so? NO. A model must be evaluated on observations it has never seen, i.e. observations that were not shown to it during the training phase. Let's do that, using the test set to evaluate the last trained model.
#
#
# + colab={"base_uri": "https://localhost:8080/"} id="quEwkOok434X" outputId="1e33a679-a403-4f19-d267-00f476cc38d9"
test_metrics = model.evaluate(X_test, y_test_dummy)
print("Log loss on the test set: %.4f - Accuracy on the test set: %.4f" % (test_metrics[0], test_metrics[1]))
| 13,149 |
/covid-19.ipynb
|
287f5717ec2e21cfc6dee2b4b6f754a26953f178
|
[] |
no_license
|
pjindal91/kaggle-covid-19
|
https://github.com/pjindal91/kaggle-covid-19
| 0 | 0 | null | null | null | null |
Jupyter Notebook
| false | false |
.py
| 5,106,223 |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.15.2
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Kaggle COVID-19 Week 1 Competition
# +
import datetime
import warnings
import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
import pickle
# %matplotlib inline
sns.set(style="darkgrid")
pd.set_option('display.float_format', lambda x: '%.2f' % x)
warnings.filterwarnings("ignore")
# -
test = pd.read_csv('./Datasets/test.csv')
train = pd.read_csv('./Datasets/train.csv')
train.head()
train.describe().T
train.isnull().sum()
# Replace null values of Province/State with Country/Region
train['Province/State'].fillna(train['Country/Region'], inplace=True)
# ## Data Analysis/EDA
# Analyze the trend in Confirmed Cases and Fatalities over a period of time
train_total = train.groupby(['Date']).agg({
'ConfirmedCases': ['sum'],
'Fatalities': ['sum']
})
train_total.reset_index(inplace=True)
train_total.head()
train_total.columns = ['Date', 'total_confirmed', 'total_fatalities']
plt.figure(figsize=(15,3))
sns.barplot(x=train_total['Date'],
y=train_total['total_confirmed'],
palette="rocket")
plt.xticks(rotation=90)
plt.show()
plt.figure(figsize=(15,3))
sns.barplot(x=train_total['Date'],
y=train_total['total_fatalities'],
palette="rocket")
plt.xticks(rotation=90)
plt.show()
# +
import plotly.express as px
country_data = train.groupby('Country/Region')['ConfirmedCases', 'Fatalities']\
.max()\
.reset_index()
fig = px.choropleth(country_data, locations="Country/Region",
locationmode='country names', color=np.log(country_data["ConfirmedCases"]),
hover_name="Country/Region", hover_data=['ConfirmedCases'],
color_continuous_scale="darkmint",
title='Countries with Confirmed Cases')
fig.update(layout_coloraxis_showscale=False)
fig.show()
fig = px.choropleth(country_data, locations="Country/Region",
locationmode='country names', color=np.log(country_data["Fatalities"]),
hover_name="Country/Region", hover_data=['Fatalities'],
color_continuous_scale="reds",
title='Countries with Fatalities')
fig.update(layout_coloraxis_showscale=False)
fig.show()
# +
usa_data = train.loc[(train["Country/Region"]=="US") & (train["ConfirmedCases"] > 50)]\
.groupby(['Date', 'Lat', 'Long', "Province/State"])['ConfirmedCases', 'Fatalities']\
.max()\
.reset_index()
fig = px.scatter_geo(usa_data,
lon="Long",
lat="Lat",
color="ConfirmedCases",
size='ConfirmedCases',
animation_frame="Date",
scope="usa",
hover_name="Province/State",
title='Spread over time')
fig.update(layout_coloraxis_showscale=False)
fig.show()
# -
countries_over_time = train\
.groupby(['Date', 'Country/Region'])['ConfirmedCases', 'Fatalities']\
.max()\
.reset_index()
fig = px.scatter_geo(countries_over_time,
locations="Country/Region",
locationmode='country names',
color="ConfirmedCases",
size='ConfirmedCases',
hover_name="Country/Region",
projection="natural earth",
animation_frame="Date",
title='Spread over time')
fig.update(layout_coloraxis_showscale=False)
fig.show()
# ## Feature Engineering
# ### Distance from the source of the virus - Hubei
# +
import math
hubei_lat = train.loc[(train['Country/Region'] == 'China') & (train['Province/State'] == 'Hubei')].iloc[0]['Lat']
hubei_long = train.loc[(train['Country/Region'] == 'China') & (train['Province/State'] == 'Hubei')].iloc[0]['Long']
def dist_from_hubei(row):
return math.sqrt(math.pow(row['Lat'] - hubei_lat, 2) + math.pow(row['Long'] - hubei_long, 2))
train['dist_from_hubei'] = train.apply(dist_from_hubei, axis=1)
train.head()
# +
train_from_hubei = train.loc[(train['Province/State'] != 'Hubei')]\
.groupby(['dist_from_hubei'])\
.agg({
'Fatalities': ['sum'],
'ConfirmedCases': ['sum']
})
train_from_hubei.reset_index(inplace=True)
train_from_hubei.columns = ['dist_from_hubei', 'total_fatalities', 'total_confirmed']
# -
plt.figure(figsize=(15,3))
sns.lineplot(x='dist_from_hubei',
y='value',
hue='variable',
data=pd.melt(
train_from_hubei[['total_fatalities', 'total_confirmed', 'dist_from_hubei']],
['dist_from_hubei'])
)
# ### Lag of Confirmed Cases and Fatalities
# +
lag_list = [1, 2, 3, 7]
for lag in lag_list:
train[f'confirmed_shifted_{lag}'] = train.sort_values(
['Country/Region',
'Province/State',
'Date']
)['ConfirmedCases']\
.shift(lag)
train[f'confirmed_shifted_{lag}'].fillna(0, inplace=True)
train[f'fatalities_shifted_{lag}'] = train.sort_values(
['Country/Region',
'Province/State',
'Date']
)['Fatalities']\
.shift(lag)
train[f'fatalities_shifted_{lag}'].fillna(0, inplace=True)
# -
train.head()
# ### Encode Categorical Features
from sklearn.preprocessing import LabelEncoder
state_encoder = LabelEncoder().fit(train['Province/State'])
train['Province/State'] = state_encoder.transform(train['Province/State'])
country_encoder = LabelEncoder().fit(train['Country/Region'])
train['Country/Region'] = country_encoder.transform(train['Country/Region'])
date_encoder = LabelEncoder().fit(pd.concat([train['Date'],test['Date']]))
train['Date'] = date_encoder.transform(train['Date'])
train_data = train.loc[train['Date'] < 50]
X_train = train_data.drop(['ConfirmedCases', 'Fatalities', 'Id'], axis=1)
y1_train = train_data['ConfirmedCases']
y2_train = train_data['Fatalities']
validation_data = train.loc[train['Date'] >= 50]
X_validation = validation_data.drop(['ConfirmedCases', 'Fatalities', 'Id'], axis=1)
y1_validation = validation_data['ConfirmedCases']
y2_validation = validation_data['Fatalities']
# ## Modeling
# ### Find best params for the model
# +
# from xgboost import XGBRegressor
# def grid_search(n_estimator, learning_rate, max_depth):
# xgb_confirmed_cases_model = XGBRegressor(n_estimator=n_estimator,
# learning_rate=learning_rate,
# max_depth=max_depth,
# objective='reg:squaredlogerror')
# xgb_confirmed_cases_model.fit(X_train,
# y1_train,
# eval_set=[(X_validation, y1_validation)],
# early_stopping_rounds=50,
# verbose=False)
# result = xgb_confirmed_cases_model.evals_result()['validation_0']['rmsle']
# print(f"For estimators: {n_estimator}, learning_rate: {learning_rate}, max_depth: {max_depth}",
# np.min(result))
# for n_estimator in [100, 300, 500, 1000]:
# for learning_rate in [0.05, 0.1, 0.4, 0.7]:
# for max_depth in [6, 8, 10]:
# grid_search(n_estimator, learning_rate, max_depth)
# +
# from xgboost import XGBRegressor
# def grid_search(n_estimator, learning_rate, max_depth):
# xgb_fatalities_model = XGBRegressor(n_estimator=n_estimator,
# learning_rate=learning_rate,
# max_depth=max_depth,
# objective='reg:squaredlogerror')
# xgb_fatalities_model.fit(X_train,
# y2_train,
# eval_set=[(X_validation, y2_validation)],
# early_stopping_rounds=50,
# verbose=False)
# result = xgb_fatalities_model.evals_result()['validation_0']['rmsle']
# print(f"For estimators: {n_estimator}, learning_rate: {learning_rate}, max_depth: {max_depth}",
# np.min(result))
# for n_estimator in [100, 300, 500, 1000]:
# for learning_rate in [0.05, 0.1, 0.4, 0.7]:
# for max_depth in [6, 8, 10]:
# grid_search(n_estimator, learning_rate, max_depth)
# -
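# The same sweep can be expressed with scikit-learn's GridSearchCV. The cell below is an illustrative sketch only: the (reduced) grid values are arbitrary choices, and fitting it on the full training data is as slow as the manual loops above.

# +
from sklearn.model_selection import GridSearchCV
from xgboost import XGBRegressor

param_grid = {
    'n_estimators': [100, 300, 500],      # arbitrary reduced grid for illustration
    'learning_rate': [0.1, 0.4, 0.7],
    'max_depth': [6, 8, 10],
}
search = GridSearchCV(
    XGBRegressor(objective='reg:squaredlogerror'),
    param_grid,
    scoring='neg_mean_squared_error',
    cv=3,
    verbose=1,
)
search.fit(X_train.values, y1_train.values)
print(search.best_params_)
# -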
# Training the final chosen model; the same search could also have been done with GridSearchCV, as sketched above
# +
from xgboost import XGBRegressor
xgb_confirmed_cases_model = XGBRegressor(n_estimators=100,
learning_rate=0.7,
max_depth=8,
objective='reg:squaredlogerror')
xgb_confirmed_cases_model.fit(X_train.values,
y1_train.values,
eval_set=[(X_validation.values, y1_validation.values)],
early_stopping_rounds=50,
verbose=False)
xgb_fatalities_model = XGBRegressor(n_estimators=100,
learning_rate=0.9,
max_depth=8,
objective='reg:squaredlogerror')
xgb_fatalities_model.fit(X_train.values,
y2_train.values,
eval_set=[(X_validation.values, y2_validation.values)],
early_stopping_rounds=50,
verbose=False)
# +
plt.figure(figsize=(15,3))
sns.barplot(x=X_train.columns, y=xgb_confirmed_cases_model.feature_importances_, palette="rocket").set_title("Feature Importance")
plt.xticks(rotation=90)
plt.show()
plt.figure(figsize=(15,3))
sns.barplot(x=X_train.columns, y=xgb_fatalities_model.feature_importances_, palette="rocket").set_title("Feature Importance")
plt.xticks(rotation=90)
plt.show()
# -
# ### Prepare test data
test['Province/State'].fillna(test['Country/Region'], inplace=True)
test['dist_from_hubei'] = test.apply(dist_from_hubei, axis=1)
test['Province/State'] = state_encoder.transform(test['Province/State'])
test['Country/Region'] = country_encoder.transform(test['Country/Region'])
test['Date'] = date_encoder.transform(test['Date'])
all_data = pd.concat([train, test]).sort_values(['Country/Region',
'Province/State',
'Date'])
# +
lag_list = [1, 2, 3, 7]
def add_lags(row, masked_data):
for lag in lag_list:
row[f'confirmed_shifted_{lag}'] = masked_data['ConfirmedCases'].shift(lag).iloc[-1]
row[f'fatalities_shifted_{lag}'] = masked_data['Fatalities'].shift(lag).iloc[-1]
def update_lags(all_data, row):
for lag in lag_list:
all_data.loc[all_data['ForecastId']==row['ForecastId'], f'confirmed_shifted_{lag}'] = \
row[f'confirmed_shifted_{lag}']
all_data.loc[all_data['ForecastId']==row['ForecastId'], f'fatalities_shifted_{lag}'] = \
row[f'fatalities_shifted_{lag}']
confirmed_results = {}
fatalities_results = {}
for i, row in test.iterrows():
forecast_id = row['ForecastId']
mask = (all_data['Lat']==row['Lat']) & (all_data['Long']==row['Long']) & \
(all_data['Date']<=row['Date']) & (all_data['Date']>=row['Date']-7) & \
(all_data['ForecastId']!=row['ForecastId'])
add_lags(row, all_data.loc[mask])
row.drop('ForecastId', inplace=True)
confirmed_predict = xgb_confirmed_cases_model.predict(row)[0]
confirmed_predict = int(confirmed_predict) if confirmed_predict > 0.0 else 0
fatalities_predict = xgb_fatalities_model.predict(row)[0]
fatalities_predict = int(fatalities_predict) if fatalities_predict > 0.0 else 0
all_data.loc[all_data['ForecastId']==forecast_id, 'ConfirmedCases'] = confirmed_predict
all_data.loc[all_data['ForecastId']==forecast_id, 'Fatalities'] = fatalities_predict
confirmed_results[forecast_id] = confirmed_predict
fatalities_results[forecast_id] = fatalities_predict
row['ForecastId'] = forecast_id
update_lags(all_data, row)
# -
test_results = pd.DataFrame()
test_results['ForecastId'] = list(confirmed_results.keys())
test_results['ConfirmedCases'] = list(confirmed_results.values())
test_results['Fatalities'] = list(fatalities_results.values())
test_results.to_csv('submission.csv', index=False)
| 13,049 |
/Logestic regression.ipynb
|
d043e7a0249d64f464d23712d3ffc1c655d02c02
|
[] |
no_license
|
azizkastalli/gomycode-worshops
|
https://github.com/azizkastalli/gomycode-worshops
| 0 | 0 | null | null | null | null |
Jupyter Notebook
| false | false |
.py
| 126,350 |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.15.2
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Workshop Logistic Regression with Gradient Descent
# In this workshop we will build a logistic model from scratch and train it by optimising its parameters with the Gradient Descent algorithm
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from IPython.display import clear_output
from sklearn.model_selection import train_test_split
# ## Part 1 : Build a logistic Model
# $$ y = a + bx $$<br>
# $$ z = Sigmoid(y) $$<br>
# $$ z = \frac{1}{1+e^{-(a+bx)}}$$
# ### Step 1 : generate two random correlated vectors
# +
#generate a random feature
np.random.seed(30)
feature = np.random.uniform(1,10,100)
#generate a target
np.random.seed(30)
target = np.random.uniform(1,10,100)
target = np.where(target>5,1,0)
# -
#add randomness to the target
random_shift = np.unique(np.random.randint(0,20,50))
for i in random_shift:
if target[i] == 1 :
target[i] = 0
else :
target[i] = 1
#check correlation
np.corrcoef(feature,target)
X_train,X_test,y_train,y_test=train_test_split(feature,target,test_size = 0.3 , random_state=0)
plt.figure(figsize=[10,5])
plt.scatter(X_train,y_train)
plt.show()
# ### Step 2 : prediction function $ z = sigmoid(a + bx) $
# +
# choose random parameters :
b = 0.1
a = 3
#linear function :
def liniar(a,b,x):
y = a + b*x
return y
#sigmoid function :
def sigmoid(a,b,x):
y=liniar(a,b,x)
z = 1/(1+np.exp(-y))
return z
# -
y = liniar(a,b,X_train)
x = np.linspace(-5,5,100)
z = (1/(1+np.exp(-x)))
# +
fig,axs = plt.subplots(1,2,figsize=(20,5))
axs[0].scatter(X_train,y_train)
axs[0].plot(X_train,y,'red')
axs[1].scatter(y,y_train)
axs[1].plot(x,z,'red')
# -
# The training part consists of optimizing the loss of our model. In order to do this, we first need to implement a loss function and then try to optimise it
# ## Part 2 :
# ## Implement a loss function : Logloss
# $$ logloss = -\sum \limits_{i=1}^{n}\left[y\log\hat{y} + (1-y)\log(1-\hat{y})\right] $$
# ### Step 1 : logloss Implementation
# +
def logloss(y_hat,y):
return (y*np.log(y_hat)+(1-y)*np.log(1-y_hat))
def cost(a,b,x,y_true):
y_pred = sigmoid(a,b,x)
loss = -np.sum(logloss(y_pred,y_true))
return loss
# -
loss = cost(a,b,X_train,y_train)
loss
# ### Step 2 : Loss function (logloss) Visualization
# In this part we will only focus on optimising the Intercept parameter and see how the value of our loss function changes when we move our intercept value from -4 to 4 with a step equal to 1 at each iteration.
interceptVect = []
for b in range(-4,4,1):
interceptVect.append(cost(a,b,X_train,y_train))
plt.figure(figsize=[10,5])
plt.scatter(np.arange(-4,4,1),interceptVect)
plt.plot(np.arange(-4,4,1),interceptVect,'green')
plt.xlabel('Intercept')
plt.ylabel('Log-Loss')
plt.show()
# ## Part 3 : Logistic model evaluation before any optimisation
y_pred = sigmoid(a,b,X_test)
y_pred
y_predH = np.where(y_pred>0.5,1,0)
y_predH
result = pd.DataFrame({'test set':X_test,'ground truth':y_test,'predictions':y_predH})
result.head(10)
# +
from sklearn.metrics import accuracy_score
print('accuracy : ',accuracy_score(y_test,y_predH))
# -
# ## Part 4 : Gradient Descent Implementation
# $$ logloss = -\sum \limits_{i=1}^{n}\left[y\log\hat{y} + (1-y)\log(1-\hat{y})\right], \quad \text{with}$$
# $$\hat{y} = \frac{1}{1+e^{-(a + bx)}}. \text{ After applying the chain rule we get:}$$<br>
# $$ \frac{\partial{logloss}}{\partial{a}} = \sum\limits_{i=1}^{n}({\hat{y}-y}), \quad \text{and} $$
# $$ \frac{\partial{logloss}}{\partial{b}} = \sum\limits_{i=1}^{n}x({\hat{y}-y}) $$ <br>
# link for partial derivatives details : https://medium.com/analytics-vidhya/derivative-of-log-loss-function-for-logistic-regression-9b832f025c2d
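#
# Before coding the updates, we can sanity-check the intercept formula numerically: the analytic gradient should match a finite-difference estimate of the `cost` function defined earlier (the step size `eps` below is an arbitrary choice).

# +
eps = 1e-5  # finite-difference step (arbitrary choice)
y_hat = sigmoid(a, b, X_train)
analytic_grad_a = np.sum(y_hat - y_train)
numeric_grad_a = (cost(a + eps, b, X_train, y_train) - cost(a - eps, b, X_train, y_train)) / (2 * eps)
print('analytic gradient:', analytic_grad_a, ' numerical estimate:', numeric_grad_a)
# -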
# ### Step 1 :
# ### Optimising the intercept parameter (a)
epoch = 500
learningRate = 0.001
gradientStop = 0.005
#initialize weights
a = 3
b= 0.1
loss = cost(a,b,X_train,y_train)
f = np.linspace(-4,4,100)
c = (1/(1+np.exp(-f)))
for i in range(epoch):
#show training
clear_output(wait=True)
fig,axs = plt.subplots(1,2,figsize=(20,5))
y = liniar(a,b,X_train)
axs[0].scatter(X_train,y_train)
axs[0].plot(X_train,y,'red')
axs[1].scatter(y,y_train)
axs[1].plot(f,c,'red')
plt.show()
#-------------------------------------
#optimization algorithm
#calculate loss
loss = cost(a,b,X_train,y_train)
print('Before training ---> logloss : ',loss,' epoch : ',i,' intercept : ',a)
#intercept derivative
y_hat = sigmoid(a,b,X_train)
interceptDv = np.sum(y_hat-y_train)
#intercept update
a = a-learningRate*interceptDv
#break if the algorithm converge
if(np.abs(learningRate*interceptDv)<gradientStop):
break;
# Optimizing only one parameter does not lead to an optimal solution; that's why we need to update all the model's parameters simultaneously when we apply any optimization algorithm.
# ### Step 2 :
# ### Full Gradient Descent implementation : optimization of both intercept (a) and slope (b)
epoch = 500
learningRate = 0.001
gradientStop = 0.003
#initialize weights
a = 3
b= 0.1
f = np.linspace(-5,5,100)
c = (1/(1+np.exp(-f)))
for i in range(epoch):
#show training
clear_output(wait=True)
fig,axs = plt.subplots(1,2,figsize=(20,5))
y = liniar(a,b,X_train)
axs[0].scatter(X_train,y_train)
axs[0].plot(X_train,y,'red')
axs[1].scatter(y,y_train)
axs[1].plot(f,c,'red')
plt.show()
#-------------------------------------
#optimization algorithm
#calculate loss
loss = cost(a,b,X_train,y_train)
print('Before training ---> logloss : ',loss,' epoch : ',i,' intercept : ',a,' slope : ',b)
#intercept derivative
y_hat = sigmoid(a,b,X_train)
interceptDv = np.sum(y_hat-y_train)
slopeDv = np.sum(X_train*(y_hat-y_train))
#weights update
a = a-learningRate*interceptDv
b = b-learningRate*slopeDv
#break if the algorithm converge
if(np.abs(learningRate*interceptDv)<gradientStop) and (np.abs(learningRate*slopeDv)<gradientStop):
break;
# ## Prediction :
y_pred = sigmoid(a,b,X_test)
y_pred
y_predH = np.where(y_pred>0.5,1,0)
y_predH
result = pd.DataFrame({'test set':X_test,'ground truth':y_test,'predictions':y_predH})
result.head(10)
print('accuracy : {:.4}'.format(accuracy_score(y_test,y_predH)))
print('model parameters : intercept=',a,' slope=',b)
# ## Logistic regression using the sklearn library :
# +
from sklearn.linear_model import LogisticRegression
#initialize logistic regression
logreg = LogisticRegression(solver='newton-cg')
#train the model
logreg.fit(X_train.reshape(-1,1),y_train)
#prediction
y_pred = logreg.predict(X_test.reshape(-1,1))
#evaluation
print('accuracy : {:.4}'.format(accuracy_score(y_test,y_pred)))
# -
result = pd.DataFrame({'test set':X_test,'ground truth':y_test,'predictions':y_predH})
result.head(10)
| 7,107 |
/DeepLearning-Playground/NER/Untitled.ipynb
|
248d49bf6b25f4ac340b56415f0ceea12f0f7d25
|
[] |
no_license
|
subratcall/DeepLearning
|
https://github.com/subratcall/DeepLearning
| 0 | 0 | null | null | null | null |
Jupyter Notebook
| false | false |
.py
| 9,136 |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.15.2
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Intro
#
# This coding competition is modified from Kaggle's awesome [Microchallenges](https://www.kaggle.com/learn/microchallenges). Participants are required to implement a gaming strategy and upload it to the judging server. All uploaded strategies will be executed together. A winner will be picked based on the performance of his/her strategy.
#
# # Run
#
# ```
# docker build -t datomar/hackson-blackjack .
# docker run --rm -p 8888:8888 datomar/hackson-blackjack
# ```
#
# Then open a brower at http://localhost:8888, click `blackjack.ipynb`
# # Blackjack Rules
#
# Ready for a quick test of your logic and programming skills?
#
# In this challenge, you will write the logic for a blackjack playing program. Our dealer will test your program by playing 50,000 hands of blackjack. You'll see how frequently your program won, and you can discuss how your approach stacks up against others in the challenge.
#
# 
#
# We'll use a slightly simplified version of blackjack (aka twenty-one). In this version, there is one player (who you'll control) and a dealer. Play proceeds as follows:
#
# - The player is dealt two face-up cards. The dealer is dealt one face-up card.
# - The player may ask to be dealt another card ('hit') as many times as they wish. If the sum of their cards exceeds 21, they lose the round immediately.
# - The dealer then deals additional cards to himself until either:
# - The sum of the dealer's cards exceeds 21, in which case the player wins the round, or
# - The sum of the dealer's cards is greater than or equal to 17. If the player's total is greater than the dealer's, the player wins. Otherwise, the dealer wins (even in case of a tie).
#
# When calculating the sum of cards, Jack, Queen, and King count for 10. Aces can count as 1 or 11. (When referring to a player's "total" above, we mean the largest total that can be made without exceeding 21. So A+8 = 19, A+8+8 = 17.)
#
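# As an illustration of the ace rule above, here is a small helper (a sketch for intuition only; it is not used by the game engine) that computes the best total of a hand:

# +
def best_total(non_ace_sum, n_aces):
    """Largest hand total that does not exceed 21, counting each ace as 1 or 11."""
    total = non_ace_sum + n_aces           # count every ace as 1 first
    while n_aces > 0 and total + 10 <= 21:
        total += 10                        # upgrade one ace from 1 to 11
        n_aces -= 1
    return total

print(best_total(8, 1))   # A + 8     -> 19
print(best_total(16, 1))  # A + 8 + 8 -> 17
# -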
# +
### Don't change content in this cell
import pandas as pd
from tqdm import tqdm
from game import *
### strategy of NPC player 1
random_hit = """
def should_hit(player_total, dealer_card_val, player_aces):
import random
bet = random.randint(1, 10)
hit = (bet % 2 ==0)
return hit,bet
"""
### strategy of NPC player 2
always_no = """
def should_hit(player_total, dealer_card_val, player_aces):
return False,1
"""
# -
#
# # The Blackjack Player
# You'll write a function representing the player's decision-making strategy. It returns a tuple of (bool, int).
# The boolean value is the decision to `hit` or `stay`. The int value must be within 1..10 and is how much you want to bet on this hand. The cell above contains 2 strategies in string format; you can use them as examples.
#implement your strategy here
def should_hit(player_total, dealer_card_val, player_aces):
    return False, 1
# # Try your strategy out
# You can test your strategy against the 2 NPC players.
# +
simulation = simulate(True)
simulation.add(1,"You", strategy=should_hit)
simulation.add(2,"NPC1", strategy=random_hit)
simulation.add(3,"NPC2", strategy=always_no)
result = simulation.one_game()
print("\n\n******* Game Result ********\n\n")
for p in result:
name,_,_, m = result[p]
print("*", m)
# -
# # Compete with NPC
#
# Once you feel satisfied with your strategy, upload the `should_hit` function to the judging backend. The winner will be the player with the highest positive bankroll. If all players lose money, then the highest winning rate determines the winner.
#
# The following is how the judging function works.
# +
simulation = simulate(False)
simulation.add(1,"You", should_hit)
simulation.add(2,"NPC1", random_hit)
simulation.add(3,"NPC2", always_no)
data = simulation.simulate(n_games=50000)
df = pd.DataFrame(data, columns =['Name', 'Wins', 'Bankroll'])
print(df)
winner = df.loc[df['Bankroll'].idxmax()]
print(f"\n\nThe winner is {winner['Name']}")
# -
model = BertForSequenceClassification.from_pretrained(
args.pretrained_model_name,
config=pretrained_model_config,
)
# + [markdown] id="lYtJXijM8PN8"
# # Preparing for training
# Prepare the Task and the Trainer.
# + id="-FFn4MSz8SWu"
from ratsnlp.nlpbook.classification import ClassificationTask
task = ClassificationTask(model, args)
# + id="18W4vRtR8UTx"
trainer = nlpbook.get_trainer(args)
# + [markdown] id="KteHdhBT8X0e"
# # Training
# Start training with the prepared data and model. The training artifacts (checkpoints) are saved to the prepared location on the Google Drive mounted earlier (`/gdrive/My Drive/nlpbook/checkpoint-doccls`).
# + id="SDr3M_nF8l7M"
trainer.fit(
task,
train_dataloaders=train_dataloader,
val_dataloaders=val_dataloader,
)
# Support Vector Machines separate two classes by picking the hyperplane that maximizes the distance to the nearest data point on each side. When the classes are not linearly separable, SVMs map the data into a higher-dimensional space where a linear separation can hopefully be found. SVMs often achieve very good performance in text classification tasks.
#
# [Logistic Regression models](https://en.wikipedia.org/wiki/Logistic_regression), finally, model the log-odds $l$, or $log(p/(1-p))$, of a class as a linear model and estimate the parameters $\beta$ of the model during training:
#
# \begin{equation*}
# l = \beta_0 + \sum_{i=1}^n \beta_i x_i
# \end{equation*}
#
# Like SVMs, they often achieve great performance in text classification.
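# To make the link concrete: the log-odds $l$ above is turned into a probability with the logistic (sigmoid) function, $p = 1/(1+e^{-l})$. A tiny numeric sketch (the coefficients below are made up for illustration, not fitted values):

# +
import numpy as np

beta = np.array([0.5, -1.2, 2.0])   # made-up parameters: beta_0, beta_1, beta_2 (illustration only)
x = np.array([1.0, 0.3, 0.8])       # leading 1 for the intercept, followed by two feature values
log_odds = beta @ x
probability = 1 / (1 + np.exp(-log_odds))
print(log_odds, probability)
# -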
# ### Simple training
#
# We train our three classifiers in Scikit-learn with the `fit` method, giving it the preprocessed training text and the correct classes for each text as parameters.
# +
from sklearn.naive_bayes import MultinomialNB
from sklearn.linear_model import LogisticRegression
from sklearn.svm import LinearSVC
nb_classifier = MultinomialNB()
svm_classifier = LinearSVC()
lr_classifier = LogisticRegression(multi_class="ovr")
print("Training Naive Bayes classifier...")
nb_classifier.fit(train_preprocessed, train_data.target)
print("Training SVM classifier...")
svm_classifier.fit(train_preprocessed, train_data.target)
print("Training Logistic Regression classifier...")
lr_classifier.fit(train_preprocessed, train_data.target)
# -
# Let's find out how well each classifier performs. To find out, we have each classifier `predict` the label for all texts in our preprocessed test set.
nb_predictions = nb_classifier.predict(test_preprocessed)
svm_predictions = svm_classifier.predict(test_preprocessed)
lr_predictions = lr_classifier.predict(test_preprocessed)
# Now we can compute the accuracy of each model: the proportion of test texts for which the predicted label is the same as the target label. The Naive Bayes classifier assigned the correct label in 77.4% of the cases, the logistic regression model achieves an accuracy of 82.8%, and the Support Vector Machine got the label right 85.3% of the time.
# +
import numpy as np
print("NB Accuracy:", np.mean(nb_predictions == test_data.target))
print("SVM Accuracy:", np.mean(svm_predictions == test_data.target))
print("LR Accuracy:", np.mean(lr_predictions == test_data.target))
# -
# ### Grid search
# Still, it's a bit too early to announce the winner. It's very likely we haven't yet got the most from our classifiers. When we trained them above, we just used the default values for most hyperparameters. However, these hyperparameter values can have a big impact on accuracy. Therefore we want to explore the parameter space a bit more extensively, and find out what hyperparameter values give the best results. We do this with so-called grid search. In grid search, we define a grid of hyperparameter values that we want to explore. Scikit-learn then steps through this grid to find the best combination. It does this with $n$-fold cross-validation: for each parameter combination in the grid, it fits a predefined number of models ($n$, the `cv` parameter in `GridSearchCV`). It splits up the training data into $n$ folds, fits a model on all but one of these folds, and tests it on the held-out fold. When it has done this $n$ times, it computes the average performance, and moves on. It works through the full hyperparameter grid in this way and keeps the model with the best average performance over the folds.
#
# In this example, we'll experiment with the $C$ hyperparameter. $C$ controls the degree of regularization in support vector machines and logistic regression. Regularization combats overfitting by imposing a penalty on large parameter values in the model. The lower the $C$ value, the more regularization is applied.
# +
from sklearn.model_selection import GridSearchCV
parameters = {'C': np.logspace(0, 3, 10)}
parameters = {'C': [0.1, 1, 10, 100, 1000]}
print("Grid search for SVM")
svm_best = GridSearchCV(svm_classifier, parameters, cv=3, verbose=1)
svm_best.fit(train_preprocessed, train_data.target)
print("Grid search for logistic regression")
lr_best = GridSearchCV(lr_classifier, parameters, cv=3, verbose=1)
lr_best.fit(train_preprocessed, train_data.target)
# -
# When grid search has been completed, we can find out what hyperparameter values led to the best-performing model.
# +
print("Best SVM Parameters")
print(svm_best.best_params_)
print("Best LR parameters:")
print(lr_best.best_params_)
# -
# Let's see if these best models now perform any better on our test data. For the SVM, the default setting seems to have worked best: our other values didn't lead to a higher accuracy. For logistic regression, however, the default $C$ value was clearly not the most optimal one. When we increase $C$ to $1000$, the logistic regression model performs almost as well as the SVM.
# +
best_svm_predictions = svm_best.predict(test_preprocessed)
best_lr_predictions = lr_best.predict(test_preprocessed)
print("Best SVM Accuracy:", np.mean(best_svm_predictions == test_data.target))
print("Best LR Accuracy:", np.mean(best_lr_predictions == test_data.target))
# -
# ## Extensive evaluation
# ### Detailed scores
# So far we've only looked at the accuracy of our models: the proportion of test examples for which their prediction is correct. This is fine as a first evaluation, but it doesn't give us much insight in what mistakes the models make and why. We'll therefore perform a much more extensive evaluation, in three steps. Let's start by computing the precision, recall and F-score of the best SVM for the individual classes:
#
# - Precision is the number of times the classifier predicted a class correctly, divided by the total number of times it predicted this class.
# - Recall is the proportion of documents with a given class that were labelled correctly by the classifier.
# - The F1-score is the harmonic mean between precision and recall: $2*P*R/(P+R)$
#
# The classification report below shows, for example, that the sports classes were quite easy to predict, while the computer and some of the politics classes proved much more difficult.
# +
from sklearn.metrics import classification_report, confusion_matrix
print(classification_report(test_data.target, best_svm_predictions, target_names=test_data.target_names))
# -
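# These per-class scores can also be derived directly from a confusion matrix. The cell below is a small sketch (not part of the original analysis) that recomputes precision, recall and F1 per class with NumPy:

# +
cm = confusion_matrix(test_data.target, best_svm_predictions)

precision_per_class = np.diag(cm) / cm.sum(axis=0)   # correct predictions / all predictions of the class
recall_per_class = np.diag(cm) / cm.sum(axis=1)      # correct predictions / all true members of the class
f1_per_class = 2 * precision_per_class * recall_per_class / (precision_per_class + recall_per_class)

for name, p, r, f in zip(test_data.target_names, precision_per_class, recall_per_class, f1_per_class):
    print(f"{name:25s} precision={p:.2f} recall={r:.2f} f1={f:.2f}")
# -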
# ### Confusion matrix
#
# Second, we're going to visualize our results in even more detail, using a so-called confusion matrix. A confusion matrix helps us better understand the errors our classifier makes. Its rows display the actual labels, its columns show the predictions of our classifier. This means all correct predictions will lie on the diagonal, where the actual label and the predicted label are the same. The predictions elsewhere in the matrix help us understand what classes are often mixed up by our classifier. Our confusion matrix shows, for example, that 91 documents with the label `talk.politics.misc` incorrectly received the label `talk.politics.guns`. Similarly, our classifier sometimes fails to tell apart the two religion classes, and gets quite mixed up in the computer topics in the top left corner.
# +
# %matplotlib inline
import pandas as pd
import seaborn as sn
import matplotlib.pyplot as plt
conf_matrix = confusion_matrix(test_data.target, best_svm_predictions)
conf_matrix_df = pd.DataFrame(conf_matrix, index=test_data.target_names, columns=test_data.target_names)
plt.figure(figsize=(15, 10))
sn.heatmap(conf_matrix_df, annot=True, vmin=0, vmax=conf_matrix.max(), fmt='d', cmap="YlGnBu")
plt.yticks(rotation=0)
plt.xticks(rotation=90)
# -
# ### Explainability
# Finally, we'd like to perform a more qualitative evaluation of our model by taking a look at the features that it assigns the highest weight for each of the classes. This will help us understand if the model indeed captures the phenomena we'd like it to capture. A great Python library to do this is `eli5`, which works together seamlessly with `scikit-learn`. Its `explain_weights` function takes a trained model, a list of feature names and target names, and prints out the features that have the highest positive values for each of the targets. The results convince us that our SVM indeed models the correct information: it sees a strong link between the "atheism" class and words such as _atheism_ and _atheists_, between "computer graphics" and words such as _3d_ and _image_, and so on.
# +
import eli5
eli5.explain_weights(svm_best.best_estimator_,
feature_names = preprocessing.named_steps["vect"].get_feature_names(),
target_names = train_data.target_names
)
# -
# ## Conclusions
# This notebook has demonstrated how you can quickly train a text classifier. Although the types of models we've looked at predate the deep learning revolution in NLP, they're often a quick and effective way of training a first classifier for your text classification problem. They not only provide a good baseline and help you understand your data and problem better; in some cases, you may find they are quite hard to beat even with state-of-the-art deep learning models.
| 14,172 |
/project.ipynb
|
f13e8272a38e401ed1afc20b31bee4c69423c419
|
[] |
no_license
|
hemanthpachimadla/Internship_studio_project
|
https://github.com/hemanthpachimadla/Internship_studio_project
| 0 | 0 | null | null | null | null |
Jupyter Notebook
| false | false |
.py
| 395,586 |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.15.2
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="sPnooVTCdRHe" colab_type="text"
# # Project Title : Exploratory Data Analysis of Car Features
# + [markdown] id="uA2psp-ud12v" colab_type="text"
# The data comes from the Kaggle dataset "Car Features and MSRP". It describes almost 12,000 car models, sold in the USA between 1990 and 2017, with the market price (new or used) and some features. We apply some descriptive statistics and then move on to the exploration stage, where we plot various graphs and mine the hidden insights. In this project, we perform exploratory data analysis on how the different features of a car relate to its price.
#
# + [markdown] id="CH0k4ZIWo3fo" colab_type="text"
# ---
# ### 1.0. Import the dataset and the necessary libraries, check datatype, statistical summary, shape, null values etc.
# + [markdown] id="dVblbhWapStY" colab_type="text"
# Importing the dataset and the necessary libraries: pandas, numpy, seaborn and matplotlib. We can complete 80% of the project with these libraries.
# + id="auPKJixKKOsN" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 293} outputId="4fd2bb0c-0966-45f8-dbbf-5ac70430b105"
#Importing libraries
import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
#Loading the dataset
dataset = pd.read_csv('data.csv')
dataset.head(3)
# + [markdown] id="UKgJhJwLrQv5" colab_type="text"
# Checking the data types, statistical summary, shape and null values using info(), describe() and shape.
# + id="rCAvMrRANAeH" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 400} outputId="03f08e65-5ac1-49d7-fd99-6c47c19283e6"
dataset.info()
# + id="nUqRghMdD00O" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 291} outputId="3951df4d-55c7-4ce1-bf38-1d0d3c4b0206"
dataset.corr()
# + id="tnPV61IlNZ8O" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 291} outputId="78972c68-2df3-49ec-a3e4-0b652a15fa51"
dataset.describe()
# + id="4Z0Dw68_rHko" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 33} outputId="8a4025b8-dd7f-41e4-e725-18760478166b"
dataset.shape
# + [markdown] id="9VKq5Ighrkl2" colab_type="text"
# ---
#
#
# ### 2.0 : Dropping less relevant columns from the project.
# + [markdown] id="ndne59ZIBJjI" colab_type="text"
# I dropped these columns because they have a much lower standard deviation than the other columns, which suggests they are less relevant to the data.
#
# From the summary above, look at the min and max values of the 'Number of Doors' column:
# min=2 and max=4, which means the column essentially takes only two values, 2 and 4. These values have little relevance to, and little effect on, the target variable. A quick check of this is shown below.
#
# The 'Market Category' column has little importance here, because we cannot say a given car is meant only for a certain group of people; another reason I dropped it is that it has more null values than any other column in the data.
#
#
#
#
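# A quick way to sanity-check this before dropping (a small sketch reusing the
# `dataset` frame loaded above; the exact checks shown are illustrative):
# +
print(dataset['Number of Doors'].nunique())        # how many distinct door counts occur
print(dataset['Market Category'].isnull().sum())   # null count in Market Category
print(dataset.std(numeric_only=True))              # spread of each numeric column
# -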
# + id="6GgGxUWzOfBl" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 33} outputId="8b4f912d-a1c2-4619-9655-dc60ef7bdc36"
df=dataset.drop(['Number of Doors','Market Category'],axis=1)
df.shape
# + id="rcf3UEmRQrwR" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 300} outputId="cf0320d0-5c86-4c49-dfa6-06b60ffa6dab"
df.head()
# + [markdown] id="3jP_4EOkKYTm" colab_type="text"
# ---
#
#
# ### 3.0 : Rename the columns
#
#
# * "Engine HP" to "HP"
# * "Engine Cylinders" to "Cylinders"
# * "Transmission Type" to "Transmission"
# * "Engine Cylinders" to "Cylinders"
# * "Driven_Wheels" to "Drive Mode"
# * "highway MPG" to "MPG-H"
# * "city mpg" to "MPG-C"
# * "MSRP" to "Price"
# + id="xO1CS8uclyCp" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 200} outputId="aa146e87-a63e-43a9-da0b-84ea9e01aeeb"
df_new = df.rename(columns={'Engine HP': 'HP',"Engine Cylinders": "Cylinders", "Transmission Type": "Transmission", "Driven_Wheels": "Drive Mode","highway MPG": "MPG-H", "city mpg": "MPG-C", "MSRP": "Price"})
df_new.head(5)
# + [markdown] id="LVA-jOpgMPrX" colab_type="text"
# ---
# ### 4.0 : Check for any duplicates in the data, check for null values and missing data and remove them.
# + [markdown] id="AuuQRHsBogIx" colab_type="text"
# The code below checks for duplicate rows in the data; the duplicates are then removed with drop_duplicates().
# + id="SKKqjfiDRSxd" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 33} outputId="dc52a648-2fab-4cd7-e90b-d48859b8d1cc"
duplicate_rows_df=df_new[df_new.duplicated()]
print("No.of duplicate rows" ,duplicate_rows_df.shape)
# + id="-x0nvLM6RS17" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 200} outputId="0b28b7ab-f59e-493e-9d82-2f91e586e674"
df_new=df_new.drop_duplicates()
df_new.head(5)
# + id="qXx0we0PS-Q4" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 33} outputId="e77112f7-136f-4cf1-ae2a-959bd6aeb8c0"
df_new.shape
# + [markdown] id="aT3z1k8jpEw5" colab_type="text"
# Next we check for missing data and null values. There are 69 missing values in HP and 3 in Engine Fuel Type.
# + id="XbUX-ATsTSgD" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 267} outputId="8e71abf4-c637-45dd-f514-6cde75860e71"
print(df_new.isnull().sum())
# + [markdown] id="h_1JreZTplN5" colab_type="text"
# Now we're going to drop the null values using dropna().
# + id="8RIgxAQ8UkuF" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 267} outputId="b9f46342-c20a-4f41-9155-b9bf86f4989e"
df_new=df_new.dropna()
df_new.count()
# + [markdown] id="LxhnSrD_qKu8" colab_type="text"
# Here we count the null values in each column again to confirm there is no more missing data.
# + id="ZaTh65oCUxbs" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 267} outputId="2ec960b0-1980-4d64-ff7d-1ea84997a42d"
print(df_new.isnull().sum())
# + [markdown] id="pSrr8yanqhQD" colab_type="text"
# ---
# ### 5.0 : Plot graphs of various columns to check for outliers and remove those data points from the dataset.
# + [markdown] id="feUn-X6zq3gq" colab_type="text"
# We use boxplots because they show how the data in a column is distributed: the box spans the quartiles of the data, which makes potential outliers easy to spot. Here we look for outliers in each numeric column.
# + id="RLiZ2A9VVDzR" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 296} outputId="1b24168a-dd71-4fa9-b3da-6f8d5bdac9f4"
sns.boxplot(x=df_new['Price'])
# + id="hsAEr87pVEM6" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 296} outputId="d2fc2bac-37d5-44de-e0a1-311bd316c2a7"
sns.boxplot(x=df_new['HP'])
# + id="TbWpy2BOWAnf" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 296} outputId="34df0f1c-76cf-4897-93ef-b709e705c2e2"
sns.boxplot(x=df_new['MPG-H'])
# + id="5bi5VPQRWIMx" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 296} outputId="e09cf2be-7cb8-4eb1-8ac8-6cb54fe7eea8"
sns.boxplot(x=df_new['MPG-C'])
# + id="I28_Bh4hWYPn" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 296} outputId="c1a92f9c-07b4-4d9c-d03a-dd8a33b3ead4"
sns.boxplot(x=df_new['Popularity'])
# + [markdown] id="Yn3hI5-DrYbT" colab_type="text"
# Now we calculate the IQR (interquartile range) of each of the columns above and print the values.
# + id="2dGWoVgSWdKP" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 150} outputId="57a29f5a-9cb0-48c5-8cb4-b698e7d3717d"
Q1=df_new.quantile(0.25)
Q2=df_new.quantile(0.50)
Q3=df_new.quantile(0.75)
IQR=Q3-Q1
print(IQR)
# + [markdown] id="kWQf60DproTr" colab_type="text"
# Now for the key step: using the quartiles and the IQR, we remove the outliers from the data.
#
# After this we are left with clean data containing no null values, duplicates or outliers.
#
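# The rule applied in the next cell is the standard 1.5 * IQR criterion: a value x in a
# column is flagged as an outlier when x < Q1 - 1.5*IQR or x > Q3 + 1.5*IQR, and any row
# containing such a value in any column is dropped.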
# + id="VxooZFt6XFIm" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 33} outputId="8c037760-6e79-4d1a-f5a9-67d25de5e79c"
df_new=df_new[~((df_new<(Q1-1.5*IQR))|(df_new>(Q3+1.5*IQR))).any(axis=1)]
df_new.shape
# + [markdown] id="lk1ddf8Js5f7" colab_type="text"
# ---
#
#
# ### 6.0 : What car brands are the most represented in the dataset and find the average price among the top car brands.
# + [markdown] id="MZplxf8XtXPv" colab_type="text"
# Here we use matplotlib to plot the top 10 car brands according to how frequently they appear in the data.
# + id="hbrzimTLYphP" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 390} outputId="afb8b07b-8d7c-4678-fc5d-3a1c50d1de45"
# Percentage of car per brand
counts=df_new['Make'].value_counts()*100/sum(df_new['Make'].value_counts())
#Top 10 car brands
popular_labels=counts.index[:10]
#Plot
plt.figure(figsize=(13,6))
plt.barh(popular_labels,width=counts[:10])
plt.title('Top 10 car brands')
plt.show()
# + [markdown] id="yRRsHDkEuqhp" colab_type="text"
# Now we create a table of brand ('Make') and average price.
# Note that only 9 of the selected brands appear in the filtered data, each shown with its mean price.
# + id="8merYQKMbPvJ" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 200} outputId="667e306a-eb1b-461a-f72a-14c77b0ca2e9"
prices=df_new[['Make','Price']].loc[(df_new['Make']=='infiniti')|
(df_new['Make']=='Suzuki')|
(df_new['Make']=='Honda')|
(df_new['Make']=='Mazda')|
(df_new['Make']=='Dodge')|
(df_new['Make']=='GMC')|
(df_new['Make']=='Nissan')|
(df_new['Make']=='Volkswagen')|
(df_new['Make']=='Toyota')|
(df_new['Make']=='Chevrolet')].groupby('Make').mean()
print(prices)
# + [markdown] id="he0SLllWwRvo" colab_type="text"
# ---
# ### 7.0 : Plot the correlation matrix and document your insights.
#
# + [markdown] id="-fCUOScmwfrt" colab_type="text"
# From the correlation matrix we can see a **high positive correlation** between **HP and Cylinders**: the more cylinders an engine has, the more horsepower it tends to produce.
#
# There is a **strong negative correlation** between **Highway MPG / City MPG** and **Horsepower (HP)**: the more horsepower, the lower the mileage.
# + id="E-xDsURQvpe8" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 395} outputId="35840d00-66d5-492c-c412-7220c0b8ded6"
plt.figure(figsize=(10,6))
corr=df_new.corr()
sns.heatmap(corr , cmap="BrBG",annot=True)
# + [markdown] id="XfQtZICrtHQj" colab_type="text"
# ---
# ### 8.0 : Perform EDA and plot different graphs and document your findings
# + [markdown] id="Dj8Spi8DuLk2" colab_type="text"
# Scatterplots place data points on horizontal and vertical axes to show how much one variable is affected by another. Here we create a scatterplot of "HP" against "Price" to show the relationship between horsepower and price.
# + id="dL8KAFCmmkOT" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 388} outputId="64ea14e7-dab4-4c3b-b50f-4c673069b739"
# Scatterplot
fig,ax=plt.subplots(figsize=(10,6))
ax.scatter(df_new['HP'],df_new['Price'])
ax.set_xlabel('HP')
ax.set_ylabel('Price')
plt.show()
# + [markdown] id="smtUUGF4wuoi" colab_type="text"
# Here we create a bar graph for the car 'body' variable, shown below.
# The bar graph shows the number of vehicles in each "Vehicle Style".
# + id="TB2qVF7WeW5O" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 625} outputId="5f1e43ac-9946-4e1c-aea5-02bfac5efa55"
# Bar chart for car 'body' variable
df_new['Vehicle Style'].value_counts().plot.bar(figsize=(12,8))
plt.title("Cars sold by body")
plt.xlabel("Body Type")
plt.ylabel("No.of vehicles")
# + [markdown] id="kP7HclcU0C1U" colab_type="text"
# A countplot is essentially a histogram or bar graph for a categorical variable: it shows the number of occurrences of each category.
#
# On the y-axis we use the "Vehicle Style" column and on the x-axis the count of vehicles in each style.
# The hue encodes the drive mode within each "Vehicle Style".
# + id="rEpUtp8-hLyx" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 312} outputId="7cdeae02-5e7a-4e53-94ad-78a0f32a7c39"
# Vehicle Style type and Drive type analysis
sns.countplot(y='Vehicle Style',data=df_new,hue='Drive Mode')
plt.title("Vehicle Type Vs Drive Mode Type")
plt.xlabel("Count of vehicles")
plt.ylabel("Vehicle Type")
# + [markdown] id="PZGKZmIU_uRN" colab_type="text"
# Now we create a new column 'price_group', assigning each car a group based on its price, and then plot a bar chart of the groups using the labels '<20K', '20K - 39K', '40K - 59K', '60K - 79K', '80K - 99K' and '>100K'.
# + id="owb3dAUnjADa" colab_type="code" colab={}
# Create a new column 'Price_group' and assign the value based on car price
df_new['price_group']=pd.cut(df_new['Price'],[0,20000,40000,60000,80000,100000,600000],
labels=['<20K','20K - 39K','40K - 59K','60K - 79K','80K - 99K','>100K'],include_lowest=True)
df_new['price_group']=df_new['price_group'].astype(object)
# + id="hB4Ch2C6lcVw" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 537} outputId="696bc361-d25c-4496-9d8f-4ee733d573ba"
(df_new['price_group'].value_counts()/len(df_new)*100).plot.bar(figsize=(10,8))
# + [markdown] id="7tKji1AR0HrP" colab_type="text"
# ---
#
#
# ### 9.0 : build a machine learning model with Price as the target variable.
#
#
#
# + [markdown] id="EJbhkb0KSHcj" colab_type="text"
# We split the dataset in an 80/20 ratio, scale the data and fit a multiple linear regression to the training set.
#
# Multiple linear regression (MLR), also known simply as multiple regression, is a statistical technique that uses several explanatory variables to predict the outcome of a response variable.
#
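# In equation form, the model fitted below predicts the (scaled) price as a linear
# combination of the predictors:
#
# Price ≈ b0 + b1*Year + b2*HP + b3*Cylinders + b4*(MPG-H) + b5*(MPG-C) + b6*Popularity,
#
# where the coefficients b0..b6 are estimated by least squares on the training set.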
# + id="JufSaGkPl7bg" colab_type="code" colab={}
x=df_new[['Year','HP','Cylinders','MPG-H','MPG-C','Popularity']].values
y=df_new['Price'].values
# + id="LStx3dFKCUxZ" colab_type="code" colab={}
# Scaling the data
from sklearn.preprocessing import StandardScaler
ss=StandardScaler()
X=ss.fit_transform(x)
Y=ss.fit_transform(y.reshape(-1,1))
# + [markdown] id="3GmLRXVjaRJa" colab_type="text"
# Splitting the dataset into 80 and 20 ratio for training and test data.
# + id="l45mtCrTD0Sw" colab_type="code" colab={}
# Splitting the dataset into the training set and test set
from sklearn.model_selection import train_test_split
X_train,X_test,Y_train,Y_test=train_test_split(X,Y,test_size=0.2,random_state=0)
# + id="AJa7H4NbExWg" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 33} outputId="68c649a1-32d7-4912-90b3-4ffaf7fe52cb"
# Fitting Multiple Linear Regression to the training set
from sklearn.linear_model import LinearRegression
reg=LinearRegression()
reg.fit(X_train,Y_train)
# + id="N8TvoppNHJHz" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 281} outputId="209093ce-b6ab-4521-c7e1-10c0b355e938"
# Predicting the test set results
y_pred=reg.predict(X_test)
plt.scatter(Y_test,y_pred)
# + [markdown] id="p48EnRmGVDal" colab_type="text"
# Seaborn distplot lets you show a histogram with a line on it. This can be shown in all kinds of variations.
# + id="eB11kMRXHp9e" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 281} outputId="13c250e1-6040-4688-f8d3-f66663ecf671"
sns.distplot((Y_test-y_pred),bins=50)
# + [markdown] id="rwEIWIlXToGl" colab_type="text"
# checking the performance of the Linear Regression model over metrics like R square, Root Mean Squared Error, Mean Absolute Error etc.
# + id="YzOwJ6GoH_6j" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 67} outputId="6ec6bea7-4a0a-44a2-9744-ad7e93f25ddd"
from sklearn import metrics
print("Mean Absolute Error : ",metrics.mean_absolute_error(Y_test,y_pred))
print("R2 Score : ",metrics.r2_score(Y_test,y_pred))
print("Root Mean Squared Error : ",np.sqrt(metrics.mean_squared_error(Y_test,y_pred)))
# + [markdown] id="6799F8FAKHug" colab_type="text"
# ---
#
#
# ### 10.0 : Try different algorithms and check their performance over metrics like R square, RMSE, MAE etc
# + [markdown] id="en_KFU5kYFgK" colab_type="text"
# Fitting Polynomial Regression to the training set.
#
# Polynomial Regression is a form of linear regression in which the relationship between the independent variable x and dependent variable y is modeled as an nth degree polynomial.
#
#
#
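# For a single feature x, a degree-n polynomial regression fits
#
# y ≈ b0 + b1*x + b2*x^2 + ... + bn*x^n.
#
# With several features, as below (degree 4), PolynomialFeatures first expands the inputs
# into all polynomial and interaction terms up to that degree, and a linear model is then
# fitted on the expanded features.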
# + id="XZGP6UqvInpb" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 33} outputId="6dfa3f9c-7868-474b-aa19-209c61be5762"
from sklearn.preprocessing import PolynomialFeatures
p_reg=PolynomialFeatures(degree=4)
X_poly=p_reg.fit_transform(X_train)
p_reg.fit(X_poly,Y_train)
lin_reg=LinearRegression()
lin_reg.fit(X_poly,Y_train)
# + id="fSzcd6xUMZU7" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 281} outputId="4de8b019-efd3-4e51-d98e-b86eacb44045"
# Predicting result with Polynomial Regression
y_pred=lin_reg.predict(p_reg.fit_transform(X_test))
plt.scatter(Y_test,y_pred)
# + [markdown] id="64HV83XnVdgB" colab_type="text"
# Seaborn distplot lets you show a histogram with a line on it. This can be shown in all kinds of variations.
# + id="zt9NM0cLNJuf" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 284} outputId="0373b7a3-a0c7-4097-d99c-6765c2e5bdd6"
sns.distplot((Y_test-y_pred),bins=50)
# + [markdown] id="sxlsJypwUeVl" colab_type="text"
# checking the performance of the Polynomial Regression model over metrics like R square, Root Mean Squared Error, Mean Absolute Error etc
# + id="A_2FL0vuNlND" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 67} outputId="aaac5a5a-e8b4-4933-f73d-3e4e0dd82d32"
print("Mean Absolute Error : ",metrics.mean_absolute_error(Y_test,y_pred))
print("R2 Score : ",metrics.r2_score(Y_test,y_pred))
print("Root Mean Squared Error : ",np.sqrt(metrics.mean_squared_error(Y_test,y_pred)))
# + [markdown] id="QmSBvSnbYr9v" colab_type="text"
# ---
#
#
# Fitting Support Vector Machine - Regression (SVR) to the training set.
#
# Support Vector Regression (SVR): a Support Vector Machine can also be used as a regression method, maintaining the main features that characterize the algorithm (the maximal margin).
# + id="V1zzVPtdNnpx" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 103} outputId="a952b93f-dc69-40eb-97db-bd8d75b62478"
# Fitting SVR to the dataset
from sklearn.svm import SVR
SVR_reg=SVR(kernel='rbf')
SVR_reg.fit(X_train,Y_train)
# + id="i7g5Wev5OxLb" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 281} outputId="35d85cd4-04f5-4555-c9dc-e1741e4276c9"
y_pred=SVR_reg.predict(X_test)
plt.scatter(Y_test,y_pred)
# + [markdown] id="iM3nEXJZVgKy" colab_type="text"
# Seaborn distplot lets you show a histogram with a line on it. This can be shown in all kinds of variations.
# + id="HAEwIbNbPJe4" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 281} outputId="568a7212-2c69-4f7a-c238-632ee4549f27"
sns.distplot((Y_test-y_pred),bins=50)
# + [markdown] id="vALAmF_SUgjQ" colab_type="text"
# checking the performance of the SVR model over metrics like R square, Root Mean Squared Error, Mean Absolute Error etc
# + id="gnoAPNbjPeC9" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 67} outputId="c45b5b81-a581-4fac-f4f5-85ed9e08f174"
print("Mean Absolute Error : ",metrics.mean_absolute_error(Y_test,y_pred))
print("R2 Score : ",metrics.r2_score(Y_test,y_pred))
print("Root Mean Squared Error : ",np.sqrt(metrics.mean_squared_error(Y_test,y_pred)))
# + [markdown] id="rqsKF89XZJ7d" colab_type="text"
# ---
#
#
# Fitting Random Forest Regression to the training set.
#
# Random forests or random decision forests are an ensemble learning method for classification, regression and other tasks that operate by constructing a multitude of decision trees at training time and outputting the class that is the mode of the classes (for classification) or the mean prediction of the individual trees (for regression).
# + id="ARKEp8AUPiXU" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 187} outputId="97401f0e-21ab-4966-ffa1-3a59ed3f3cea"
# Fitting Random Forest Regression to the dataset
from sklearn.ensemble import RandomForestRegressor
RF_reg=RandomForestRegressor(n_estimators=300,random_state=0)
RF_reg.fit(X_train,Y_train)
# + id="IbvkRI7hQT5j" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 281} outputId="72964005-ddb0-4d4c-9a6d-813749080ebf"
y_pred=RF_reg.predict(X_test)
plt.scatter(Y_test,y_pred)
# + [markdown] id="xvdzhSMDVizk" colab_type="text"
# Seaborn distplot lets you show a histogram with a line on it. This can be shown in all kinds of variations.
# + id="8OJZq2yjQkuV" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 281} outputId="ef2ee937-b8bd-4541-b4df-d27a23f10738"
sns.distplot((Y_test-y_pred),bins=50)
# + [markdown] id="k4Ou_mZQUicO" colab_type="text"
# checking the performance of the Random Forest Regression model over metrics like R square, Root Mean Squared Error, Mean Absolute Error etc
# + id="ACZyXKmdQwpm" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 67} outputId="b23da383-bf5a-4ec9-d235-800c05146d8f"
print("Mean Absolute Error : ",metrics.mean_absolute_error(Y_test,y_pred))
print("R2 Score : ",metrics.r2_score(Y_test,y_pred))
print("Root Mean Squared Error : ",np.sqrt(metrics.mean_squared_error(Y_test,y_pred)))
# + [markdown] id="1hy2xGbVbI2N" colab_type="text"
# ---
# ### Summary
# + [markdown] id="vvS67b-aV3Gi" colab_type="text"
# Of all the models tried, the Random Forest model is the best fit for this dataset.
#
# Its R2 score of **0.9321** is considerably higher than that of the other models.
# A random forest's nonlinear nature can give it a leg up over linear algorithms, making it a great option here.
| 22,882 |
/Group-Project-Report-FINAL.ipynb
|
92a11fa28baa55c7a4f09019a08b4058cb0856df
|
[] |
no_license
|
canberk17/Wine-Classifcation-in-R
|
https://github.com/canberk17/Wine-Classifcation-in-R
| 0 | 0 | null | null | null | null |
Jupyter Notebook
| false | false |
.r
| 672,667 |
# -*- coding: utf-8 -*-
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .r
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.15.2
# kernelspec:
# display_name: R
# language: R
# name: ir
# ---
# # Group Project Report
# #### Anton Buri (58151168), Canberk Kandemir (54411160), Danny Liu (54033675), Jasmine Wu (54422571)
# #### DSCI
# #### Due: Tuesday, April 7th, 2020 11:59pm
# ### Title
# “Data & Drink: Predicting the Quality of Red and White Wines”
# ### Introduction
# Winemakers have the ability to modify certain elements of wine they produce to impact consumer perceptions and maximize the appeal of their wines. Certain physiochemicals such as alcohol content, acidity and density can distinguish an excellent wine from a poor one.
#
# The question we are investigating is: Can certain combinations of physiochemicals be used to predict the perceived quality of wine?
#
# The dataset we use to conduct this analysis is from the UCI Machine Learning Repository and it contains data on red and white “Vinho Verde” wine samples from Portugal. There are around 5,000 instances, each characterized by 12 attributes such as residual sugar levels, pH and alcohol content, as well as a quality rating based on a scale of three to nine. Within this dataset, 25 percent of the observations come from red wine samples and 75 percent come from white wine samples.
# ### Preliminary exploratory analysis
# To conduct our analysis, the following libraries and packages are required:
# - Rvest
# - Expss
# - Repr
# - Tidyverse
# - Stringr
# - Forcats
# - DBI
# - RPostgres
# - Lubridate
# - Dplyr
# - Caret
# - GGcorrplot, and
# - GGally
# Additionally, to use the *ggcorrplot* and *expss* libraries it is necessary to install the *ggcorrplot* and *expss* packages.
install.packages('ggcorrplot')
install.packages("expss")
library(expss)
library(rvest)
library(repr)
library(tidyverse)
library(stringr)
library(forcats)
library(DBI)
library(RPostgres)
library(lubridate)
library(dplyr)
library(caret)
library(ggcorrplot)
library(GGally)
# Our dataset can be loaded into R using the *read_csv* command with no additional arguments. The data is read from its original source (Kaggle) via a URL. To do this, we were required to upload the original CSV file, which we downloaded from Kaggle, into Google Drive. We then made the file public within Google Drive and put the share link into the *read_csv* command.
#
# Note: There were no missing values or invalid data in the file so no observations had to be removed.
unscaled_wine <- read_csv("https://drive.google.com/uc?export=download&id=1cJfEyPrqqfBSt31iJawkaRWLC0PBlQ4l")
head(unscaled_wine)
# Given that our goal was to create a classifier that could predict the quality of wine based on various attributes, we were required to create a classification for each observation based on the value in the 'quality' column. To determine how we would do this, we created a histogram (Figure 1) so that we could observe the distribution of wine qualities in our dataset. The x-axis on the histogram describes the quality of wine on a scale from 3 to 9, and the y-axis describes the frequency of each quality.
options(repr.plot.width = 5, repr.plot.height = 4)
hist(unscaled_wine$quality,
breaks = 7,
main = "Figure 1: Histogram of Wine Qualities",
xlab = "Quality",
col = "white")
# Based on Figure 1, we saw that the distribution of qualities wasn't even; there were very few observations with qualities of 3, 4, 8 and 9. To build an accurate classifier it is necessary to have a fairly even mix of observations in each class, so as to avoid class imbalance. Therefore, we decided to focus our analysis on classifying wines into two categories: "good" and "poor". Doing this would allow us to predict from two categories rather than nine, as well as achieve an even distribution across categories.
#
# As observed in Figure 1, there is a more or less equal amount of observations with qualities of 6, 7, 8 or 9 as there are observations where quality is 3, 4 or 5. Therefore, we decided to classify a wine as "good" if it had a quality of 6, 7, 8 or 9 and "poor" if it has a quality of 3, 4 or 5. Note that no wine has a quality less than 3 or greater than 9.
#
# To do this, we created a column called 'rating' that assigns each observation a rating based on the previously mentioned criteria. Then, we factorized the values in this vector so that it could be used as a classifier in our model. These steps are shown in the code below.
unscaled_wine <- unscaled_wine %>%
mutate(rating = ifelse(quality < 6,
'poor',
'good')) %>%
mutate(rating = as.factor(rating))
head(unscaled_wine)
# The next step in our exploratory analysis was determining if the predictors for quality of wine differed among red and white wines. Given the differences in the production of red and white wines (Sussman, “What's the Difference Between Red and White Wine?”), we realized that these predictors might be different.
#
# To do this, we first split the dataset into two subsets red wine and white wine.
unscaled_red <- unscaled_wine %>%
filter(style == 'red')
unscaled_white <- unscaled_wine %>%
filter(style =='white')
head(unscaled_red)
head(unscaled_white)
# We then used a correlation matrix heatmap that allowed us to observe which attributes of each type of wine were highly correlated with quality. The heat map shows the correlation for different pairs of factors.
options(repr.plot.width = 12, repr.plot.height = 10)
corr_red <- unscaled_red %>%
select(-style,-rating) %>%
cor()
corr_white <- unscaled_white %>%
select(-style,-rating) %>%
cor()
ggcorrplot(corr_red,lab = TRUE,
title = "Figure 2: Correlation Matrix - Red Wine")
ggcorrplot(corr_white,lab = TRUE,
title = "Figure 3: Correlation Matrix - White Wine")
# Based on these correlation matrices, we saw that red and white wines do indeed differ in the attributes that predict their quality: some variables are strongly correlated with quality for red wines but not for white. This can be read off from the numbers and the colour intensity of each square; more intense colours indicate stronger correlations.
#
# Furthermore, from these correlation matrices we were able to observe which predictors were most strongly correlated with quality of wine and thus best to include in our analysis. Since our goal is to predict the quality of wine, we would like to find the most influential factors to use as predictors for quality. Because K-nearest neighbours classification tends to perform poorly with a large number of predictors, we saw it as important to select only predictors that had a strong correlation with wine quality (Timbers, Campbell and Lee, "Introduction to Data Science"). Specifically, we chose to include only variables whose absolute correlation with quality was greater than 0.2.
#
# Therefore, we proceeded with our analysis using only the following predictor variables:
#
# Red wine: volatile acidity, citric acid, sulphates, alcohol
#
# White wine: chlorides, density, alcohol
# Finally, before conducting our analysis we wanted to confirm that there was no class imbalance within each of our data subsets.
options(repr.plot.width = 5, repr.plot.height = 4)
hist(unscaled_red$quality,
main = "Figure 4: Histogram of Red Wine Classifications",
xlab = "Rating",
col = "darkred")
hist(unscaled_white$quality,
main = "Figure 5: Histogram of White Wine Classifications",
xlab = "Rating",
col = "cyan")
# Based on Figures 4 and 5 we confirmed that there was no class imbalance within the red and white wine datasets. Therefore, we proceeded with our analysis using the following dataframes. A summary of these dataframes is shown below in Tables 1 and 2 below.
# +
#Red
unscaled_red <- select(unscaled_red, volatile_acidity, citric_acid, sulphates, alcohol, rating)
head(unscaled_red)
#White
unscaled_white <- select(unscaled_white, chlorides, density, alcohol, rating)
head(unscaled_white)
unscaled_red %>%
tab_cells(volatile_acidity, citric_acid, sulphates, alcohol) %>%
tab_cols(total(label = "Total| |"), rating) %>%
tab_stat_fun(Mean = w_mean, "# of Observations" = w_n, method = list) %>%
tab_pivot() %>%
set_caption("Table 1: Summary statistics (Red wine)")
unscaled_white %>%
tab_cells(alcohol, density, chlorides) %>%
tab_cols(total(label = "Total| |"), rating) %>%
tab_stat_fun(Mean = w_mean, "# of Observations" = w_n, method = list) %>%
tab_pivot() %>%
set_caption("Table 2: Summary statistics (White wine)")
# -
# ### Methods and Results
# Our analysis uses K-nearest neighbours to train a classifier that can predict the quality of wine as either "good" or "poor". We began our analysis by splitting the data so that 75% of it was used in the training set and 25% was used in the test set. Our rationale for dividing the data up in this way was to use more observations to build and train our classifier, thus making it more accurate. Although the dataset is quite large, we decided that a 75-25 split would be best since we have little background knowledge about wine physiochemical characteristics and their relation with wine quality.
set.seed(1234)
#Red
set_rows_red <- unscaled_red %>%
select(rating) %>%
unlist() %>%
createDataPartition(p = 0.75, list = FALSE)
training_set_red <- unscaled_red %>%
slice(set_rows_red)
test_set_red <- unscaled_red %>%
slice(-set_rows_red)
#White
set_rows_white <- unscaled_white %>%
select(rating) %>%
unlist() %>%
createDataPartition(p = 0.75, list = FALSE)
training_set_white <- unscaled_white %>%
slice(set_rows_white)
test_set_white <- unscaled_white %>%
slice(-set_rows_white)
# Below is a preview of our training sets...
glimpse(training_set_red)
glimpse(training_set_white)
# ... and of our test sets.
glimpse(test_set_red)
glimpse(test_set_white)
# Since k-nn is sensitive to the scale of the predictors, we performed a data transformation to scale and center the data. We took into account that we should only use the training data when creating *scale_transformer*. We did not want the test data to influence any aspect of our model training. We applied the scaling transformer to both our white and red wine data.
#Red
scale_transformer <- preProcess(training_set_red, method = c("center", "scale"))
training_set_red<- predict(scale_transformer, training_set_red)
test_set_red<- predict(scale_transformer, test_set_red)
#White
scale_transformer <- preProcess(training_set_white, method = c("center", "scale"))
training_set_white<- predict(scale_transformer, training_set_white)
test_set_white<- predict(scale_transformer, test_set_white)
# After splitting our data up into a training set and a test set and then scaling it, we next used cross-validation to split up our overall training set in different ways, train and evaluate the classifier for each split, then choose our k-value based on these results. Cross-validation was important because if we just split our overall training set once, our chosen k-value would only depend on whichever data ended up in the validation set.
#
# We chose to conduct 5-fold cross-validation. While keeping in mind that more folds generally results in a more accurate classifier, we decided that sacrificing some accuracy for computational speed was appropriate given that the consequences of our analysis are not significant (Timbers, Campbell and Lee, "Introduction to Data Science").
# +
X_train_red <- training_set_red %>%
select(-rating) %>%
data.frame()
Y_train_red <- training_set_red %>%
select(rating) %>%
unlist()
X_train_white <- training_set_white %>%
select(-rating) %>%
data.frame()
Y_train_white <- training_set_white %>%
select(rating) %>%
unlist()
train_control <- trainControl(method = 'cv',number = 5)
# -
# Next, we create a classifier, adding the *train_control* object to the *trControl* argument to conduct cross-validation. We defined a vector of numbers from 10 to 250 (increments of 5) and ran it using our training data to determine the best k-value to use for our final classifier.
set.seed(1234)
ks <- data.frame(k = seq(from = 10, to = 250, by = 5))
choose_k_red <- train(x = X_train_red,
y = Y_train_red,
method = "knn",
tuneGrid = ks,
trControl = train_control)
choose_k_white <- train(x = X_train_white,
y = Y_train_white,
method = "knn",
tuneGrid = ks,
trControl = train_control)
# To visualize how accuracy changed with k, we created line and point plots that compared k-values and their accuracies. These are shown below in Figures 6 and 7.
k_accuracies_red <- choose_k_red$results %>%
select(k, Accuracy)
k_accuracies_white <- choose_k_white$results %>%
select(k, Accuracy)
choose_k_plot_red <- ggplot(k_accuracies_red, aes(x = k, y = Accuracy)) +
geom_point(color = 'red') +
geom_line(alpha = 0.3) +
xlab("K") +
ggtitle("Figure 6: Accuracies (Red Wine)")
choose_k_plot_white <- ggplot(k_accuracies_white, aes(x = k, y = Accuracy)) +
geom_point(color = 'cyan') +
geom_line(alpha = 0.3) +
xlab("K") +
ggtitle("Figure 7: Accuracies (White Wine)")
choose_k_plot_red
choose_k_plot_white
# Our goal was to choose a k-value that yielded roughly the highest possible accuracy, while being reliable in the presence of uncertainty (i.e. changing the k-value slightly won't greatly reduce accuracy) (Timbers, Campbell and Lee, "Introduction to Data Science"). To do this, we used the following code to find the k-value which yielded the highest possible accuracy, and used the line charts in Figures 6 and 7 to confirm these selections.
best_k_red <- choose_k_red$results %>%
filter(Accuracy == max(Accuracy)) %>%
select(k) %>%
unlist()
best_k_red
best_k_white <- choose_k_white$results %>%
filter(Accuracy == max(Accuracy)) %>%
select(k) %>%
unlist()
best_k_white
# We found that for the red wine data set the optimal k-value was **15**, while for the white wine datset the optimal value was **35**.
#
# Next, we used these k-values to train our final models (classifiers) for red and white wines.
#Red
model_red <- train(x = X_train_red,
y = Y_train_red,
method = 'knn',
tuneGrid = data.frame(k = best_k_red),
trControl = train_control)
#White
model_white <-train(x = X_train_white,
y = Y_train_white,
method = 'knn',
tuneGrid = data.frame(k = best_k_white),
trControl = train_control)
model_white
model_red
# Finally, to determine our final models' effectiveness, we used it to predict the ratings of the observations in our test set, then compare those predictions from the actual ratings to evaluate the models' accuracy.
# +
#Red
X_test_red <- test_set_red %>%
select(-rating) %>%
data.frame()
Y_test_red <- test_set_red %>%
select(rating) %>%
unlist()
Y_test_predicted_red <- predict(object = model_red, X_test_red)
#White
X_test_white <- test_set_white %>%
select(-rating) %>%
data.frame()
Y_test_white <- test_set_white %>%
select(rating) %>%
unlist()
Y_test_predicted_white <- predict(object = model_white, X_test_white)
model_quality_red <- confusionMatrix(data = Y_test_predicted_red, reference = Y_test_red)
model_quality_white<- confusionMatrix(data = Y_test_predicted_white, reference = Y_test_white)
model_quality_red
model_quality_white
# -
# ### Discussion
# Our model classifies the quality of red wines slightly better than it does whites. It shows an **accuracy** of **73.68% for red wines** and **71.73% for whites**. Red wine may have performed slightly better because there is slightly less class imbalance in that dataset. This can be seen in the 'Reference' section in each of the confusion matrices above. However, both classifiers were fairly similar in their ability to correctly predict the quality of wine.
#
# The final models used the most common value for quality, 6, to divide the dataset into the categories of 'poor' (quality < 6) and 'good' (quality ≥ 6). We did this because, as stated previously, there are a lot more normal wines (quality = 5 or 6) than excellent or poor ones. Without taking this step, the overabundance of 6s causes the accuracy of our model to decrease by about 20%, a very significant margin. The red wine dataset was relatively balanced (prevalence = 53.38%) and the model's specificity was only slightly better than its sensitivity (74.73% vs 67.61%). The white wine dataset had about twice as many 'good' values as 'poor' values, and the white wine model had a much higher sensitivity value (86.12%) than specificity value (54.39%), meaning it was significantly better at predicting when a wine is high quality than when it is low quality. If the white wine dataset was perfectly balanced, the model's accuracy would be the average of the sensitivity and specificity values, which is 70.26%, an accuracy value similar to that of the red wine model.
#
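# As a quick check on that last figure, the balanced accuracy of the white wine model is the mean of its sensitivity and specificity: (86.12% + 54.39%) / 2 ≈ 70.26%.
#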
# The results of our analysis are more or less what we expected to find. Initially, we hypothesized that certain characteristics *could* predict the quality of wine, given that they determine the taste, smell and consistency of the wine. We were surprised, however, to find that the best predictors for quality of wine were different for reds and whites. We were also surprised to discover that sugar was not a strong predictor of quality, since intuitively sugar has a lot to do with taste. We also found that different types of acidity affect red and white wines differently. For example, fixed acidity impacts the quality of red wine more, while volatile acidity impacts the quality of white wine more.
#
# Some of the limitations we experienced include a slower model for the white wine data set. This was because K-nearest neighbour classification becomes slower as the data gets larger. Since the original data set was fairly large, running the analyses took some time. In addition, the data set did not contain all of the predictor variables that may have impacted wine quality. Other factors such as location, brand, color, etc. may have impacted quality as well, which is not accounted for in the classification model.
#
# The results of our analysis have the potential to be used by winemakers to adjust the ingredients, recipes and brewing processes they use to achieve high quality wines. Doing so will have a positive impact on profit margins and brand reputation. Furthermore, the attributes we find that are useful for predicting quality of wine can perhaps be used to anticipate the quality of other alcoholic or wine-related beverages.
#
# Additional questions that may arise from this analysis include ones about the relationship between consumer preferences and health effects of wine. For example, given the attributes that are desirable in a wine, why might wine-drinkers experience certain health consequences or benefits? The findings may also lead to questions about why certain physiochemicals such as sugar do not have an impact on wine quality ratings. There are also questions regarding why some physiochemicals impact red wine more than white wine and vice versa. In addition, future research may want to conduct a similar analysis but with more classes, for example "poor", "mediocre" and "good". It is possible that using more classes paints a better picture of which attributes are best used to predict the quality of red and white wines.
# ### References
# Cortez, Paulo, Antonio Cerdeira, Fernando Almeida, Telmo Matos and Jose Reis. "Modeling wine preferences by data mining from physicochemical properties" *Decision Support Systems*, Volume 47 (4): 547-553, 2009.
#
# Horák, M. "Prediction of wine quality from physicochemical properties" 2010 www.buben.piranhacz.cz/wp-content/uploads/PredictionWineQuality_mhorak_VD.pdf. Accessed 7 Apr. 2020
#
# *Kaggle*. Kaggle Inc., 2019, www.kaggle.com/. Accessed 7 Apr. 2020.
#
# Sussman, Zachary. “What's the Difference Between Red and White Wine?” *Food & Wine*, 23 May 2017 www.foodandwine.com/wine/whats-difference-between-red-and-white-wine. Accessed 7 Apr. 2020.
#
# Timbers, Tiffany-Anne, Trevor Campbell, and Melissa Lee. "Introduction to Data Science" 9 Mar. 2020, https://ubc-dsci.github.io/introduction-to-datascience/ Accessed 7 Apr. 2020.
| 21,136 |
/Save the Prisoner!.ipynb
|
f1bf745da547a4ed994ba50b72d7149b2d5210bb
|
[] |
no_license
|
jasonjklim/Hackerrank_practice
|
https://github.com/jasonjklim/Hackerrank_practice
| 0 | 0 | null | null | null | null |
Jupyter Notebook
| false | false |
.py
| 3,787 |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.15.2
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Save the Prisoner!
#
# https://www.hackerrank.com/challenges/save-the-prisoner/problem
# A jail has a number of prisoners and a number of treats to pass out to them. Their jailer decides the fairest way to divide the treats is to seat the prisoners around a circular table in sequentially numbered chairs. A chair number will be drawn from a hat. Beginning with the prisoner in that chair, one candy will be handed to each prisoner sequentially around the table until all have been distributed.
#
# The jailer is playing a little joke, though. The last piece of candy looks like all the others, but it tastes awful. Determine the chair number occupied by the prisoner who will receive that candy.
#
# For example, there are prisoners and pieces of candy. The prisoners arrange themselves in seats numbered to . Let's suppose two is drawn from the hat. Prisoners receive candy at positions . The prisoner to be warned sits in chair number .
#
# Function Description
#
# Complete the saveThePrisoner function in the editor below. It should return an integer representing the chair number of the prisoner to warn.
#
# saveThePrisoner has the following parameter(s):
#
# n: an integer, the number of prisoners
# m: an integer, the number of sweets
# s: an integer, the chair number to begin passing out sweets from
# Input Format
#
# The first line contains an integer, , denoting the number of test cases.
# The next lines each contain space-separated integers:
# - : the number of prisoners
# - : the number of sweets
# - : the chair number to start passing out treats at
#
# Constraints
#
# Output Format
#
# For each test case, print the chair number of the prisoner who receives the awful treat on a new line.
#
# Sample Input 0
#
# 2
# 5 2 1
# 5 2 2
# Sample Output 0
#
# 2
# 3
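#
# The sample outputs follow from one modular-arithmetic step: the last sweet lands on
# chair ((s - 1) + (m - 1)) mod n, plus 1 to convert back to 1-based chair numbers.
# For the first sample (n=5, m=2, s=1): (0 + 1) mod 5 = 1, so chair 2.
# For the second sample (n=5, m=2, s=2): (1 + 1) mod 5 = 2, so chair 3.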
# https://www.hackerrank.com/challenges/save-the-prisoner/problem
def saveThePrisoner(n, m, s):
    # Handing out m sweets starting at chair s wraps around the circle of n chairs,
    # so the last sweet lands on 0-based position (s - 1 + m - 1) % n; add 1 to
    # convert back to 1-based chair numbering.
    # e.g. n=5, m=2, s=2: (1 + 1) % 5 = 2, so chair 3 receives the awful treat.
    rem = ((s - 1 + m - 1) % n) + 1
    return rem
# +
n = 58124642
m =211432460
s = 5064816
saveThePrisoner(n,m,s)
# -
| 2,414 |
/Inference.ipynb
|
a3403fed778d58eca3d09e2f7c6de790845f84fa
|
[] |
no_license
|
abhisingh11/First_One
|
https://github.com/abhisingh11/First_One
| 0 | 0 | null | null | null | null |
Jupyter Notebook
| false | false |
.py
| 11,715 |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.15.2
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import os
from sklearn.metrics import accuracy_score, f1_score, classification_report
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.model_selection import train_test_split
from nltk.stem.snowball import SnowballStemmer
stemmer = SnowballStemmer("english")
import nltk
import random
import numpy as np
from sklearn.ensemble import VotingClassifier
from sklearn.naive_bayes import MultinomialNB
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier
import pickle
# +
def stem_words(doc):
stemmed_doc = ''
for w in doc.split(' '):
stemmed_doc += stemmer.stem(w) + ' '
return stemmed_doc
def train(data, iterations):
models = []
corpus = data['corpus']
labels = data['labels']
for i in range(iterations):
random_indicies = random.sample(data['c_w_corpus'], 500) +random.sample(data['piper_corpus'],500) +random.sample(data['spurgeon_corpus'], 500) +random.sample(data['tozer_corpus'], 500)
X = corpus[random_indicies,:]
y = [labels[i] for i in random_indicies]
print('Generating', i+1)
if i % 2 ==0:
result = generate_SVC_clasifier_wo_vect(X,y)
else:
result = generate_tree_clasifier(X,y)
models.append(result)
with open('models.pkl', 'wb') as f:
pickle.dump(models, f)
def predict(X, y, vect):
models =pickle.load( open( "models.pkl", "rb" ) )
preds = []
final_prediction = []
print('Vectorizing Input')
X = vect.transform(X)
print('Predicting')
for m in models:
preds.append(m.predict(X))
for i in range(len(preds[0])):
this_pred = []
for j in range(len(preds)):
this_pred.append(preds[j][i])
final_prediction.append(most_common(this_pred))
print("Multinomial NB: " , round(accuracy_score(y, final_prediction), 3))
print(classification_report(y, final_prediction, labels=range(0,4)))
def generate_SVC_clasifier_wo_vect(X, y):
print('Fitting SVC')
svc = SVC()#, class_weight = weight_dict)
svc.fit(X, y)
return svc
def generate_tree_clasifier(X, y):
print('Fitting Tree')
tree = DecisionTreeClassifier(max_depth = 10)
tree.fit(X, y)
return tree
def most_common(lst):
return max(set(lst), key=lst.count)
def load_sermons():
data = []
labels = []
label= 0
count=0
authors = ['charles_wesley', 'spurgeon', 'tozer', 'piper']
for author in authors:
for sermon in os.listdir('./sermons/' + author):
#os.remove('./sermons/' + author + '/.DS_Store')
f=open('./sermons/'+ author + '/' + sermon, "r")
if f.mode == 'r':
contents =f.read()
if len(contents) > 100:
contents = stem_words(contents)
data.append(contents)
labels.append(label)
count +=1
print(label, author)
label +=1
count = 0
X_train, X_test, y_train, y_test = train_test_split(data, labels, test_size=0.23)
c_w = []
piper = []
spurgeon=[]
tozer = []
for i in range(len(y_train)):
if y_train[i] == 0:
c_w.append((X_train[i], 0))
elif y_train[i] == 1:
spurgeon.append((X_train[i], 1))
elif y_train[i] == 2:
tozer.append((X_train[i], 2))
elif y_train[i] == 3:
piper.append((X_train[i], 3))
X = random.sample(c_w, 500) +random.sample(piper,500) +random.sample(spurgeon, 500) +random.sample(tozer, 500)+ random.sample(c_w, 500) +random.sample(piper,500) +random.sample(spurgeon, 500) +random.sample(tozer, 500)
np.random.shuffle(X)
X, y = [item[0] for item in X], [item[1] for item in X]
    vect = TfidfVectorizer(stop_words = "english", ngram_range = (1, 4))  # ngram_range must be a (min_n, max_n) tuple
corpus = vect.fit_transform(X)
c_w_corpus = []
piper_corpus = []
spurgeon_corpus =[]
tozer_corpus = []
for i in range(len(y)):
if y[i] == 0:
c_w_corpus.append(i)
elif y[i] == 1:
spurgeon_corpus.append(i)
elif y[i] == 2:
tozer_corpus.append(i)
elif y[i] == 3:
piper_corpus.append(i)
return {'c_w_corpus':c_w_corpus, 'piper_corpus':piper_corpus,
'spurgeon_corpus':spurgeon_corpus, 'tozer_corpus':tozer_corpus, 'corpus':corpus,
'X_test':X_test, 'y_test':y_test, 'labels':y, 'vect': vect}
# -
data = load_sermons()
#models = train(data, 10)
#predict(data['X_test'], data['y_test'], data['vect'])
# +
# Score a new sermon with the saved ensemble: stem it, vectorize it with the fitted
# TfidfVectorizer returned by load_sermons(), and take a majority vote over the
# pickled models (assumes train() has been run so that models.pkl exists).
f = open('./sermons/john_wesley/john_wesley_Duty_of_Constant_Communion.txt', "r")
if f.mode == 'r':
    contents = [stem_words(f.read())]
kyle = data['vect'].transform(contents)
models = pickle.load(open("models.pkl", "rb"))
votes = [m.predict(kyle)[0] for m in models]
most_common(votes)
# -
os.getcwd()
| 5,144 |
/New P.ipynb
|
2bc200426a413ad910c8975331c0308cb3a952f1
|
[] |
no_license
|
VafadarM/-Capstone-Project-Notebook-week-1
|
https://github.com/VafadarM/-Capstone-Project-Notebook-week-1
| 0 | 0 | null | null | null | null |
Jupyter Notebook
| false | false |
.py
| 1,167 |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.15.2
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import pandas as pd
import numpy as np
from pandas import Series, DataFrame
import matplotlib.pyplot as plt
import seaborn as sns
from scipy.stats import norm
from scipy import stats
import warnings
warnings.filterwarnings('ignore')
# %matplotlib inline
df = pd.read_csv('C:\\Users\\Vineet\\Documents\\ISB-H\\pre term\\second course\\assign2\\ISB_assignment__Python_ISB1301444_vineet_kapoor\\Indian_cities.csv')
df.shape
df.describe()
# # describing the data types
df.dtypes
# # Histogram plot of Sex Ratio. The distribution is close to normal.
figure = plt.figure(figsize=(15,8))
plt.hist(df['sex_ratio'],color = 'olive',bins = 30)
plt.xlabel('sex_ratio')
plt.ylabel('count of cities')
plt.title('Histogram of Sex-ratio')
# # Histogram plot of Child Sex Ratio. The distribution is close to normal.
figure = plt.figure(figsize=(15,8))
plt.hist(df['child_sex_ratio'],color = 'skyblue',bins = 30)
plt.xlabel('sex_ratio')
plt.ylabel('count of cities')
plt.title('Histogram of Child-Sex-ratio')
# # the distribution of Child sex ratio
sns.distplot(df['child_sex_ratio'],rug = True)
# ## Kozhikode is the city with highest Sex-Ratio
df.sort_values('sex_ratio',ascending = False)[:1]
# # Bhiwandi is the city with lowest Sex- Ratio
df.sort_values('sex_ratio')[:1]
# ## Mean of Sex- Ratio grouped by States and sorted by sex ratio in ascending order.
df['sex_ratio'].groupby(df['state_name']).mean().sort_values()
#lowest 5 states
# # Table shows top 5 states having lowest sex-ratio. It is sorted by state name --Himachal pradesh has lowest sex- ratio
#
data_sex = pd.DataFrame(df['sex_ratio'].groupby(df['state_name']).mean().sort_values()[:5])
data_sex
# # 5 Cities, which have lowest sex- ratio and making a dataframe out of it.
data_sex_worst_city = pd.DataFrame(df['sex_ratio'].groupby(df['name_of_city']).mean().sort_values()[:5])
data_sex_worst_city['city_name'] = data_sex_worst_city.index.values
# # 5 Cities, which have highest sex- ratio and making a dataframe out of it.
data_sex_best_city = pd.DataFrame(df['sex_ratio'].groupby(df['name_of_city']).mean().sort_values(ascending = False)[:5])
data_sex_best_city['city_name'] = data_sex_best_city.index.values
frammes = [data_sex_worst_city,data_sex_best_city]
data_sex_worst_city = pd.DataFrame(df['sex_ratio'].groupby(df['name_of_city']).mean().sort_values()[:5])
data_sex_best_city = pd.DataFrame(df['sex_ratio'].groupby(df['name_of_city']).mean().sort_values(ascending = False)[:5])
frammes = [data_sex_worst_city,data_sex_best_city]
# ## Concatenated top 5 and bottom 5 cities on the basis of Sex - Ratio
### The top 5 and bottom 5 cities by number of female graduates are analysed next.
total = pd.concat(frammes)
total
# # Best 5 cities with highest number of female graduates
data_female_graduates_best_city = pd.DataFrame(df['female_graduates'].groupby(df['name_of_city']).mean().sort_values(ascending = False)[:5])
data_female_graduates_best_city['city_name'] = data_female_graduates_best_city.index.values
data_female_graduates_best_city
# +
#5 cities with lowest number of female graduates
# -
data_female_graduates_worst_city = pd.DataFrame(df['female_graduates'].groupby(df['name_of_city']).mean().sort_values()[:5])
data_female_graduates_worst_city['city_name'] = data_female_graduates_worst_city.index.values
frames = [data_female_graduates_worst_city,data_female_graduates_best_city]
# # Concatenated the two dataframes and getting total 10 cities.
df2 = pd.concat(frames,ignore_index = True)
df2
data_sex.index.values
# to covert index into column
data_sex['state'] = data_sex.index.values
fig = plt.figure(figsize=(15,8))
ax = fig.add_subplot(111)
N = len(data_sex['sex_ratio'])
y = data_sex['sex_ratio']
x = range(N)
plt.bar(x,y,width = 0.5,color = "darkblue")
ax.set_xticklabels(data_sex['state'])
ax.set_xlabel('states')
ax.set_title('States having lowest Sex-Ratio')
# # Aggregated Sum of population for all the states
##population by states
x = df.groupby('state_name').agg({'population_total':np.sum})
x
x.sort_values('population_total',ascending = False)
##maharashtra has highest total population in given dataset
df['ratio_0-6_pop_total_pop'] = df['0-6_population_total']/df['population_total']
df.drop(columns=['ratio_male_female_pop'], errors='ignore', inplace=True)  # drop the helper column if present (it is not created above)
# # Created a pivot table of ratio of 0-6 total population for all the states.
df.pivot_table(index = "state_name",values = "ratio_0-6_pop_total_pop")
# # DISTRIBUTION of the data of total effective literacy rate is normal.
##plot of effective_literacy_rate_total
sns.distplot(df['effective_literacy_rate_total'])
fig = plt.figure()
res = stats.probplot(df['effective_literacy_rate_total'], plot=plt)
###heatmap of correlation matrix###
corr = df.corr()
fig,ax = plt.subplots(figsize = (12,9))
sns.heatmap(corr,vmax = 1.9)
# ### Created 2 new columns - longitude and latitude from a location variable
##to split columns in a dataframe:
df1 = df.join(df['location'].str.split(',',expand=True).rename(columns = {0:'Latitude',1:'longitude'}))
df1['longitude']
del df['location']  # the raw location string is no longer needed after the split
df['ratio_literate_total_pop'] = df['literates_total']/df['population_total']
chart1 = df[['name_of_city','state_name','ratio_literate_total_pop']]
chart1.sort_values('ratio_literate_total_pop').head(10)
#Top 10 cities with the lowest literacy ratio - 70% are in U.P.
chart1.sort_values('ratio_literate_total_pop').tail(10)
#Top 10 cities with the highest literacy ratio - West Bengal and Kerala account for 40% of this list
df['state_name'].unique()
# States mentioned in dataset
# Filter the rows belonging to a particular state (KERALA)
df[df['state_name'].map(lambda state_name: 'KERALA' in state_name)]
value = ['KERALA','BIHAR']
df_B_K = pd.DataFrame(df[df.state_name.isin(value)])
# # Created a dataframe for only 2 states..Bihar and Kerala...
#bar chart for litearcy rates for kerala and bihar
ax = plt.subplot()
ax.set_ylabel('effective_literacy_rate_total')  # call set_ylabel rather than overwriting it
ax.set_title('Comparison of Bihar and Kerala')
df_B_K.groupby('state_name').mean()['effective_literacy_rate_total'].plot(kind='bar',figsize=(10,8), ax = ax,color = ('coral','aquamarine'))
###top 2 and bottom 2 states with child sex ratio
data_female_child_sex_worst_states = pd.DataFrame(df['child_sex_ratio'].groupby(df['state_name']).mean().sort_values()[:2])
data_female_child_sex_worst_states['state_name'] = data_female_child_sex_worst_states.index.values
###top 2 and bottom 2 states with child sex ratio
data_female_child_sex_best_states = pd.DataFrame(df['child_sex_ratio'].groupby(df['state_name']).mean().sort_values(ascending =False)[:2])
data_female_child_sex_best_states['state_name'] = data_female_child_sex_best_states.index.values
frame1 = [data_female_child_sex_best_states,data_female_child_sex_worst_states]
### Mizoram and Assam have a good child sex ratio, while Haryana and Punjab have the lowest
df3 = pd.concat(frame1 ,ignore_index=True)
df3
df['ratio_male_female_eff_literacy_rate']=df['effective_literacy_rate_male']/df['effective_literacy_rate_female']
# ## Analyse the cities with the highest ratio of male to female effective literacy rates.
data_ratio_male_female_eff_literacy_rate_worst_cities = pd.DataFrame(df['ratio_male_female_eff_literacy_rate'].groupby(df['name_of_city']).mean().sort_values(ascending =False)[:10])
data_ratio_male_female_eff_literacy_rate_worst_cities
data_ratio_male_female_eff_literacy_rate_worst_states = pd.DataFrame(df['ratio_male_female_eff_literacy_rate'].groupby(df['state_name']).mean().sort_values(ascending =False)[:10])
data_ratio_male_female_eff_literacy_rate_worst_states
# # for top 10 cities with highest total graduates - compare total literates and child sex ratio
data_female_child_sex_best_cities = pd.DataFrame(df['total_graduates'].groupby(df['name_of_city']).mean().sort_values(ascending =False)[:10])
data_female_child_sex_best_cities
data_female_child_sex_best_cities['name_of_city'] = data_female_child_sex_best_cities.index.values  # use this frame's own index (10 cities)
# ##Delhi has highest number of graduates.
## Dataframe contains only Haryana state and I am analysing child sex ratio of different cities in Haryana
value = ['HARYANA']
df_H = pd.DataFrame(df[df.state_name.isin(value)])
df_H
c = df_H['child_sex_ratio'].mean()  # mean child sex ratio across Haryana cities, used as the cutoff
ax = plt.subplot()
ax.set_ylabel('child_sex_ratio')
ax.set_title('child sex ratio of different cities in Haryana')
df_H.groupby('name_of_city').mean()['child_sex_ratio'].plot(kind='bar',figsize=(15,8), ax = ax,color = 'g')
ax.axhline(c, ls='--', color='r')  # dashed red line marks the cutoff (mean)
# # A red line marks the cutoff value. Bahadurgarh, Bhiwani, Karnal, Rewari, Rohtak, Sirsa and Sonipat are the cities with a low child sex ratio.
# # Ratio of literates / total population
ax = plt.subplot()
ax.set_title('ratio of literates/population for different states')
df.groupby('state_name').mean()['ratio_literate_total_pop'].plot(kind='bar',figsize=(15,8), ax = ax,color = 'purple')
ax.axhline(df['ratio_literate_total_pop'].mean(), ls='--', color='b') # for cutoff value by mean
df.sort_values('sex_ratio')[:20] # bottom 20 cities with lowest sex ratio
# # Scatterplot total population and Sex ratio
sns.jointplot(np.sqrt(df['sex_ratio']),np.sqrt(df['population_total']),color='crimson');
# # Scatterplot between total graduates and female graduates shows that it is highly correlated
with sns.axes_style("white"):
sns.jointplot(df['female_graduates'],df['total_graduates'],color='darkgreen');
# # Kernel Density plot between child sex ratio and effective literacy rate total
g = sns.jointplot(df['effective_literacy_rate_total'],df['child_sex_ratio'],color='r',kind = 'kde');
g.plot_joint(plt.scatter, c="w", s=5, linewidth=1, marker="+")
g.ax_joint.collections[0].set_alpha(0)
g.set_axis_labels("$X$", "$Y$");
| 10,015 |
/Week_4/2.2.EDA_solar.ipynb
|
ed59f2c1142e82eaa33364aa9b3b91c38377d420
|
[] |
no_license
|
shinesunshine/Coursera_Capstone
|
https://github.com/shinesunshine/Coursera_Capstone
| 0 | 1 | null | null | null | null |
Jupyter Notebook
| false | false |
.py
| 9,982 |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.15.2
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
# # Organization of the lecture
# Below you find the outline of the lectures. The numbering is not relating to real lectures but to a theme which typically lasts 1 hour of lecture time.
#
# ## [Motivation](Motivation.ipynb)
#
# ## [1. Introduction](lecture 1.ipynb) [(pdf)](pdfs/lecture 1.pdf)
# * Acoustic variables, Equation of State (EOS)
# * Linear wave equation (1-dimensional)
# * Linear wave in cartesian coordinates
# * velocity potential
# * *Speed of sound (as a function of temperature)*
# * *Simulation 1d-wave*
#
#
# ## [2. Planar Waves and 2d-waves](lecture 2.ipynb) [(pdf)](pdfs/lecture 2.pdf)
# * Harmonic Plane Waves
# * *Simulation of 2d-waves*
#
# ## [3. Energy, Intensity, and Impedance](lecture 3.ipynb) [(pdf)](pdfs/lecture 3.pdf)
# * Energy
# * Intensity
# * Impedance
# * decibel scale
#
# ## [4. Spherical Wave](lecture 4.ipynb) [(pdf)](pdfs/lecture 4.pdf)
# * Derivation of the spherical wave equation and solution
# * *Simulation of a spherical wave*
#
#
#
106c67-0e56-49aa-de52-2f90a7a39656"
df=pd.read_csv('/content/drive/MyDrive/Retail Sales Time Series Forecasting/missing_value_handled/missing_value_handled.csv')
df=df.sort_values(by='Date')
df.head()
# + colab={"base_uri": "https://localhost:8080/"} id="SgGCpMpcep8D" outputId="74af3516-dc79-452c-cf9d-a16ba640930d"
df.dtypes
# + colab={"base_uri": "https://localhost:8080/", "height": 220} id="-WcoLz6YfIxD" outputId="338a18ed-36ab-4376-ce13-4dba99a6d8b3"
df= df[df['Store'] == 2]
df=df[["Date","Sales"]]
df['Date']=pd.to_datetime(df['Date'])
df.set_index('Date',inplace=True)
df.head()
# + colab={"base_uri": "https://localhost:8080/", "height": 277} id="5aXv3KKndZmP" outputId="0ca7166e-0440-4c7d-8653-402e9a64b51f"
df.describe()
# + colab={"base_uri": "https://localhost:8080/", "height": 307} id="hiYbKEMZdZmT" outputId="ad9e295b-bb61-4709-bdc3-25dcf18f2b8e"
df.plot()
# + colab={"base_uri": "https://localhost:8080/", "height": 307} id="auNYdibUcoO_" outputId="8f988cee-f3f7-4e08-f517-1d2714a85362"
df.plot(style='k.')
# + colab={"base_uri": "https://localhost:8080/"} id="c59hIFZXf0cp" outputId="7714d092-7dbe-4d19-cada-e617e3a8bb13"
df.shape
# + colab={"base_uri": "https://localhost:8080/", "height": 313} id="sJ9Igrunc2Kg" outputId="3f6baf56-0cf6-4796-eea7-943feaed8f83"
df.hist()
# + colab={"base_uri": "https://localhost:8080/", "height": 281} id="Z4gQ8m9igfsk" outputId="94ddd103-aaad-45f9-9a0d-041375b58a24"
df.plot(kind='kde')
# + colab={"base_uri": "https://localhost:8080/", "height": 278} id="riBQpxQ9iB3Q" outputId="31ee0f2b-0e17-46a6-cb1d-a845ff5e4e01"
lag_plot(df)
plt.show()
# + colab={"base_uri": "https://localhost:8080/", "height": 420} id="a6xX7OibdZmW" outputId="4a0f61f5-6155-4c1a-f7e5-959e43bf6a89"
# smoothing
moving_average_df=df.rolling(window=20).mean()
moving_average_df
# + colab={"base_uri": "https://localhost:8080/", "height": 307} id="UNguBinKdZmb" outputId="36273b88-72d7-4abc-8434-8645ca505f88"
moving_average_df.plot()
# + colab={"base_uri": "https://localhost:8080/"} id="EF39mm_GdZme" outputId="f0ae2c0f-9d1e-40cf-b323-3244d661a079"
sm.stats.durbin_watson(df) # Durbin-Watson statistic for autocorrelation (values near 2 indicate little autocorrelation)
# + colab={"base_uri": "https://localhost:8080/", "height": 498} id="UAQ-p4hndZmg" outputId="c9100814-bcfa-4f78-e698-aa9ab8b7a5c4"
# acf and pacf plots
# %matplotlib inline
fig = plt.figure(figsize=(12,8))
ax1 = fig.add_subplot(211)
fig = sm.graphics.tsa.plot_acf(df.values.squeeze(), lags=40, ax=ax1)
ax2 = fig.add_subplot(212)
fig = sm.graphics.tsa.plot_pacf(df, lags=40, ax=ax2)
# + colab={"base_uri": "https://localhost:8080/"} id="ZkBXsgztneV5" outputId="5d58adcd-dde1-458d-e826-a6056151aded"
# Test for stationarity
test_result=adfuller(df['Sales'])
test_result
#Ho: It is non stationary
#H1: It is stationary
def adfuller_test(sales):
result=adfuller(sales)
labels = ['ADF Test Statistic','p-value','#Lags Used','Number of Observations Used']
for value,label in zip(result,labels):
print(label+' : '+str(value) )
print('Critical Values:')
for key, value in result[4].items():
print(key, value)
if result[1] <= 0.05:
print("strong evidence against the null hypothesis(Ho), reject the null hypothesis. Data has no unit root and is stationary")
else:
print("weak evidence against null hypothesis, time series has a unit root, indicating it is non-stationary ")
adfuller_test(df['Sales'])
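# If the test above had instead failed to reject the null, the usual remedy is differencing
# before fitting ARIMA. A minimal sketch of that check (an addition, reusing the helper defined above):
sales_diff = df['Sales'].diff().dropna()  # first-order difference of the series
adfuller_test(sales_diff)                 # re-run the same stationarity test on the differenced data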
# + id="yExalXnDdZmn"
#test data 2015-07-01 to 2015-07-31
training_data,test_data=df[df.index<'2015-07-01'],df[df.index>='2015-07-01']
# + colab={"base_uri": "https://localhost:8080/"} id="w1hCAL_Ptq7k" outputId="83588a6a-e764-4a57-937d-977aad941014"
print(df.shape)
print(training_data.shape)
print(test_data.shape)
# + colab={"base_uri": "https://localhost:8080/", "height": 298} id="DEvNsuwddZmv" outputId="d04acc7f-2178-4e58-cc47-083b1e47d556"
# ARIMA
arima= ARIMA(training_data,order=(1,1,1))
model=arima.fit()
model.summary()
# + colab={"base_uri": "https://localhost:8080/"} id="3QS_SKTRdZm1" outputId="a7d77484-d94f-4bd3-b4d3-4c416ea4e4a2"
model.aic
# + colab={"base_uri": "https://localhost:8080/", "height": 548} id="NQEG4LRydZm4" outputId="262d9365-2703-460c-a908-b14152ff2e5b"
pred= model.forecast(steps=31)[0]
test_data['predict']=pred
test_data[['Sales','predict']].plot(figsize=(20,8))
# + colab={"base_uri": "https://localhost:8080/"} id="2s2eFkgSOjJQ" outputId="2472aa87-d448-4fd6-ad2a-b19fa8258144"
# SARIMA
p = d = q = range(0, 2) # Define the p, d and q parameters to take the values 0 or 1
pdq = list(itertools.product(p, d, q)) # Generate all different combinations of p, d and q triplets
seasonal_pdq = [(x[0], x[1], x[2], 12) for x in list(itertools.product(p, d, q))]
print('Examples of parameter combinations for Seasonal ARIMA...')
print('SARIMAX: {} x {}'.format(pdq[1], seasonal_pdq[1]))
print('SARIMAX: {} x {}'.format(pdq[1], seasonal_pdq[2]))
print('SARIMAX: {} x {}'.format(pdq[2], seasonal_pdq[3]))
print('SARIMAX: {} x {}'.format(pdq[2], seasonal_pdq[4]))
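# The combinations above are usually scored with an information criterion to pick an order.
# A minimal sketch of that selection loop (an addition; it reuses `pdq`, `seasonal_pdq` and
# `training_data` from above, can be slow, and silences warnings only for readability):
import warnings
warnings.filterwarnings('ignore')
best_aic, best_cfg = float('inf'), None
for order in pdq:
    for seasonal_order in seasonal_pdq:
        try:
            res = sm.tsa.statespace.SARIMAX(training_data['Sales'],
                                            order=order,
                                            seasonal_order=seasonal_order).fit(disp=False)
            if res.aic < best_aic:
                best_aic, best_cfg = res.aic, (order, seasonal_order)
        except Exception:
            continue
print('Lowest AIC:', best_aic, 'for', best_cfg)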
# + colab={"base_uri": "https://localhost:8080/", "height": 416} id="mqLKkQ-HLXpI" outputId="8d8aa39d-0468-47af-99f0-847819639953"
model=sm.tsa.statespace.SARIMAX(df['Sales'],order=(1, 1, 1),seasonal_order=(1,1,1,12))
results=model.fit()
results.summary()
# + colab={"base_uri": "https://localhost:8080/", "height": 548} id="80hcKPt9NyWs" outputId="bd6a8142-3ec0-4693-d14a-c8e7f36509c5"
test_data['forecast']=results.predict(start=-31,dynamic=True)
test_data[['Sales','forecast']].plot(figsize=(12,8))
# + colab={"base_uri": "https://localhost:8080/", "height": 967} id="YnLVCjC3tq7n" outputId="7b8fe986-333e-4b25-a7c4-46e520934c0c"
test_data
# + colab={"base_uri": "https://localhost:8080/"} id="UDz5ds6Utq7o" outputId="7cedaef4-46a0-4fc0-9bc1-2bb9c5752414"
print("RMSE =",np.sqrt(mean_squared_error(test_data['Sales'], test_data['predict'])))
# + id="eAdt0lxNtq7o"
# + id="d4Mr6p_PuT2O"
| 7,195 |
/code/2-Modeling-for-Train-Data-and-Results-From-Test-Data.ipynb
|
d24f4ba696b283deb28781a078040084cbed5828
|
[] |
no_license
|
ynusinovich/Housing-Price-Predictions
|
https://github.com/ynusinovich/Housing-Price-Predictions
| 1 | 0 | null | null | null | null |
Jupyter Notebook
| false | false |
.py
| 353,640 |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.15.2
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Data
#
# +
import pandas as pd
import os
DATA = os.path.join(os.getcwd(), 'data', 'ml-latest-small')
# -
movies_df = pd.read_csv(os.path.join(DATA,'movies.csv'))
movies_df
ratings_df = pd.read_csv(os.path.join(DATA,'ratings.csv'))
ratings_df
ntation.txt)
#
# ---
# ## Contents
# - [Import Libraries](#Import-Libraries)
# - [Load Data](#Load-Data)
# - [Set Up X and Y, and Training and Test Data](#Set-Up-X-and-Y,-and-Training-and-Test-Data)
# - [KNN Regression](#KNN-Regression)
# - [Ridge Regression](#Ridge-Regression)
# - [Lasso Regression](#Lasso-Regression)
# - [Visualizations](#Visualizations)
# - [Apply Model with Best Score to Test Data to Predict the Sale Prices, Drop Extra Columns, and Save Model and Predictions](#Apply-Model-with-Best-Score-to-Test-Data-to-Predict-the-Sale-Prices,-Drop-Extra-Columns,-and-Save-Model-and-Predictions)
# ## Import Libraries
# +
import pandas as pd
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
from sklearn.linear_model import LinearRegression, Ridge, RidgeCV, Lasso, LassoCV
from sklearn.preprocessing import StandardScaler, PolynomialFeatures, LabelEncoder
from sklearn.model_selection import train_test_split, cross_val_score, GridSearchCV
from sklearn.metrics import r2_score
from sklearn.neighbors import KNeighborsClassifier, KNeighborsRegressor
from matplotlib.ticker import FormatStrFormatter, FuncFormatter
from joblib import dump, load
pd.set_option('display.max_columns', 100)
pd.set_option('display.max_rows', 100)
# -
# ## Load Data
df = pd.read_csv("../datasets/train_clean.csv", dtype = {"Id":str, "PID":str})
df.head()
# ## Set Up X and Y, and Training and Test Data
X = df.drop(['Id', 'PID', 'SalePrice'], axis = 'columns')
y = df['SalePrice']
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.2)
# ## KNN Regression
# ### Check score for standard values of hyperparameters
knn = KNeighborsRegressor()
knn.fit(X_train, y_train)
knn.score(X_test, y_test)
# ### Tune hyperparameters
knn_params = {
'n_neighbors': range(1, 101, 10),
'p': [1, 2, 3],
'weights': ['distance', 'uniform']
}
knn_gridsearch = GridSearchCV(KNeighborsRegressor(), knn_params, cv = 5, verbose = 0)
knn_gridsearch.fit(X_train, y_train)
knn_gridsearch.best_score_
knn_gridsearch.best_params_
best_knn = knn_gridsearch.best_estimator_
best_knn.score(X_test, y_test)
# ## Ridge Regression
# ### Check score for standard values of hyperparameters
alpha = 10.0
ridge_model = Ridge(alpha = alpha)
cross_val_score(ridge_model, X, y, cv = 5).mean()
# ### Tune hyperparameters
# +
# r_alpha_list = np.logspace(0, 5, 200)
# ridge_model = RidgeCV(alphas = r_alpha_list, cv = 5, store_cv_values = True)
# ridge_model = ridge_model.fit(X, y)
# ridge_model.alpha_
# +
# ridge_opt_model = Ridge(alpha = ridge_model.alpha_)
# cross_val_score(ridge_opt_model, X, y, cv = 5).mean()
# -
r_params = {
'alpha': np.logspace(0, 2, 400)
}
r_gridsearch = GridSearchCV(Ridge(), r_params, cv = 5, verbose = 0)
r_gridsearch.fit(X_train, y_train)
r_gridsearch.best_score_
r_gridsearch.best_params_
best_r = r_gridsearch.best_estimator_
best_r.score(X_test, y_test)
# ## Lasso Regression
# ### Check score for standard values of hyperparameters
alpha = .01
lasso_model = Lasso(alpha = alpha)
cross_val_score(lasso_model, X, y, cv = 5).mean()
# ### Tune hyperparameters
# +
# l_alpha_list = np.arange(0.001, 0.15, 0.0025)
# lasso_model = LassoCV(alphas = l_alpha_list, cv = 5)
# lasso_model = lasso_model.fit(X, y)
# lasso_model.alpha_
# +
# lasso_model = Lasso(alpha = lasso_model.alpha_)
# cross_val_score(lasso_model, X, y, cv = 5).mean()
# -
l_params = {
'alpha': np.arange(194, 198, .01)
}
l_gridsearch = GridSearchCV(Lasso(), l_params, cv = 5, verbose = 0)
l_gridsearch.fit(X_train, y_train)
l_gridsearch.best_score_
l_gridsearch.best_params_
best_l = l_gridsearch.best_estimator_
best_l.score(X_test, y_test)
best_l.coef_
# ## Visualizations
coeff_series = pd.Series(best_l.coef_, index = df.columns.drop(["Id","PID","SalePrice"]))
abs(coeff_series).sort_values()
coeff_series.sort_values()
# +
col_names = ["Model","Best Score on Training Data", "Best Score on Test Data"]
col_vals = [["KNN Regression", knn_gridsearch.best_score_, best_knn.score(X_test, y_test)],
["Ridge Regression", r_gridsearch.best_score_,best_r.score(X_test, y_test)],
["Lasso Regression", l_gridsearch.best_score_, best_l.score(X_test, y_test)]]
model_df = pd.DataFrame(col_vals, columns = col_names)
model_df.plot(x = "Model", y = ["Best Score on Training Data", "Best Score on Test Data"], kind = "bar", figsize = (20, 10))
plt.xlabel("Type of Model", fontsize = 20)
plt.xticks(rotation = "horizontal")
plt.ylabel("R^2 Scores", fontsize = 20)
plt.yticks(np.arange(0.83,0.92,0.01))
plt.ylim(0.83,0.91)
plt.title("R^2 Scores on Training and Test Data for Each Model", fontsize = 30)
plt.legend((model_df.columns[1], model_df.columns[2]))
plt.show()
# -
l_gridsearch_df = pd.DataFrame(l_gridsearch.cv_results_)
l_gridsearch_df.plot(x = 'param_alpha', y = 'mean_test_score', figsize = (15,10))
ax = plt.gca()
ax.get_legend().remove()
plt.xlabel("Tuning of Alpha Parameter", fontsize = 20)
plt.ylabel("R^2 Scores", fontsize = 20)
ax.yaxis.set_major_formatter(FormatStrFormatter('%.7f'))
plt.title("R^2 Score Based on Alpha Parameter Values", fontsize = 30)
plt.show()
print('The best cross-validation R^2 score for the train dataset is: {}'.format(l_gridsearch.best_score_))
print('The best cross-validation R^2 score for the test dataset is: {}'.format(best_l.score(X_test, y_test)))
coeff_series.plot.bar(figsize = (20,10))
plt.xlabel('House Feature', fontsize = 15)
plt.ylabel('Effect on Price ($)', fontsize = 15)
plt.yticks(np.arange(-10000,20001,5000))
ax = plt.gca()
ax.get_yaxis().set_major_formatter(FuncFormatter(lambda x, p: format(int(x), ',')))
plt.title('Effect of House Features on Price, Based on Feature Standard Deviations from Mean', fontsize = 25)
plt.vlines(x = df.columns.drop(["Id","PID","SalePrice"]), ymin = -10000, ymax = 20000, linestyles = "dotted")
plt.show()
# +
predictions = best_l.predict(X_test)
residuals = y_test - predictions
plt.figure(figsize = (15, 10))
plt.scatter(predictions, residuals)
plt.plot([0,500000], [0,0], color = "red")
plt.xticks(np.arange(0,500001,50000))
plt.yticks(np.arange(-150000,150001,50000))
plt.xlabel('Housing Price Predictions ($)', fontsize = 15)
plt.ylabel('Housing Price Errors ($)', fontsize = 15)
ax = plt.gca()
ax.get_xaxis().set_major_formatter(FuncFormatter(lambda x, p: format(int(x), ',')))
ax.get_yaxis().set_major_formatter(FuncFormatter(lambda x, p: format(int(x), ',')))
plt.title('Errors for Our Best Model')
plt.show()
# -
# ## Apply Model with Best Score to Test Data to Predict the Sale Prices, Drop Extra Columns, and Save Model and Predictions
df2 = pd.read_csv("../datasets/test_clean.csv", dtype = {"Id":str, "PID":str})
df2.head()
X = df2.drop(['Id', 'PID'], axis = 'columns')
y = best_l.predict(X)
df2["SalePrice"] = y
col_drop_list = df2.columns.drop(["Id","SalePrice"])
df2.drop(col_drop_list, axis = 1, inplace = True)
dump(best_l, '../models/lasso_model_tuned.joblib')
df2.to_csv("../datasets/test_results.csv", index = False)
state_union.sents(fieldid)
else:
all_sentences_by_pres_state_union[pres] += state_union.sents(fieldid)
# +
from nltk.lm import Laplace
models = dict()
orders = [1,2,3]
for pres, sents in all_sentences_by_pres_state_union.items():
models[pres] = dict()
print(pres)
for order in orders:
train, vocab = padded_everygram_pipeline(order, sents)
laplace_model = Laplace(order)
laplace_model.fit(train, vocab)
models[pres][order] = laplace_model
# -
from tqdm import tqdm_notebook as tqdm
# +
word_proba_by_pres = dict()
vocab = Vocabulary(state_union.words(), unk_cutoff=5)
for pres, model in models.items():
print(pres)
word_proba_by_pres[pres] = []
for word in tqdm(vocab):
word_proba_by_pres[pres].append(model[1].score(word))
# -
from scipy.stats import entropy
# +
kl_by_pres_couple = dict()
for pres1, proba1 in word_proba_by_pres.items():
for pres2, proba2 in word_proba_by_pres.items():
if pres1 is not pres2:
key = tuple(sorted((pres1,pres2)))
if key not in kl_by_pres_couple:
kl_by_pres_couple[key]= entropy(proba1,proba2)
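# -
# A note on what `entropy(p, q)` computes here (standard definition, added for clarity):
# called with two arguments, scipy's `entropy` returns the Kullback-Leibler divergence
#
# $D_{KL}(P \,\|\, Q) = \sum_i p_i \log \frac{p_i}{q_i}$
#
# so smaller values mean the two presidents' unigram distributions are more alike.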
# +
import operator
kl_by_pres_couple = sorted(kl_by_pres_couple.items(), key=operator.itemgetter(1))
print(kl_by_pres_couple)
# -
# #### Question 5.a
most_similar = kl_by_pres_couple[0]
most_dissimilar = kl_by_pres_couple[-1]
print("Most similar presidents: {:10s} and {} | {}".format(most_similar[0][0], most_similar[0][1], most_similar[1]))
print("Most dissimilar presidents: {:10s} and {} | {}".format(most_dissimilar[0][0], most_dissimilar[0][1], most_dissimilar[1]))
# #### Question 5.b
| 9,197 |
/Docker/install Docker on Mac.ipynb
|
bacb3803131731bf5b1702ced088443d60652724
|
[] |
no_license
|
megamindbrian/jupytangular
|
https://github.com/megamindbrian/jupytangular
| 1 | 1 | null | null | null | null |
Jupyter Notebook
| false | false |
.js
| 3,414 |
// ---
// jupyter:
// jupytext:
// text_representation:
// extension: .js
// format_name: light
// format_version: '1.5'
// jupytext_version: 1.15.2
// kernel_info:
// name: node_nteract
// kernelspec:
// display_name: Javascript (Node.js)
// language: javascript
// name: javascript
// ---
// # How to install Docker on Mac?
//
// [Download Docker](https://download.docker.com/owx/stable/InstallDocker.dmg)
// and install then skip to step 3.
//
// ## 1
//
// Is Docker already installed?
//
// + inputHidden=false outputHidden=false
$$.async();
var exec = require('child_process').exec;
var installed = false;
var docker = exec('docker ps', (err, stdout, stderr) => {
if (stdout.indexOf('not found') > -1) {
$$.done('Docker not found, installing');
} else {
installed = true;
$$.done('Docker is already installed');
}
});
// -
// Where do I download docker?
// + inputHidden=false outputHidden=false active=""
// curl -sL http://git.io/vsk46 | \
// sed -e "s?{{docker-machine}}?$(which docker-machine)?" \
// -e "s?{{user-path}}?$(echo $PATH)?" \
// >~/Library/LaunchAgents/com.docker.machine.default.plist && \
// launchctl load ~/Library/LaunchAgents/com.docker.machine.default.plist
//
// -
// What other tools do I need?
//
// Install XCode, nodejs, and Docker with elevated privileges.
// + inputHidden=false outputHidden=false
$$.async();
// /usr/bin/osascript -e 'do shell script "/path/to/myscript args 2>&1 etc" with administrator privileges'
var exec = require('child_process').exec;
var installCmd = exec('npm install rimraf JSONStream', () => {
$$.done('installed basic node utilities, rimraf, JSONStream, etc');
});
installCmd.stdout.on('data', (d) => console.log(d));
installCmd.stderr.on('data', (d) => console.log(d));
| 1,845 |
/Chapter04/Cosine Similarity.ipynb
|
deaafeb9939eb0e2bb3066494b46f7e2db06afee
|
[
"MIT"
] |
permissive
|
oriankeith001/Hands-On-Python-Natural-Language-Processing
|
https://github.com/oriankeith001/Hands-On-Python-Natural-Language-Processing
| 0 | 0 | null | null | null | null |
Jupyter Notebook
| false | false |
.py
| 12,228 |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.15.2
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Measuring Cosine Similarity between Document Vectors
import nltk
nltk.download('stopwords')
nltk.download('wordnet')
from nltk.corpus import stopwords
from nltk.stem.porter import PorterStemmer
from nltk.stem.snowball import SnowballStemmer
from nltk.stem.wordnet import WordNetLemmatizer
import pandas as pd
import re
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer
# ## Building a corpus of sentences
sentences = ["We are reading about Natural Language Processing Here",
"Natural Language Processing making computers comprehend language data",
"The field of Natural Language Processing is evolving everyday"]
corpus = pd.Series(sentences)
corpus
# ## Data preprocessing pipeline
def text_clean(corpus, keep_list):
'''
Purpose : Function to keep only alphabets, digits and certain words (punctuations, qmarks, tabs etc. removed)
Input : Takes a text corpus, 'corpus' to be cleaned along with a list of words, 'keep_list', which have to be retained
even after the cleaning process
Output : Returns the cleaned text corpus
'''
cleaned_corpus = pd.Series()
for row in corpus:
qs = []
for word in row.split():
if word not in keep_list:
p1 = re.sub(pattern='[^a-zA-Z0-9]',repl=' ',string=word)
p1 = p1.lower()
qs.append(p1)
else : qs.append(word)
cleaned_corpus = cleaned_corpus.append(pd.Series(' '.join(qs)))
return cleaned_corpus
def lemmatize(corpus):
    lem = WordNetLemmatizer()
    corpus = [[lem.lemmatize(word, pos = 'v') for word in doc] for doc in corpus]
    return corpus
def stem(corpus, stem_type = None):
    if stem_type == 'snowball':
        stemmer = SnowballStemmer(language = 'english')
        corpus = [[stemmer.stem(word) for word in doc] for doc in corpus]
    else :
        stemmer = PorterStemmer()
        corpus = [[stemmer.stem(word) for word in doc] for doc in corpus]
    return corpus
def stopwords_removal(corpus):
wh_words = ['who', 'what', 'when', 'why', 'how', 'which', 'where', 'whom']
stop = set(stopwords.words('english'))
for word in wh_words:
stop.remove(word)
    corpus = [[word for word in doc.split() if word not in stop] for doc in corpus]
return corpus
def preprocess(corpus, keep_list, cleaning = True, stemming = False, stem_type = None, lemmatization = False, remove_stopwords = True):
'''
Purpose : Function to perform all pre-processing tasks (cleaning, stemming, lemmatization, stopwords removal etc.)
Input :
'corpus' - Text corpus on which pre-processing tasks will be performed
'keep_list' - List of words to be retained during cleaning process
'cleaning', 'stemming', 'lemmatization', 'remove_stopwords' - Boolean variables indicating whether a particular task should
be performed or not
'stem_type' - Choose between Porter stemmer or Snowball(Porter2) stemmer. Default is "None", which corresponds to Porter
Stemmer. 'snowball' corresponds to Snowball Stemmer
Note : Either stemming or lemmatization should be used. There's no benefit of using both of them together
Output : Returns the processed text corpus
'''
if cleaning == True:
corpus = text_clean(corpus, keep_list)
if remove_stopwords == True:
corpus = stopwords_removal(corpus)
else :
        corpus = [[word for word in doc.split()] for doc in corpus]
if lemmatization == True:
corpus = lemmatize(corpus)
if stemming == True:
corpus = stem(corpus, stem_type)
corpus = [' '.join(x) for x in corpus]
return corpus
# Preprocessing with Lemmatization here
preprocessed_corpus = preprocess(corpus, keep_list = [], stemming = False, stem_type = None,
lemmatization = True, remove_stopwords = True)
preprocessed_corpus
# ## Cosine Similarity Calculation
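# A quick reminder of the standard definition the helper below implements (this note is an
# addition for clarity): for document vectors $u$ and $v$,
#
# $\text{cosine}(u, v) = \dfrac{u \cdot v}{\lVert u \rVert \, \lVert v \rVert}$
#
# which is 1 when the vectors point in the same direction and 0 when the documents share no weighted terms.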
def cosine_similarity(vector1, vector2):
vector1 = np.array(vector1)
vector2 = np.array(vector2)
return np.dot(vector1, vector2) / (np.sqrt(np.sum(vector1**2)) * np.sqrt(np.sum(vector2**2)))
# ## CountVectorizer
vectorizer = CountVectorizer()
bow_matrix = vectorizer.fit_transform(preprocessed_corpus)
print(vectorizer.get_feature_names())
print(bow_matrix.toarray())
# ## Cosine similarity between the document vectors built using CountVectorizer
for i in range(bow_matrix.shape[0]):
for j in range(i + 1, bow_matrix.shape[0]):
print("The cosine similarity between the documents ", i, "and", j, "is: ",
cosine_similarity(bow_matrix.toarray()[i], bow_matrix.toarray()[j]))
# ## TfidfVectorizer
vectorizer = TfidfVectorizer()
tf_idf_matrix = vectorizer.fit_transform(preprocessed_corpus)
print(vectorizer.get_feature_names())
print(tf_idf_matrix.toarray())
print("\nThe shape of the TF-IDF matrix is: ", tf_idf_matrix.shape)
# ## Cosine similarity between the document vectors built using TfidfVectorizer
for i in range(tf_idf_matrix.shape[0]):
for j in range(i + 1, tf_idf_matrix.shape[0]):
print("The cosine similarity between the documents ", i, "and", j, "is: ",
cosine_similarity(tf_idf_matrix.toarray()[i], tf_idf_matrix.toarray()[j]))
| 5,654 |
/.ipynb_checkpoints/landmarks_localization-checkpoint.ipynb
|
76cd277d7cce5a9f6b3434b14b125fc41968159c
|
[] |
no_license
|
PeterTF656/The-Mountain-Backend
|
https://github.com/PeterTF656/The-Mountain-Backend
| 0 | 0 | null | null | null | null |
Jupyter Notebook
| false | false |
.py
| 113,206 |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.15.2
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import mediapipe as mp
import cv2
import numpy as np
from matplotlib import pyplot as plt
# +
def get_landmarks(img_path):
mp_drawing = mp.solutions.drawing_utils
mp_face_mesh = mp.solutions.face_mesh
drawing_spec = mp_drawing.DrawingSpec(thickness=1, circle_radius=1)
with mp_face_mesh.FaceMesh(
static_image_mode=True,
max_num_faces=1,
min_detection_confidence=0.5) as face_mesh:
#for idx, file in enumerate(file_list):
image = cv2.imread(img_path)
# Convert the BGR image to RGB before processing.
results = face_mesh.process(cv2.cvtColor(image, cv2.COLOR_BGR2RGB))
# Print and draw face mesh landmarks on the image.
if not results.multi_face_landmarks:
            raise ValueError(f'No face landmarks detected in {img_path}')
annotated_image = image.copy()
for face_landmarks in results.multi_face_landmarks:
#print('face_landmarks:', face_landmarks)
mp_drawing.draw_landmarks(
image=annotated_image,
landmark_list=face_landmarks,
connections=mp_face_mesh.FACE_CONNECTIONS,
landmark_drawing_spec=drawing_spec,
connection_drawing_spec=drawing_spec)
cv2.imwrite('annotated_image' + '.png', annotated_image)
landmarks = results.multi_face_landmarks[0].landmark
#landmarks_arr = np.array([[i.x, i.y, i.z] for i in landmarks])
return landmarks
landmarks = get_landmarks('Images/AF02.jpg')
# +
SEARCH_LIST=[[0,1,2], [0,1,3], [0,2,3], [1,2,3], [0,1,4], [0,2,4], [0,3,4], [1,2,4], [1,3,4], [2,3,4], [0,1,5], [0,2,5], [0,3,5], [0,4,5], [1,2,5], [1,3,5], [1,4,5], [2,3,5], [2,4,5],[3,4,5]]
def is_in_poly(p, poly):
"""
:param p: [x, y]
:param poly: [[], [], [], [], ...]
:return: is_in: boolean. True = p in poly
"""
px, py = p
is_in = False
for i, corner in enumerate(poly):
next_i = i + 1 if i + 1 < len(poly) else 0
x1, y1 = corner
x2, y2 = poly[next_i]
if (x1 == px and y1 == py) or (x2 == px and y2 == py): # if point is on vertex
is_in = True
break
if min(y1, y2) < py <= max(y1, y2): # find horizontal edges of polygon
x = x1 + (py - y1) * (x2 - x1) / (y2 - y1)
if x == px: # if point is on edge
is_in = True
break
elif x > px: # if point is on left-side of line
is_in = not is_in
return is_in
def get_pt_ref(pt, landmarks):
"""
:params pt: np.array([x, y]), the location of the point
:params landmarks: landmark obj
:return pt_ref: [np.array([x1, x2, x3]), np.array[idx1, idx2, idx3]]
:return boolean: True = found the correct solution as pt_ref
"""
landmarks_arr = np.array([[i.x, i.y] for i in landmarks])
dist = np.linalg.norm(landmarks_arr - pt, axis = 1)
sorted_idx = np.argsort(dist)
for search in SEARCH_LIST:
poly = landmarks_arr[sorted_idx][search]
if is_in_poly(pt, poly):
a = np.array([[poly[0,0], poly[1,0], poly[2,0]],[poly[0,1], poly[1,1], poly[2,1]], [1, 1, 1]])
b = np.array([pt[0], pt[1], 1])
ans = np.linalg.solve(a,b)
pt_ref = [ans, sorted_idx[search]]
return pt_ref, True # Found correct solution
return [np.array([1, 0, 0]), sorted_idx[[0,1,2]]], False # Solution not found. Return the nearest point
def get_pt_location(pt_ref, landmarks):
"""
:params pt_ref: [np.array([x1, x2, x3]), np.array[idx1, idx2, idx3]]
:params landmarks: landmark obj
:return np.array([x, y, z])
"""
landmarks_arr = np.array([[i.x, i.y, i.z] for i in landmarks])
ref_ratio, ref_idx = pt_ref
return np.dot(landmarks_arr[ref_idx].T, ref_ratio)
pt = np.array([0.59, 0.55])
pt_ref, flag = get_pt_ref(pt, landmarks)
print(pt_ref)
pt_xyz = get_pt_location(pt_ref, landmarks)
print(pt_xyz)
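# A quick sanity check (an addition, assuming the cell above ran): the solved weights should sum
# to 1 and reproduce the 2-D query point when applied to the three reference landmarks.
ref_ratio, ref_idx = pt_ref
tri_xy = np.array([[l.x, l.y] for l in landmarks])[ref_idx]
print('weights sum to:', ref_ratio.sum())
print('reconstructed point:', np.dot(tri_xy.T, ref_ratio), '| original point:', pt)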
# +
import json
import pdb
def load_makeup(file_path, landmarks):
counts = []
fails = []
steps = []
with open(file_path) as fp:
for line in fp:
if line.endswith(':\n'):
step = []
fail = 0
count = 0
elif line == '\n':
steps.append(step)
fails.append(fail)
counts.append(count)
else:
#pdb.set_trace()
x_str, y_str=line.split()
x = float(x_str)
y = float(y_str)
pt = np.array([x, y])
pt_ref, flag = get_pt_ref(pt, landmarks)
count += 1
#if flag:
# step.append(pt_ref)
step.append(pt_ref)
if not flag:
fail += 1
return steps, counts, fails
file = 'makeup_keypoint.txt'
steps, counts, fails = load_makeup(file, landmarks)
# -
len(steps[0])
# +
demo_pic = 'Images/AF0.jpg'
landmarks = get_landmarks(demo_pic)
image = cv2.imread(demo_pic)
image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
image = cv2.resize(image, (1000, 1000))
w, h, d = image.shape
step_num = 0
for step in steps:
step_num += 1
for pt in step:
pt_xyz = get_pt_location(pt, landmarks)
x = int(pt_xyz[0] * w)
y = int(pt_xyz[1] * h)
color = ((step_num * 20)%255, 0, 0)
image = cv2.circle(image, (x,y), radius=1, color=color, thickness=1)
plt.imshow(image)
image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
cv2.imwrite('yushu_with_points_2.jpg', image)
# -
| 5,787 |
/2_terra.ipynb
|
290fb07c7c9c6644e984b1b67f7bbf715106a16b
|
[] |
no_license
|
aasfaw/qiskit-intros
|
https://github.com/aasfaw/qiskit-intros
| 3 | 4 | null | null | null | null |
Jupyter Notebook
| false | false |
.py
| 59,978 |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.15.2
# kernelspec:
# display_name: qiskitdev
# language: python
# name: qiskitdev
# ---
# + [markdown] slideshow={"slide_type": "slide"}
# # Using Qiskit Terra
#
# [email protected]<br/>
# April 2019<br/>
# (Thanks to [email protected], [email protected] and [email protected])
# + [markdown] slideshow={"slide_type": "fragment"}
# Qiskit Terra contains tools that **define**, **compile** and **execute** quantum circuits on arbitrary **backends**
# + [markdown] slideshow={"slide_type": "fragment"}
# For much of this talk, I will focus on a simple circuit that creates two-qubit entanglement.
#
# [Generating a Bell state](https://demonstrations.wolfram.com/GeneratingEntangledQubits/)
#
# + slideshow={"slide_type": "slide"}
# Your first Qiskit application
from qiskit import QuantumRegister, ClassicalRegister, QuantumCircuit
qr = QuantumRegister(2) # qubits indexed as qr[0] and qr[1]
cr = ClassicalRegister(2) # classical bits indexed as cr[0] and cr[1]
circuit = QuantumCircuit(qr, cr)
# + slideshow={"slide_type": "fragment"}
circuit.h(qr[0])
circuit.cx(qr[0], qr[1])
circuit.measure(qr, cr)
# + [markdown] slideshow={"slide_type": "fragment"}
# Additional gates can be found in our qiskit tutorials on Github<br/>
# [List of quantum operations](https://github.com/Qiskit/qiskit-tutorials/blob/master/qiskit/terra/summary_of_quantum_operations.ipynb)
# + slideshow={"slide_type": "fragment"}
circuit.draw()
# + slideshow={"slide_type": "fragment"}
circuit.draw(output='latex_source')
# + slideshow={"slide_type": "fragment"}
print(circuit.qasm())
# + [markdown] slideshow={"slide_type": "slide"}
# ### Compiling and executing the circuit on a simulator backend
# + slideshow={"slide_type": "fragment"}
from qiskit import Aer, execute
# pick a backend, in this case a simulator
backend = Aer.get_backend('qasm_simulator')
# start a simulation job on the backend
job = execute(circuit, backend, shots=1000)
# collect the job results and display them
result = job.result()
counts = result.get_counts(circuit)
print(counts)
# + slideshow={"slide_type": "fragment"}
from qiskit.tools.visualization import plot_histogram
plot_histogram(counts)
# + [markdown] slideshow={"slide_type": "slide"}
# ### Compiling and executing the circuit on a real device
# + slideshow={"slide_type": "slide"}
from qiskit import IBMQ
IBMQ.load_accounts()
IBMQ.backends()
# + slideshow={"slide_type": "fragment"}
# OPTION 1: pick a specific backend
backend = IBMQ.get_backend('ibmq_16_melbourne')
# OPTION 2: pick the least busy backend
from qiskit.providers.ibmq import least_busy
backend = least_busy(IBMQ.backends(simulator=False))
# start a simulation job on the backend
job = execute(circuit, backend, shots=1000)
# collect the job results and display them
result = job.result()
counts = result.get_counts(circuit)
print(counts)
# + slideshow={"slide_type": "fragment"}
# OPTION 1: pick a specific backend
backend = IBMQ.get_backend('ibmq_16_melbourne')
# OPTION 2: pick the least busy backend
from qiskit.providers.ibmq import least_busy
backend = least_busy(IBMQ.backends(simulator=False))
# start a simulation job on the backend
job = execute(circuit, backend, shots=1000)
# monitor the job
from qiskit.tools.monitor import job_monitor
job_monitor(job)
# collect the job results and display them
result = job.result()
counts = result.get_counts(circuit)
print(counts)
# -
plot_histogram(counts)
# %qiskit_backend_monitor backend
# + [markdown] slideshow={"slide_type": "slide"}
# ## What goes on under the hood when you call `execute`
# + [markdown] slideshow={"slide_type": "fragment"}
# <img src='execute_flow.png'>
# + slideshow={"slide_type": "slide"}
qr = QuantumRegister(7, 'q')
tpl_circuit = QuantumCircuit(qr)
tpl_circuit.h(qr[3])
tpl_circuit.cx(qr[0], qr[6])
tpl_circuit.cx(qr[6], qr[0])
tpl_circuit.cx(qr[0], qr[1])
tpl_circuit.cx(qr[3], qr[1])
tpl_circuit.cx(qr[3], qr[0])
tpl_circuit.draw()
# + slideshow={"slide_type": "fragment"}
from qiskit.transpiler import PassManager
from qiskit.transpiler.passes import BasicSwap
from qiskit.transpiler import transpile
from qiskit.mapper import CouplingMap
# -
help(BasicSwap)
# + slideshow={"slide_type": "fragment"}
coupling = [[0, 1], [1, 2], [2, 3], [3, 4], [4, 5], [5, 6]]
simulator = Aer.get_backend('qasm_simulator')
coupling_map = CouplingMap(couplinglist=coupling)
pass_manager = PassManager()
pass_manager.append([BasicSwap(coupling_map=coupling_map)])
basic_circ = transpile(tpl_circuit, simulator, pass_manager=pass_manager)
basic_circ.draw()
# + slideshow={"slide_type": "slide"}
IBMQ.backends()
# -
realdevice = IBMQ.get_backend('ibmq_16_melbourne')
tpl_realdevice = transpile(tpl_circuit, backend = realdevice)
tpl_realdevice.draw(line_length = 250)
# + [markdown] slideshow={"slide_type": "slide"}
# **Transpilation is a work in progress, with significant contributions from within and outside IBM.**
# * Can lead to significant overhead on real devices, leading to more noise
#
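#
# A rough way to quantify that overhead (an added sketch, assuming the circuits built above are
# still in memory): compare gate counts and depth before and after transpilation.

# +
print('original   :', tpl_circuit.count_ops(), '| depth', tpl_circuit.depth())
print('basic swap :', basic_circ.count_ops(), '| depth', basic_circ.depth())
print('real device:', tpl_realdevice.count_ops(), '| depth', tpl_realdevice.depth())
# -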
# + [markdown] slideshow={"slide_type": "slide"}
# # Running quantum algorithms
# + [markdown] slideshow={"slide_type": "fragment"}
# * Shor's Algorithm
# * Discrete FT = O($(2^n)^2$)
# * Fast FT = O($n\cdot2^n$)
# * Quantum FT = O($n\cdot n$) [Tutorial in Qiskit Github](https://github.com/Qiskit/qiskit-tutorials/blob/master/community/terra/qis_adv/fourier_transform.ipynb)
# * Grover's Algorithm
# * Classical search is at least O(N)
# * Grover's algorithm = O($\sqrt{N}$) [Tutorial in Qiskit Github](https://github.com/Qiskit/qiskit-tutorials/blob/master/community/algorithms/grover_algorithm.ipynb)
#
# * [Other Tutorials](https://github.com/Qiskit/)
# -
| 5,909 |
/DS-Unit-4-Sprint-1-NLP-main/Sprint-Challenge-U4-S1/LS_DS_415_Sprint_Challenge_1_AG_4.ipynb
|
44c130237aa6eb254349ae69c24194745bb640f6
|
[] |
no_license
|
jmmiddour/DSPT8-Unit-4-ML
|
https://github.com/jmmiddour/DSPT8-Unit-4-ML
| 0 | 0 | null | null | null | null |
Jupyter Notebook
| false | false |
.py
| 314,521 |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.15.2
# kernel_info:
# name: u4-s1-nlp
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="69Gnrf6_BhXA"
# # Sprint Challenge
# ## *Data Science Unit 4 Sprint 1*
#
# After a week of Natural Language Processing, you've learned some cool new stuff: how to process text, how turn text into vectors, and how to model topics from documents. Apply your newly acquired skills to one of the most famous NLP datasets out there: [Yelp](https://www.yelp.com/dataset). As part of the job selection process, some of my friends have been asked to create analysis of this dataset, so I want to empower you to have a head start.
#
# The real dataset is massive (almost 8 gigs uncompressed). I've sampled the data for you to something more managable for the Sprint Challenge. You can analyze the full dataset as a stretch goal or after the sprint challenge. As you work on the challenge, I suggest adding notes about your findings and things you want to analyze in the future.
#
# ## Challenge Objectives
# *Successfully complete these all these objectives to earn a 2. There are more details on each objective further down in the notebook.*
# * <a href="#p1">Part 1</a>: Write a function to tokenize the yelp reviews
# * <a href="#p2">Part 2</a>: Create a vector representation of those tokens
# * <a href="#p3">Part 3</a>: Use your tokens in a classification model on yelp rating
# * <a href="#p4">Part 4</a>: Estimate & Interpret a topic model of the Yelp reviews
# + [markdown] id="tw4gjPVyBhXI"
# ### Import Data
# + colab={"base_uri": "https://localhost:8080/", "height": 561} id="TeIceIaLBhXJ" outputId="2532971c-305b-4404-d7fb-bbc596d5d592"
import pandas as pd
# Load reviews from URL
data_url = 'https://raw.githubusercontent.com/LambdaSchool/data-science-practice-datasets/main/unit_4/unit1_nlp/review_sample.json'
# Import data into a DataFrame named df
df = pd.read_json(data_url, lines=True)
# df = pd.DataFrame(df)
df.head()
# raise NotImplementedError()
# + id="QXLZamj-BhXK"
# Visible Testing
assert isinstance(df, pd.DataFrame), 'df is not a DataFrame. Did you import the data into df?'
assert df.shape[0] == 10000, 'DataFrame df has the wrong number of rows.'
# + [markdown] id="FEiwS3qSBhXK"
# ## Part 1: Tokenize Function
# <a id="#p1"></a>
#
# Complete the function `tokenize`. Your function should
# - accept one document at a time
# - return a list of tokens
#
# You are free to use any method you have learned this week.
# + id="Zl5UQi4CBhXL"
# Import NLP libraries
import spacy
from spacy.tokenizer import Tokenizer
# Optional: consider using spacy in your function
# if you do use this pre-trained model, YOU MUST USE THE SMALL VERSION en_core_web_sm
# if you don't already have the small version downloaded, you'll have to download it
# this is due to limited computational resources on CodeGrader
# if you don't plan on using this model in the SC, simply comment it out
nlp = spacy.load('en_core_web_sm')
# + colab={"base_uri": "https://localhost:8080/"} id="OurCv9vzBhXL" outputId="716d73ec-2c41-4977-87d9-5de99bd1a64b"
# Initialize the Tokenizer
tokenizer = Tokenizer(nlp.vocab)
def tokenize(doc):
"""
This function will take a document
and return a list of tokens for each document
:param doc: Can be a single document or a DataFrame with multiple docs
:return: A list of token for each document
"""
tokens = [] # empty list to hold tokens
# Iterate through all docs in the dataframe using a pipeline
for doc in tokenizer.pipe(doc, batch_size=250):
doc_tokens = [] # Temp empty list to hold tokens for each doc
# Iterate through each token within the doc
for token in doc:
if (token.is_stop != True) & ( # if it is not a default stop word
token.is_punct != True) & ( # if it is not punctuation
token.is_space != True): # if it is not an extra whitespace
# Add to the end of the temp list and lowercase all tokens
doc_tokens.append(token.text.lower())
# Add all the doc tokens to the tokens list
tokens.append(doc_tokens)
# Return the list of tokens for all docs in the dataframe
return tokens
# raise NotImplementedError()
# Add a tokenized column to my dataframe
df['tokens'] = tokenize(df['text'])
df['tokens'].head()
# + id="YCDZzkvxBhXL"
'''Testing'''
assert isinstance(tokenize(df.sample(n=1)["text"].iloc[0]), list), "Make sure your tokenizer function accepts a single document and returns a list of tokens!"
# + [markdown] id="xfnnGqd1BhXM"
# ## Part 2: Vector Representation
# <a id="#p2"></a>
# 1. Create a vector representation of the reviews
# 2. Write a fake review and query for the 10 most similiar reviews, print the text of the reviews. Do you notice any patterns?
# - Given the size of the dataset, use `NearestNeighbors` model for this.
# + colab={"base_uri": "https://localhost:8080/", "height": 287} id="23Cf2uLrBhXM" outputId="4326921e-b20b-4e52-a289-e0d6a6c39f7b"
# Create a vector representation of the reviews
# Name that doc-term matrix "dtm"
# Import TF-IDF vectorizer
from sklearn.feature_extraction.text import TfidfVectorizer
# Instantiate the vectorizer object
tfidf = TfidfVectorizer(stop_words='english',
ngram_range=(1, 2),
max_df=0.5,
min_df=9,
max_features=2500)
# Fit the vectorizer
dtm_fit = tfidf.fit_transform(df.text)
# Turn into a Document Term Matrix
dtm = pd.DataFrame(dtm_fit.todense(), columns=tfidf.get_feature_names())
# Check my work
print(dtm.shape)
dtm.sample(5)
# raise NotImplementedError()
# 'vect__max_df': 0.5,
# 'vect__max_features': 2500,
# 'vect__min_df': 9,
# 'vect__ngram_range': (1, 2)
# + colab={"base_uri": "https://localhost:8080/"} id="h1LZ0znHBhXM" outputId="df8513e1-186e-4add-f11e-67c24529fb75"
# Create and fit a NearestNeighbors model named "nn"
from sklearn.neighbors import NearestNeighbors
# Fit on the DTM dataframe
nn = NearestNeighbors(n_neighbors=10,
algorithm='kd_tree',
leaf_size=8,
n_jobs=10)
nn.fit(dtm)
# raise NotImplementedError()
# + id="yDDYeImmBhXN"
'''Testing.'''
assert nn.__module__ == 'sklearn.neighbors._unsupervised', ' nn is not a NearestNeighbors instance.'
assert nn.n_neighbors == 10, 'nn has the wrong value for n_neighbors'
# + colab={"base_uri": "https://localhost:8080/"} id="tSy8H3B5BhXN" outputId="973098f4-7f10-48a6-a32e-6072e3cfd171"
# Create a fake review and find the 10 most similar reviews
fake = ["""
I ate lunch here with my best friend and my husband.
The soup was ice cold and the service was horrible.
The waitress made me feel like we were just a burden on her,
whenever she actually came to the table.
The prices are really low though.
This is one of those places where you get what you pay for.
You could not pay me enough to come back here again!
"""]
# Vectorize my fake review
my_review = tfidf.transform(fake)
# Run my KNN on my vectorized fake review
nn.kneighbors(my_review.todense())
# raise NotImplementedError()
# + colab={"base_uri": "https://localhost:8080/"} id="Ts-t0geyBhXN" outputId="37ef9317-b270-4f38-e4f8-3ddcf2ee26b5"
# Look at the 10 closest matched reviews to my fake review
print(f'Closest Review to mine:\n{df.text.loc[6311]}\n\n')
print(f'2nd closest Review to mine:\n{df.text.loc[6204]}\n\n')
print(f'3rd closest Review to mine:\n{df.text.loc[6008]}\n\n')
print(f'4th closest Review to mine:\n{df.text.loc[7340]}\n\n')
print(f'5th closest Review to mine:\n{df.text.loc[5796]}\n\n')
print(f'6th closest Review to mine:\n{df.text.loc[4168]}\n\n')
print(f'7th closest Review to mine:\n{df.text.loc[4697]}\n\n')
print(f'8th closest Review to mine:\n{df.text.loc[5333]}\n\n')
print(f'9th closest Review to mine:\n{df.text.loc[2944]}\n\n')
print(f'10th closest Review to mine:\n{df.text.loc[3518]}\n\n')
# + [markdown] id="R2nM2rTHBhXO"
# What patterns I see on my matches:
# - The first 2 reviews that are closest to mine look like they are in Chinese or something. I do not understand how that happened when I am using the english vocab for tokenizing and vectorizing the data.
#
# - The 3rd closest review was not even for a restaurant, it seems to be for a massage place of some sort and it is a positive review.
#
# - The 4th closest review doesn't really seem to align with anything in my review other than one negative statement.
#
# - The 5th closest review is a negative review because it is talking about food poisoning but said the food was good.
#
# - The 6th closest review was not even for a restaurant, it seems to be for another massage place of some sort and it is positive.
#
# - The 7th closest review is negative but is for some kind of a nail salon.
#
# - The 8th closest review seems to be pretty much positive.
#
# - The 9th closest review is a positive review mostly but one thing that matches mine exactly is the statement "you get what you pay for".
#
# - The final recommended review is a negative review, but it is for a Sally Beauty Supply store.
#
# - I would say that, out of the 8 reviews that were in English and that I could read, only 1 related closely to mine. It is possible that the first 2 did also, but I cannot read them. Out of the ones that did not relate to mine at all, 2 of them were positive reviews, while mine was negative all around, and the rest of them weren't even for a restaurant.
# + [markdown] id="S0tgpUHPBhXO"
# ## Part 3: Classification
# <a id="#p3"></a>
# Your goal in this section will be to predict `stars` from the review dataset.
#
# 1. Create a pipeline object with a sklearn `CountVectorizer` or `TfidfVector` and any sklearn classifier.
# - Use that pipeline to estimate a model to predict the `stars` feature (i.e. the labels).
# - Use the Pipeline to predict a star rating for your fake review from Part 2.
#
#
#
# 2. Create a parameter dict including `one parameter for the vectorizer` and `one parameter for the model`.
# - Include 2 to 3 possible values for each parameter
# - **Use `n_jobs` = 1**
# - Due to limited computational resources on CodeGrader `DO NOT INCLUDE ADDITIONAL PARAMETERS OR VALUES PLEASE.`
#
#
# 3. Tune the entire pipeline with a GridSearch
# - Name your GridSearch object as `gs`
# + id="cBJ6HtYoBhXP"
# Using these functions from explore_data.py but
# have to add the function here to pass the autograder
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import numpy as np
import matplotlib.pyplot as plt
from collections import Counter
from sklearn.feature_extraction.text import CountVectorizer
def get_num_words_per_sample(sample_texts):
"""Gets the median number of words per sample given corpus.
# Arguments
sample_texts: list, sample texts.
# Returns
int, median number of words per sample.
"""
num_words = [len(s.split()) for s in sample_texts]
return np.median(num_words)
def plot_sample_length_distribution(sample_texts):
"""Plots the sample length distribution.
# Arguments
samples_texts: list, sample texts.
"""
plt.hist([len(s) for s in sample_texts], 50)
plt.xlabel('Length of a sample')
plt.ylabel('Number of samples')
plt.title('Sample length distribution')
plt.show()
def plot_frequency_distribution_of_ngrams(sample_texts,
ngram_range=(1, 2),
num_ngrams=50):
"""Plots the frequency distribution of n-grams.
# Arguments
samples_texts: list, sample texts.
ngram_range: tuple (min, mplt), The range of n-gram values to consider.
Min and mplt are the lower and upper bound values for the range.
num_ngrams: int, number of n-grams to plot.
Top `num_ngrams` frequent n-grams will be plotted.
"""
# Create args required for vectorizing.
kwargs = {
'ngram_range': ngram_range,
'dtype': 'int32',
'stop_words': 'english', # Added stop words to default kwargs
'strip_accents': 'unicode',
'decode_error': 'replace',
'analyzer': 'word', # Split text into word tokens.
}
vectorizer = CountVectorizer(**kwargs)
# This creates a vocabulary (dict, where keys are n-grams and values are
# idxices). This also converts every text to an array the length of
# vocabulary, where every element idxicates the count of the n-gram
# corresponding at that idxex in vocabulary.
vectorized_texts = vectorizer.fit_transform(sample_texts)
# This is the list of all n-grams in the index order from the vocabulary.
all_ngrams = list(vectorizer.get_feature_names())
num_ngrams = min(num_ngrams, len(all_ngrams))
# ngrams = all_ngrams[:num_ngrams]
# Add up the counts per n-gram ie. column-wise
all_counts = vectorized_texts.sum(axis=0).tolist()[0]
# Sort n-grams and counts by frequency and get top `num_ngrams` ngrams.
all_counts, all_ngrams = zip(*[(c, n) for c, n in sorted(
zip(all_counts, all_ngrams), reverse=True)])
ngrams = list(all_ngrams)[:num_ngrams]
counts = list(all_counts)[:num_ngrams]
idx = np.arange(num_ngrams)
plt.figure(figsize=(14,6))
plt.bar(idx, counts, width=0.8, color='b')
plt.xlabel('N-grams')
plt.ylabel('Frequencies')
plt.title('Frequency distribution of n-grams')
plt.xticks(idx, ngrams, rotation=45)
plt.show()
# + colab={"base_uri": "https://localhost:8080/", "height": 352} id="N7OS5tvuBhXP" outputId="9e0c0783-cf36-45a2-a326-a7785c18f3b1"
# Imports
from sklearn.pipeline import Pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.model_selection import GridSearchCV
from sklearn.svm import LinearSVC
# Other plotting imports
# import eli5 # Have to comment out to pass autograder
import seaborn as sns
sns.set()
# Define my feature to use for predictions and target
X = df.text
y = df.stars
# Look at the shape of X and y
print(f'X: {X.shape}, y: {y.shape}')
# Get the median words per sample
med_words = get_num_words_per_sample(X)
print(f'Median words per sample: {med_words}')
# Get the ratio of number of samples / the median words per sample
sw_ratio = len(X) / med_words
print(f'Samples to words ratio: {sw_ratio}')
# Plot the length of characters in the data
plot_sample_length_distribution(X)
# + colab={"base_uri": "https://localhost:8080/", "height": 447} id="0szSHE7CBhXQ" outputId="125239e6-69cd-4a3a-fad7-cc109d25e725"
# Plot the top 30 n_grams
plot_frequency_distribution_of_ngrams(X, ngram_range=(1, 3), num_ngrams=30)
# + colab={"base_uri": "https://localhost:8080/"} id="i5YnT2wzBhXQ" outputId="4ad43018-c3ac-413c-c07f-e68f39b2b2e0"
# Initialize a vectorizer with basic parameters
vect = TfidfVectorizer(stop_words='english')
# Initialize a classifer with basic parameters
clf = LinearSVC()
# Define my pipeline
pipe = Pipeline([
('vect', vect),
('clf', clf)
])
# # Parameters to run through grid search
# params = {
# 'vect__max_df': (0.3, 0.5, 0.6),
# 'vect__min_df': (3, 5, 6, 9),
# 'vect__max_features': (2500, 5000),
# 'vect__ngram_range': ((1, 2), (1, 3)),
# 'clf__penalty': ('l1', 'l2'),
# 'clf__C': (0.01, 0.1, 0.3, 0.6)
# }
# ^-- Had to change my parameters to search because auto grader was timing out
# Had to change my parameters to execute faster for code grader
params = {
'vect__min_df': (3, 6, 9),
'clf__C': (0.01, 0.1, 0.3)
}
# Name the gridsearch instance "gs"
# Instantiate the grid search cv and fit it
gs = GridSearchCV(pipe, params, cv=5, n_jobs=1, verbose=1)
gs.fit(X, y)
# raise NotImplementedError()
# + id="9zapWDpEBhXQ"
# Visible Testing
prediction = gs.predict(["I wish dogs knew how to speak English."])[0]
assert prediction in df.stars.values, 'Your gs object should be able to accept raw text within a list. Did you include a vectorizer in your pipeline?'
# + colab={"base_uri": "https://localhost:8080/"} id="SsBN7zI9BhXQ" outputId="51acb7af-929c-470d-fbe4-8423e3125577"
# # Get the best score from my grid search
print(f'Grid Search Best Score: {gs.best_score_}\n')
# # Look at the best parameters from the grid search
print(f'Grid Search Best Parameters:\n{gs.best_params_}')
# # Results from my original grid search:
# Grid Search Best Score: 0.6186
# Grid Search Best Parameters:
# {'clf__C': 0.1,
# 'clf__penalty': 'l2',
# 'vect__max_df': 0.5,
# 'vect__max_features': 2500,
# 'vect__min_df': 9,
# 'vect__ngram_range': (1, 2)}
# + colab={"base_uri": "https://localhost:8080/"} id="mxpXHy8oBhXR" outputId="b55e72a2-35b9-49e5-8e1e-f996ab779f89"
# Assign my best model parameters
best_mod = gs.best_estimator_
vect = best_mod.named_steps['vect']
clf = best_mod.named_steps['clf']
# Evaluate on my fake review
my_review_pred = best_mod.predict(fake)
print(f'Prediction of how many stars my review would be: {my_review_pred}')
# + colab={"base_uri": "https://localhost:8080/"} id="WoovgU14fOWy" outputId="e6b782bd-bdb8-4be7-dc1d-b92d838e804c"
pip install eli5
# + colab={"base_uri": "https://localhost:8080/", "height": 691} id="L0u6XuIFBhXR" outputId="ad5fb722-31e9-4f6e-8cec-1191ea12f69f"
import eli5
# Explain what the model learned about each class
eli5.show_weights(clf, vec=vect, top=30)
# Have to comment out to pass autograder
# + [markdown] id="f72mIJ3ZBhXR"
# ## Part 4: Topic Modeling
#
# Let's find out what those yelp reviews are saying! :D
#
# 1. Estimate a LDA topic model of the review text
# - Set num_topics to `5`
# - Name your LDA model `lda`
# 2. Create 1-2 visualizations of the results
# - You can use the most important 3 words of a topic in relevant visualizations. Refer to yesterday's notebook to extract.
# 3. In markdown, write 1-2 paragraphs of analysis on the results of your topic model
#
# When you construct your LDA model, it should look like this:
#
# ```python
# lda = LdaModel(corpus=corpus,
# id2word=id2word,
# random_state=723812,
# num_topics = num_topics,
# passes=1
# )
#
# ```
#
# __*Note*__: You can pass the DataFrame column of text reviews to gensim. You do not have to use a generator.
# + id="EosFBDWaBhXR"
from gensim import corpora
# Due to limited computational resources on CodeGrader, use the non-multicore version of LDA
from gensim.models.ldamodel import LdaModel
import gensim
import re
# + colab={"base_uri": "https://localhost:8080/"} id="ZEbYnzsLBhXR" outputId="cf061567-a4c4-402b-c661-f7a0a50823ca"
# Create a dictionary using a method from the gensim library
id2word = corpora.Dictionary(df.tokens)
# Look at the length of the words
len(id2word.keys())
# + colab={"base_uri": "https://localhost:8080/"} id="hiNcqjYbBhXS" outputId="fcb27114-023b-4682-d17f-9c492a75f089"
# Remove the extreme values
id2word.filter_extremes(no_below=2, no_above=0.9)
# Look at the length of the words now
len(id2word.keys())
# + id="7IHgPMsoBhXS"
# Create a BOW representation of the entire corpus
corpus = [id2word.doc2bow(text) for text in df.tokens]
# + [markdown] id="XFtIstuqBhXS"
# ### 1. Estimate a LDA topic model of the review tex
# + id="siL5bALaBhXS"
lda = LdaModel(corpus=corpus,
id2word=id2word,
num_topics=5,
passes=1)
# raise NotImplementedError()
# + [markdown] id="sC0rOrtHBhXS"
# #### Testing
# + id="9ISG50lbBhXS"
# Visible Testing
assert lda.get_topics().shape[0] == 5, 'Did your model complete its training? Did you set num_topics to 5?'
# + [markdown] id="bcBAbW9BBhXT"
# #### 2. Create 1-2 visualizations of the results
# + colab={"base_uri": "https://localhost:8080/"} id="KJNhcxS0e3Jo" outputId="72a89f63-03d9-4d9a-d182-d22ad2803344"
# !python -m pip install -U pyLDAvis
# + colab={"base_uri": "https://localhost:8080/", "height": 881} id="F5dD5K_IBhXT" outputId="babbb593-a2e0-422e-c149-df3951ea15f2"
# Import pyLDAvis
import pyLDAvis
import pyLDAvis.gensim
# Create a pyLDAvis visualization
pyLDAvis.enable_notebook()
py_vis = pyLDAvis.gensim.prepare(lda, corpus, id2word)
pyLDAvis.display(py_vis)
# raise NotImplementedError()
# + [markdown] id="sSDPJ4wuBhXT"
# #### 3. In markdown, write 1-2 paragraphs of analysis on the results of your topic model
# + [markdown] id="tDA0UX8XBhXT"
# Based on my visualization, it appears that I could do with better tokenization. I noticed that there are a few words that could have been added to the stop words list to improve the topics. I can see that the largest topic is topic 1, at 30.3% of the tokens, and it appears to be related to positive reviews of restaurants. All the other topics appear to be related to positive reviews as well.
#
# Topic 2 appears to be more related to spa-type businesses, not restaurants. Topic 3 appears to be more related to hotels for tourists, while topic 4 appears to be mostly about local businesses. Last but not least, topic 5 appears to be mostly reviews of automotive businesses.
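#
# A minimal sketch of that stop-word idea (an addition; the extra words below are illustrative
# guesses, not tuned values): drop hand-picked tokens from the gensim dictionary and rebuild the corpus.

# +
extra_stops = ['good', 'great', 'place', 'time']  # hypothetical extra stop words spotted in the topics
bad_ids = [id2word.token2id[w] for w in extra_stops if w in id2word.token2id]
id2word.filter_tokens(bad_ids=bad_ids)
corpus = [id2word.doc2bow(text) for text in df.tokens]  # BOW corpus without the extra stop words
# -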
# + [markdown] id="2chJrOsuBhXT"
# ## Stretch Goals
#
# Complete one of more of these to push your score towards a three:
# * Incorporate named entity recognition into your analysis
# * Compare vectorization methods in the classification section
# * Analyze more (or all) of the yelp dataset - this one is v. hard.
# * Use a generator object on the reviews file - this would help you with the analyzing the whole dataset.
# * Incorporate any of the other yelp dataset entities in your analysis (business, users, etc.)
particular numerical feature.</strong></li>
# </ul>
# </p>
# <p>Hence, for now, we would like to <strong>use it for the 2nd case where we need to compute the categorical features for which the null hypothesis has been rejected for 'Income', since we want to create a regression model for dealing with the missing values of 'Income'.</strong></p>
# +
# Understanding the number of Categories in each of the categorical feature.
def num_categories(categorical_features):
for feature in categorical_features:
print(feature, '---->> Categories ---->>', data[feature].nunique())
num_categories(cat_feat)
# -
from scipy.stats import f_oneway
# +
# For categorical features with more than 2 categories, one-way ANOVA is used.
# For categorical features with exactly 2 categories a t-test is used,
# but with 2 groups the t-test and one-way ANOVA give the same result.
def one_way_anova_table(categorical_feature, data, num_features, cat_feats=None):
results = {}
# For a single num feature and multiple categorical features:
if len(num_features) == 1:
for cat_ft in cat_feats:
data_each_group = []
group_data = data.groupby(cat_ft)
for cat_grp in data[cat_ft].unique():
data_each_group.append(group_data.get_group(cat_grp)[num_features[0]])
stats, p = f_oneway(*data_each_group)
results[cat_ft] = (stats, p)
results_df = np.transpose(pd.DataFrame(results))
results_df.columns=['F-Statistic', 'P-Value']
print('ANOVA Results for Numerical Feature-->', num_features[0])
return results_df
# for single categorical feature and multiple numerical features:
else:
grouped_data = data.groupby(categorical_feature)
for num_ft in num_features:
data_each_group = [] # because after every numerical feat, I want the new list
for cat_grp in data[categorical_feature].unique():
data_each_group.append(grouped_data.get_group(cat_grp)[num_ft])
stats, p = f_oneway(*data_each_group)
results[num_ft] = (stats, p)
results_df = np.transpose(pd.DataFrame(results))
results_df.columns=['F-Statistic', 'P-Value']
print('ANOVA Results for Categorical Feature-->', categorical_feature)
return results_df
# +
# Data which does not contain even a single null value. This will be our training
# data, and the data corresponding to the null values of 'Income' is our test data.
intr_data = data[data['Income'].notnull()] # interested data
# Getting the names of the transformed numerical features:
intr_ft = [feat for feat in data.columns if 'new' in feat] # interested numerical features
intr_ft.extend(['Income', 'Recency'])
intr_ft
# +
# We use Chi-Square Test of Independence for inspecting whether the 2 categorical features are
# dependent or not.
# It will allow us to select the categorical features which are dependent on another cat. feature:
from scipy.stats import chi2_contingency
def categorical_dependency(cat_feature):
result = {} # will store data
# contingency table:
for cat_feat_1 in cat_feat:
if cat_feat_1 != cat_feature:
cont_tab = pd.pivot_table(intr_data[[cat_feat_1, cat_feature]],
values=[cat_feat_1, cat_feature],
columns=[cat_feat_1],
index=[cat_feature],
aggfunc=len)
            # NaN values in the contingency table denote that there is no observation shared between those
            # 2 categories of the categorical features.
cont_tab_corr = np.where(cont_tab.isnull(), 0, cont_tab) # contingency table corrected
chi2_val, p_val, dof, expected_freq_array = chi2_contingency(cont_tab_corr)
result[cat_feat_1] = (chi2_val, p_val)
result_df = np.transpose(pd.DataFrame(result))
result_df.columns=['Chisquare Value', 'P-Value']
    print('Chi-Square Test of Independence with respect to Categorical Feature -->{}'.format(cat_feature))
return result_df
# Testing the function and computing the chi-square and p-values of 'Country' with other categorical features :
categorical_dependency('Country')
# +
# for numerical feature-> 'Income'
# categorical feature -> all the categorical features:
# This table contains the p-values with respect to 'Income' and all the categorical features:
anova_table = one_way_anova_table(None, intr_data, ['Income'], cat_feats=cat_feat)
anova_table
# -
# <p>We choose the categorical features which have <strong>p-values less than 0.05</strong>, since for those categorical features only <strong>the null hypothesis has been rejected for 'Income', i.e. these categorical features explain the variance of 'Income'</strong>, and hence these are the categorical features <strong>important for predicting 'Income'</strong>.</p>
# +
# Considering the categorical features for which the null hypothesis has been rejected which are
# characterised by having a p-value less than 0.05.
imp_cat_table = anova_table[anova_table['P-Value']<0.05]
imp_cat_table
# -
# important categorical features which explain or account the variance of 'Income'
imp_cat_income = list(imp_cat_table.index)
imp_cat_income
# <p>These are the categorical features which explain or account for the variance of 'Income', stored <strong>as a list --> imp_cat_income</strong>, and hence we can now go ahead and choose the numerical features.</p>
# <h3>Important Numerical Features:</h3>
# <p>Since we have transformed most of the numerical features so as to remove their outliers, we now compute correlations with those transformed numerical features only and <strong>do not consider the original features, since the original data contained outliers while the transformed features do not. Hence the regression model will be trained only on the transformed numerical features.</strong></p>
# +
# we consider the numerical features which have correlation with 'Income' i.e. target feature
# more than 0.3:
find_highly_corr_feat('Income', intr_ft, 0.3)
# +
# important numerical features which have pearson correlation higher than 0.3:
imp_num_income = list(find_highly_corr_feat('Income', intr_ft, 0.3).keys())
imp_num_income
# -
# <h3>Regression Model for predicting the missing values of 'Income':</h3>
# Number of Categories in categorical features:
num_categories(imp_cat_income)
def input_data(data, imp_cat_feat, imp_num_feat, add_feat):
# considering only the features which are given as imp_cat_feat and imp_num_feat:
imp_feats = imp_cat_feat + imp_num_feat + add_feat
data_new = data[imp_feats]
# one hot encoding or dummy encoding for categorical features:
for cat_feature in imp_cat_feat:
for category in data_new[cat_feature].unique():
data_new[cat_feature + ' ' +str(category)] = np.where(data_new[cat_feature]==category, 1, 0)
x_train = data_new.drop(imp_cat_feat, axis=1)
return x_train
# +
# input data for income:
data_income = input_data(data, imp_cat_income, imp_num_income, ['Income'])
data_income.columns
# -
# <h3>Preprocessing Data:</h3>
# +
from sklearn.preprocessing import MinMaxScaler
# Creating training and test data:
test_data = data_income[data_income['Income'].isnull()].drop(['Income'], axis=1)
train_data = data_income[data_income['Income'].notnull()].drop(['Income'], axis=1)
# Creating labels of training and validation data i.e. which contains the non-null values of 'Income':
# Since the 'Income' values are of pretty large scale, hence I have taken logarithm of it:
y_train_full = np.log(data_income[data_income['Income'].notnull()]['Income'])
# Scaling the training and test data:
# Creating an instance of MinMaxScaler class:
scaler = MinMaxScaler()
# Fitting and Transforming the features based upon the training data:
scaled_data_train = scaler.fit_transform(train_data)
train_scaled = pd.DataFrame(scaled_data_train, columns=train_data.columns)
# Transforming the features of test data fitted on training data:
scaled_data_test = scaler.transform(test_data)
test_scaled = pd.DataFrame(scaled_data_test, columns=test_data.columns)
# Creating the input features and labels of training data:
y_train = y_train_full[:2166]
x_train = train_scaled[:2166]
# Creating the Validation Data: (50 instances are there for validating the data)
y_val = y_train_full[-50:]
x_val = train_scaled[-50:]
# Setting the index of labels as the same as that of indep_features:
y_train.index = x_train.index
y_val.index = x_val.index
# Creating the input features and labels of test data:
x_test = test_scaled
# +
print('Shape of Training Test and Validation Data')
print('============================================')
# Shape of Training Data
print('Train Data -->', x_train.shape, 'Train Labels -->', y_train.shape)
# Shape of Test Data
print('Test Data -->', x_test.shape)
# Shape of Validation Data
print('Validation Data -->', x_val.shape, 'Validation Labels -->', y_val.shape)
# -
# <h3>Fitting Data using Linear Regression Model:</h3>
# +
# To get the significant predictors of 'Income', I train the LinearRegression model using Statsmodels module, since it also
# gives the p-value which indicates the importance of each of the independent variables or input variables:
import statsmodels.api as sm
model = sm.OLS(y_train, x_train)
result = model.fit()
result.summary()
# +
# Remove the Insignificant Features from all the input set i.e. training, test and validation:
insig_feats = ['MntFruits_log_new', 'MntFishProducts_log_new', 'MntSweetProducts_log_new',
'MntGoldProds_log_new']
x_train = x_train.drop(insig_feats, axis=1)
x_val = x_val.drop(insig_feats, axis=1)
x_test = x_test.drop(insig_feats, axis=1)
# +
# retraining the model after removing the insignificant features:
model = sm.OLS(y_train, x_train)
result = model.fit()
result.summary()
# -
# The Durbin-Watson test gives a value of around 2, which means there is no autocorrelation in our model, i.e. the error terms are not correlated with each other. The graph of residuals against predicted values (plotted below) shows that the variance of the error terms is almost constant.
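#
# As a quick cross-check (a minimal sketch, assuming the fitted `result` object from the cell above), the Durbin-Watson statistic can also be computed directly from the OLS residuals; values close to 2 indicate no first-order autocorrelation.

# +
# Hedged sketch: recompute the Durbin-Watson statistic from the OLS residuals.
from statsmodels.stats.stattools import durbin_watson

dw_stat = durbin_watson(result.resid)  # ~2 means no first-order autocorrelation of the error terms
print('Durbin-Watson statistic:', dw_stat)
# -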
# <h4>Training a Linear Regression Model to predict 'Income' based upon the Significant Predictors as chosen above:</h4>
# +
# for training a linear regression model:
from sklearn.linear_model import LinearRegression
# for checking one of the assumptions of Linear Regression i.e. whether the residuals are independent or not i.e. No
# AutoCorrelation is expected:
from statsmodels.graphics.tsaplots import plot_acf
# +
lin_reg_mod = LinearRegression()
lin_reg_mod.fit(x_train, y_train)
# Predictions of Validation Data:
y_pred_val = lin_reg_mod.predict(x_val)
residuals = y_val-y_pred_val
# Assumption-1: Residuals should be normally distributed.
# Residuals are normally distributed:
plt.figure(figsize=(12, 6))
plt.subplot(121);
sns.distplot(residuals)
plt.title('Distribution of Residuals')
# Assumption-2: Homoscedasticity should be there i.e. the variance of the residuals
# should be constant over the whole data.
plt.subplot(122);
plt.scatter(residuals, y_pred_val)
plt.title('Checking for Homoscedasticity')
# -
# Assumption-3: The residuals should not be correlated i.e. there should not be any correlation between the error terms:
plot_acf(residuals)
plt.show()
# +
def mean_squared_error(y_true, y_pred):
mse = (np.mean((y_true-y_pred)**2))
return mse
# error of validation data:
mse_val = mean_squared_error(y_val, y_pred_val)
print('Mean Squared Error of Validation Data', mse_val)
# error of training data:
y_pred_train = lin_reg_mod.predict(x_train)
mse_train = mean_squared_error(y_train, y_pred_train)
print('Mean Squared Error of Training Data', mse_train)
# -
# <h3>Analysis of Linear Regression to handle missing values:</h3>
# <p> <ul>
# <li>If we train the model without applying any transformation to 'Income', the <strong>R-squared is 0.47</strong>, and after applying a <strong>logarithmic transformation to 'Income' the R-squared increases to 0.56.</strong></li>
# <li>If we look closely at the residuals, the distribution is close to Gaussian <strong>except for the points lying towards the extreme left, which make it left-skewed</strong>. The homoscedasticity plot also looks good: <strong>if we remove the 2 leftmost points</strong>, the remaining points appear randomly distributed, which is what we want. Thus, there are 2 outliers, but apart from them the <strong>residuals are distributed close to a normal distribution</strong>, so the 2 major assumptions of linear regression are broadly fulfilled.</li>
# <li>From the autocorrelation plot, we can see directly that not a single bar lies outside the blue region (which is considered statistically 0), and hence we can conclude that <strong>the error terms are independent, which fulfils the 3rd assumption of linear regression.</strong></li>
# <li><strong>Remember, our aim here is to deal with the missing values, and as far as predicting the missing values with this model is concerned, it works well enough for us. Had our aim been solely to predict 'Income' for some business purpose, we could have preprocessed more of the independent features to bring the residuals even closer to a normal distribution.</strong></li>
# <li>Thus, overall, except for a few points lying towards the left side, the model works well, and as far as dealing with the missing values is concerned, we can certainly use this model to predict the NaN values of 'Income'.</li>
# </ul> </p>
# <h4>Predicting the Missing Values of 'Income' using the above trained Regression Model: </h4>
# +
# predicting for the missing values of Income:
income_missing_val = list(lin_reg_mod.predict(x_test))
final_income_val = np.exp(list(y_train_full) + income_missing_val)
final_income_val
# +
data_copy = data.copy() # creating a copy of the original data:
# data equivalent to the null values of 'Income':
null_income_data = data_copy[data_copy['Income'].isnull()]
# data equivalent to the non-null values of 'Income':
not_null_income_data = data_copy[data_copy['Income'].notnull()]
# -
# completed data:
cmp_data = pd.concat([not_null_income_data, null_income_data], axis=0)
cmp_data['Income'] = final_income_val
cmp_data['Income'][:2216]
not_null_income_data['Income']
# <h2>Section 2: Statistical Analysis of certain questions:</h2>
# <h3>Question 1) Significant Predictors for 'NumStorePurchases':</h3>
# +
# contains the transformed numerical features without outliers:
trans_num_feat = intr_ft + ['NumDealsPurchases','NumWebPurchases','NumCatalogPurchases','NumStorePurchases','NumWebVisitsMonth']
trans_num_feat
# +
# Regression Model:
# regression function does one-hot encoding for cat. features and runs a regression:
def regression(data, imp_cat_feat, imp_num_feat, target_feature):
# considering only the features which are given as imp_cat_feat and imp_num_feat:
imp_feats = imp_cat_feat + imp_num_feat
data_new = data[imp_feats]
# one hot encoding or dummy encoding for categorical features:
for cat_feature in imp_cat_feat:
for category in data_new[cat_feature].unique():
data_new[cat_feature + ' ' +str(category)] = np.where(data_new[cat_feature]==category, 1, 0)
x_train = data_new.drop(imp_cat_feat, axis=1)
y_train = data[target_feature]
#print('Independent Features', x_train.columns)
model = sm.OLS(y_train, x_train)
result = model.fit()
return result
# Important Categorical Features with respect to a numerical target feature:
def best_cat_feats(data, lst_target_feat, cat_feats):
anova_table = one_way_anova_table(None, data=data, num_features=lst_target_feat, cat_feats=cat_feats)
imp_cat_tab = anova_table[anova_table['P-Value']<0.05]
#print(imp_cat_tab)
best_cat = list(imp_cat_tab.index)
return best_cat
# Important Numerical Features with respect to the numerical target feature:
def best_num_feats(target_feature, num_feats, threshold):
imp_num = find_highly_corr_feat(target_feature, num_feats, threshold)
best_num = list(imp_num.keys())
return best_num
# Combining all the above functions in a single function:
def reg_model(data, num_feats, cat_feats, lst_target_feature, threshold):
imp_cat_feat = best_cat_feats(data, lst_target_feature, cat_feats)
imp_num_feat = best_num_feats(lst_target_feature[0], num_feats, threshold)
res_model = regression(data, imp_cat_feat, imp_num_feat, lst_target_feature[0])
return res_model.summary()
# -
# Regression Model Trained on Transformed Numerical features i.e. which do not contain outliers:
reg_model(cmp_data, trans_num_feat, cat_feat, ['NumStorePurchases'], 0.3)
# <h3>Explanation (Question 1)</h3>
# <p> From the above regression model, we can infer that in order to predict 'NumStorePurchases', out of all the features in our data, these features are statistically significant predictors:
# <ul><strong>
# <li>MntMeatProducts_log_new</li>
# <li>MntSweetProducts_log_new</li>
# <li>Income</li>
# <li>NumWebPurchases</li>
# <li>NumCatalogPurchases</li>
# <li>Education</li>
# <li>Kidhome</li>
# <li>AcceptedCmp3</li>
# <li>AcceptedCmp4</li>
# <li>AcceptedCmp5</li>
# <li>AcceptedCmp2</li>
# </strong></ul>
# Hence, these are the features to use if we want to create a regression model which predicts 'NumStorePurchases', i.e. the number of store purchases, so as to allocate services for them in the market.
# </p>
#
# <p><strong>Business Objective:</strong> If we could predict the number of purchases from the store, then we could plan accordingly for the availability of goods and products, manage the supply chain, etc. Being able to do this has plenty of business implications.</p>
# <h3>Question 2) In terms of total purchase, does USA has more say than rest of the world?</h3>
# +
# Preparing the data based upon the question:
data_copy = data.copy()
data_copy['TotalPurchases'] = data_copy['NumDealsPurchases'] + data_copy['NumWebPurchases'] + data_copy['NumCatalogPurchases'] + data_copy['NumStorePurchases']
data_copy['US_only'] = np.where(data_copy['Country']=='US', 1, 0)
data_ques_2 = data_copy[['TotalPurchases', 'US_only']]
# T-Test :
from scipy.stats import ttest_ind
def t_test(data, cat_feature, num_feature):
grouped_data = data.groupby(cat_feature)
data_each_group = []
group_mean = []
for cat_grp in data[cat_feature].unique():
data_each_group.append(grouped_data.get_group(cat_grp)[num_feature])
group_mean.append((cat_grp, np.mean(data[data[cat_feature]==cat_grp][num_feature])))
t_val, p_val = ttest_ind(*data_each_group)
print('T-Value is --> {}'.format(t_val))
print('P-Value is --> {}'.format(p_val))
if p_val < 0.05:
print('We reject the null')
mean_df = pd.DataFrame(group_mean, columns=['Category', 'Mean'])
return mean_df
else:
print('We need to accept the null')
t_test(data_ques_2, 'US_only', 'TotalPurchases')
# -
# <h3>Explanation(Question 2):</h3>
# <p>From the above analysis, we get a p-value greater than 0.05 and hence we accept the null hypothesis, which <strong>means that there is no significant difference between the population mean of total purchases in the USA and the population mean of total purchases in the rest of the world. In other words, there is no significant evidence that the total purchases of the USA are higher than those of the rest of the world; based on our analysis, the population means are the same.</strong></p>
# <p><strong>Business Objective-</strong> From our analysis, we can infer that the population average of total purchases in the USA is no different from the rest of the world, and <strong>this leads us to conclude that the market need not focus only on the USA but should also focus on the rest of the world, since both have the same population mean of total purchases. Hence, the marketing strategy should not be confined to the USA; equal effort should be put into the rest of the world. Total purchases in the USA might still differ greatly from some other specific country, but when comparing the USA versus the rest of the world we found no conclusive evidence that the USA has more total purchases. Thus the marketing campaign should supply products to the rest of the world just as well as to the USA, since the population means of total purchases are almost the same.</strong></p>
# <h3>Question 3) Test if the customers who spent above average on buying 'Gold Products' in last 2 years have more number of StorePurchases or not?</h3>
# +
# Creating a new column year in the data:
year = []
for date in date_customer:
year.append('20' + str(date[-2:]))
data['Year'] = year
# 2014 is the current year, hence we want to consider the customers who bought gold in the years 2014 and 2013.
data_2013 = data[data['Year']=='2013'] # data only of year 2013
data_2014 = data[data['Year']=='2014'] # data only of year 2014
data_2_years = pd.concat([data_2013, data_2014]) # contains data only for 2 years
# Computing the average amount of GoldProducts since we want to create a feature which contains 2 categories as follows:
avg_amnt_gold = np.mean(data_2_years['MntGoldProds'])
data_2_years['AboveAvgGold'] = np.where(data_2_years['MntGoldProds'] > avg_amnt_gold, 1, 0)
# Further, we want to see if numstorepurchases is affected by the people who buy gold.
data_ques_3 = data_2_years[['AboveAvgGold', 'NumStorePurchases']]
t_test(data_ques_3, 'AboveAvgGold', 'NumStorePurchases')
# -
# Number of people who spend above average:
len(data_ques_3[data_ques_3['AboveAvgGold']==1])
# Number of people who spend below average:
len(data_ques_3[data_ques_3['AboveAvgGold']==0])
# <h3>Explanation (Question 3):</h3>
# <p>From the above analysis, we can observe that the null hypothesis has been rejected, which means that the population mean of 'NumStorePurchases' differs between the two groups of customers, i.e. those who spend above average on gold products and those who spend below average. From the table, the people who spent above average on gold products over the past 2 years (represented by Category 1) have a larger mean number of store purchases, which means people who spend above average on gold tend to make more purchases from the store, i.e. they prefer to buy in store.</p>
# <p><strong>Business Objective:</strong> We can count the customers who spend above average on gold products and, based on that, increase or decrease the supply of gold products in the stores: customers who spend above average on gold prefer to purchase it directly from stores, so for them the in-store supply should be increased. In our case, however, the number of customers who spend below average is more than twice the number who spend above average, so the in-store supply should be reduced and the supply of gold products should instead be increased for the other channels, i.e. 'NumDealsPurchases', 'NumWebPurchases' and 'NumCatalogPurchases', since we have more customers who spend less than the average amount.</p>
# <h3>Question 4)Test if Married PHd people spend more on fish? Also find the signifcant predictors which affect the 'MntFishProducts'.</h3>
# +
# Preparing the data for Question 4, which involves the married PhD candidates:
education = list(data['Education'])
marital_stats = list(data['Marital_Status'])
married_phd = [] # stores the value either 1 or 0
# Creating a list which contains 1 if customer is married and Phd else 0:
for idx in range(len(data)):
if education[idx]=='PhD' and marital_stats[idx]=='Married':
married_phd.append(1)
else:
married_phd.append(0)
# creating the data for question 4:
data['married_phd'] = married_phd
data_ques_4 = data[['MntFishProducts', 'married_phd']]
# Testing if Married PhD people spend more on Fish or not:
# Category 1-> Married Phd Customers:
# Category 0-> Other Customers:
t_test(data_ques_4, 'married_phd', 'MntFishProducts')
# -
# <h3>Explanation (Question 4):</h3>
# <p>The t-test result is significant, since the p-value is less than alpha (0.05), which means the population mean of the amount spent on fish products differs between married PhD customers and the other customers. <strong>But we wanted to test whether married PhD people, i.e. customers belonging to Category 1, spend more on fish products. To our surprise, the above table shows that customers belonging to Category 0 have a larger sample mean of the amount spent on fish products than customers belonging to Category 1. Thus, based upon the data, there is no conclusive evidence that married PhD customers spend more on fish; if anything, the data suggest the other customers spend more on fish products, since their mean is higher.</strong></p>
# <p><strong>Business Objective: Thus, from our results, it seems that the amount spent on fish products depends mainly on the customer's personal preference, irrespective of their marital status and education.</strong> Had there been a relationship between 'MntFishProducts' and 'Marital_Status'/'Education', the advertising team could have used it to target customers from that specific background, i.e. married PhD customers; but since no such relationship exists, the advertising team cannot target any such group. </p>
# <h4>Significant Predictors of 'MntFishProducts':</h4>
# Regression Model Trained on Original Numerical features i.e. which contain outliers:
reg_model(cmp_data, num_feat, cat_feat, ['MntFishProducts'], 0.3)
# Regression Model Trained on Transformed Numerical features i.e. which do not contain outliers:
reg_model(cmp_data, trans_num_feat, cat_feat, ['MntFishProducts_log_new'], 0.3)
# <h3>Analysis:</h3>
# <p>Two regression models were trained: the first one includes the features as they are, without treating their outliers, and the second one is trained on the transformed numerical features which contain no outliers. The R-squared of the 1st model is 0.534, whereas the R-squared of the 2nd model is 0.634, i.e. the explainability and predictability of the target feature 'MntFishProducts' has increased by 0.10. This increase in R-squared makes us confident that the 2nd model is much better than the 1st, so we now consider the features which are statistically significant in predicting the target feature and rank their importance based upon their p-values.</p>
# <h3>Significant Predictors of 'MntFishProducts'</h3>
# <p><ul>
# <strong>
# <li>MntFruits_log_new</li>
# <li>MntMeatProducts_log_new</li>
# <li>MntSweetProducts_log_new</li>
# <li>MntGoldProds_log_new</li>
# <li>Education</li>
# <li>Kidhome</li>
# <li>Teenhome</li>
# <li>AcceptedCmp5</li>
# <li>AcceptedCmp1</li>
# <li>Response</li>
# </strong>
# </ul></p>
# <h3>Question 5) Is there any significant Relationship which exist between Geographical Regions and Success of Campaign?</h3>
# Using the Chi-Square Test of Independece:
categorical_dependency('Country')
# <h3>Explanation (Ques 5):</h3>
# <p>From the above table, which contains the chi-square values and p-values, we can observe that the p-values with respect to the features 'AcceptedCmp3', 'AcceptedCmp4', 'AcceptedCmp5', 'AcceptedCmp1', 'AcceptedCmp2' and 'Response' <strong>are greater than 0.05, i.e. the level of significance, and hence we accept the null hypothesis, which means there is no dependence between the geographical regions and the above-mentioned features that correspond to the success of a campaign.</strong></p>
# <p><strong>Business Objective:</strong> Thus, from the above Test of Independence (used to observe the dependence of 2 categorical features), we do not have any conclusive evidence that the success of a campaign depends on Geographical Region or Country. <strong>Hence this means the campaign instructors and decision makers should not exhaust all of their resources on campaigning for a particular country, rather they should allocate the resources for campaigning simultaneously on different countries with a solid plan, since the Country is not affecting the success of the Campaign.</strong></p>
# <h2>Section 3: Visualizations: </h2>
# <h3> 1) Most Successful Marketing Campaign:</h3>
# +
# Features denoting the success of a campaign:
cmp_feats = [feat for feat in cat_feat if 'Accepted' in feat]
cmp_feats.append('Response')
cmp_feats
# +
# Computing the number of customers who got themselves registered in respective campaigns:
success_cmp = {}
for feat in cmp_feats:
freq = len(data[data[feat]==1])
success_cmp[feat] = freq
# creating the values to be passed while creating a bar chart:
name_of_cmp = list(success_cmp.keys()) # name of campaign
num_cust = list(success_cmp.values()) # number of customers in each campaign
# plotting a bar chart:
plt.figure(figsize=(9, 5))
plt.grid(True, alpha=0.4)
plt.bar(x=name_of_cmp, height=num_cust,width=0.4, color='pink')
plt.ylabel('Number of Customers Registered', fontsize=14, color='darkblue')
plt.xlabel('Marketing Campaigns', fontsize=14, color='darkblue')
plt.title('Success of Campaign', fontsize=16, color='darkblue')
# -
# <h3>Analysis:</h3>
# <p>From the above bar chart, we can clearly observe that the <strong>last campaign, i.e. 'Response', registered the most customers, around 343,</strong> which is almost double that of the previous campaigns.</p>
# <h3>2) Average Customer for our market:</h3>
# <strong>An average customer of the market pays:
# <ol>
# <li>Less than 100 dollars for Wines.</li>
# <li>Less than 10 dollars for Fruits.</li>
# <li>Less than 50 dollars for MntMeatProducts.</li>
# <li>Less than 10 dollars for Fish.</li>
# <li>Less than 10 dollars for Sweet Products.</li>
# <li>Less than 20 dollars for Gold Products.</li>
# </ol>
# </strong>
# Refer to the analysis done in Section 1:
# <h3>3) Which Products are performing the best?</h3>
# +
# Plotting the Cumulative Density Functions for each of the product:
# Getting the products out of all the numerical features:
products = num_feat[1:7]
products
# Storing the Cumulative Distributed Values for each of the product and storing it in a dictionary:
cdf_dict = {}
for feat in products:
cdf_dict[feat] = cum_density_func(data, feat)
# for plotting them, we convert the keys and values into list:
values_dict = list(cdf_dict.values())
keys_dict = list(cdf_dict.keys())
colors = ['orange', 'green', 'darkred', 'yellow', 'pink', 'red']
# plotting:
plt.figure(figsize=(20, 10))
for i in range(len(values_dict)):
plt.subplot(2, 3, i+1);
plt.plot(values_dict[i], np.arange(0.01, 1, 0.01), color='k', linewidth=3)
plt.grid(True, alpha=0.4)
plt.fill_between(values_dict[i], np.arange(0.01, 1, 0.01), color=colors[i])
plt.xlabel(keys_dict[i] + ' (in $)')
plt.ylabel('Percentiles of Customers')
plt.title('Cum. Density Func.')
# -
# From the above cumulative density functions of all the products, we can see that <strong>Wines is performing quite well, since 60% of people pay less than 300 dollars.</strong> Whereas if you observe <strong>'MntGoldProds', which is generally expensive, the CDF shows that 60% of the people pay less than 40 dollars. Thus, gold products are not being sold according to their value: they are generally expensive, yet most people pay very little, i.e. they are not interested in spending more on them.</strong> Similar conclusions can be drawn for the other products, but deciding based only on how much most customers pay does not give us the underlying intuition. <strong>So, what I'll do is take the transformed numerical features, which do not contain any outliers, compute the average amount a customer pays for each product, and then compute the proportion of customers paying above and below that average. This analysis is more informative because it tells us, for each product, the average price customers are willing to pay and for which products customers are willing to spend above that average. Through this procedure we can get an idea of the best-performing products in the market.</strong>
# +
# Considering the transformed features which do not have any outliers within them:
prod_trans = ['MntWines_new', 'MntFruits_log_new', 'MntMeatProducts_log_new',
'MntFishProducts_log_new','MntSweetProducts_log_new', 'MntGoldProds_log_new']
prod_trans
# +
# Computing the proportion of people buying a product above its avg price and below its avg price:
stats = []
for feat in prod_trans:
avg = np.mean(data[feat])
above_avg_prop = len(data[data[feat] > avg])/len(data)*100
below_avg_prop = len(data[data[feat] < avg])/len(data)*100
stats.append((above_avg_prop, below_avg_prop))
result = pd.DataFrame(stats, columns=['Above Average(%)', 'Below Average(%)'], index=prod_trans)
result
# +
# Visualizing it through a bar chart:
labels = ['', 'Wines', 'Fruits', 'Meat', 'Fish', 'Sweet', 'Gold']
plt.figure(figsize=(10, 6))
x = np.arange(len(result))
ax = plt.subplot(111)
plt.grid(True, alpha=0.5)
ax.bar(x-0.3, list(result['Above Average(%)']), width=0.3, align='center', label='Above Average', color='darkred')
ax.bar(x, list(result['Below Average(%)']), width=0.3, align='center',label='Below Average', color='darkblue')
ax.set_xticklabels(labels)
ax.set_ylim(0, 65)
ax.legend(loc='upper center')
ax.set_xlabel('Products', fontsize=14)
ax.set_ylabel('Customers(%)', fontsize=14)
ax.set_title('Customers paying above or below the average price of product', fontsize=15)
# -
# <h3>Analysis of Products:</h3>
# <p><ul>
# <li>Wines, Fruits and Sweet Products are sold more often at a below-average price, which means most customers buy these products for less than the average amount. Although the percentages look fairly similar, i.e. the proportions of customers paying below and above the average price are close, a comparison of the numbers shows that for <strong>Wines, Fruits and Sweet Products the share sold at a below-average price is more prevalent</strong>. Hence, these products seem to underperform.</li>
# <li>Thus, the products which <strong>seem to be performing really well are MeatProducts, FishProducts and GoldProducts</strong>, since most customers pay above the average price for them.</li>
# </ul></p>
# <h3>4) Which channel is underperforming?</h3>
# +
# Total number of Channels which are there through which a customer buys a product:
channels = ['NumDealsPurchases','NumWebPurchases', 'NumCatalogPurchases', 'NumStorePurchases']
channels
# +
# Total Number of Products sold through each of the channels:
# This indicates the preference of the customers, i.e. through which channel most of the
# customers buy their products.
total_prods_sold = {}
for channel in channels:
total_prods_sold[channel] = sum(data[channel])
tot_prods_all_channels = sum(total_prods_sold.values())
for key in channels:
total_prods_sold[key] = (sum(data[key])/tot_prods_all_channels)*100
total_prods_sold
# Plotting a pie chart:
labels = ['Deals', 'Web', 'Catalog', 'Store']
values = list(total_prods_sold.values())
explode = (0.2, 0, 0.1, 0)
fig1, ax1 = plt.subplots(figsize=(5, 6))
ax1.pie(values, explode=explode, labels=labels, autopct='%1.1f%%', shadow=True, startangle=180)
plt.title('Proportion Of Products Sold at Each Channel')
plt.show()
# -
# <h3>Analysis of Channels:</h3>
# <p>The pie chart shows the proportion of products sold through each of the channels. It is clear that <strong>people buy most frequently through store purchases, i.e. 39% of the total products are purchased from the store.</strong> Further, the <strong>smallest proportions of products are sold through deals purchases (only 15.6% of total products) and catalog purchases (17.9% of total products)</strong>. Thus, <strong>'DealsPurchase' and 'CatalogPurchase' are the underperforming channels.</strong></p>
| 59,357 |
/Convolution Neural Networks/Week 4/Art_Generation_with_Neural_Style_Transfer_v3a.ipynb
|
2e2ee43ed939a76249668d6adb66e103c099b362
|
[] |
no_license
|
LeonardoMonte/DeepLearningSpecialization
|
https://github.com/LeonardoMonte/DeepLearningSpecialization
| 0 | 0 | null | null | null | null |
Jupyter Notebook
| false | false |
.py
| 772,114 |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.15.2
# kernelspec:
# display_name: Python [conda env:pyvizenv_2] *
# language: python
# name: conda-env-pyvizenv_2-py
# ---
# # Time Series Basics
import pandas as pd
from pathlib import Path
# %matplotlib inline
# Read the Amazon stock prices CSV file as a time series DataFrame
amazon_data = Path("../Resources/amazon.csv")
amazon_stock = pd.read_csv(amazon_data, parse_dates=True, index_col="Date")
#amazon_stock.index = pd.to_datetime(amazon_stock.index)
amazon_stock.head()
# ### Slice Time Series Data
# Select all rows from September 2018
from_sept_amazon_stock = amazon_stock.loc['2018-09']
from_sept_amazon_stock.tail()
# Select all rows from September through October for 2018
from_sept_oct_amazon_stock = amazon_stock.loc['2018-09':"2018-10"]
from_sept_oct_amazon_stock.head()
# ### Plot Time Series Data
# Plot the closing prices using a line plot
amazon_stock.Close.plot()
# ### Resample Time Series Data
# Resample the closing prices to weekly and take the mean
amazon_stock_weekly = amazon_stock.resample('W').mean().dropna()
amazon_stock_weekly.head()
# Plot the weekly average closing prices as a line chart
amazon_stock_weekly['Close'].plot()
# # Optional Challenge
# +
# Use resample to get the highest closing price per month
# YOUR CODE HERE!
# Create a bar chart of the result
# YOUR CODE HERE!
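# -

# A hedged sketch of one possible way to complete the optional challenge, reusing the amazon_stock DataFrame loaded above (an assumption about the intended answer, not the official solution).

# +
# Use resample to get the highest closing price per month
monthly_high = amazon_stock['Close'].resample('M').max().dropna()

# Create a bar chart of the result
monthly_high.plot(kind='bar', title='Highest Monthly Closing Price')
# -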
| 1,520 |
/ipl_analysis/.ipynb_checkpoints/starter-checkpoint.ipynb
|
7aa9e94d6ed731a1c9c9a293d0419e7960344484
|
[] |
no_license
|
MichelAtieno/Test-Practice
|
https://github.com/MichelAtieno/Test-Practice
| 0 | 0 | null | null | null | null |
Jupyter Notebook
| false | false |
.py
| 31,383 |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.15.2
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] colab_type="text" id="elmONJBjB1wZ"
# # Topic Modeling and Gibbs Sampling
# + [markdown] colab_type="text" id="ke8yIh3yC-yw"
# The task: describe a text via a distribution of weights over some fixed set of topics (tags). For example, for the tag set Politics, Military Battles, Sport, Internet, Drama, represent the novel "War and Peace" as the vector (0.3, 0.2, 0, 0, 0.5), and a newspaper article about doping in cycling as the vector (0.1, 0, 0.7, 0, 0.2).
#
# Why this is useful: with a vector representation of texts, texts can be compared and similar ones can be recommended.
#
# The setting: only the collection of texts and the number of topics are given.
# + [markdown] colab_type="text" id="XJdIgvb5B1wd"
# ## A Bit of Theory
# + [markdown] colab_type="text" id="mipLvBaZB1we"
# We represent a text as an unordered set of words (bag-of-words model). Suppose there are K tags, and for each tag a distribution $\phi_k$ over the list of all possible words (a vocabulary of N words) has been chosen. Essentially, each $\phi_k$ is a vector of length N of non-negative values summing to 1. The vectors $\phi_k$ are independent and modeled by a Dirichlet distribution $Dir(\beta)$. Now, to assemble a text d of $n$ words, we proceed by the following scheme:
#
#
# * choose a tag distribution $\theta_d$. Again, $\theta_d$ is a vector of length K of non-negative values summing to 1, so it is natural to take $\theta_d \sim Dir(\theta | \alpha)$
#
# * For i from 1 to n:
#   * choose a tag $z_i$ according to the distribution $\theta_d$
#   * choose a word $w_i$ from the distribution for that tag, i.e. $w_i \sim \phi_{z_i}$
#   * add the word $w_i$ to the text.
#
# The resulting model is called the LDA (Latent Dirichlet Allocation) model. The scheme above defines the joint distribution of the hidden and observed parameters over all texts of a corpus of size M as:
#
#
# $p(\textbf{w}, \textbf{z}, \theta, \phi | \alpha, \beta) = Dir(\theta | \alpha) Dir(\phi|\beta)Cat(\textbf{z}|\theta)Cat(\textbf{w}|\phi_z)$.
#
# Here $\textbf{w}$ and $\textbf{z}$ denote the vectors of words and tags over all texts, $\theta$ is the collection of $\theta_d$ for each document (an $M\times K$ matrix), and $\phi$ is the collection of $\phi_k$ for each tag (a $K\times N$ matrix).
#
# + [markdown] colab_type="text" id="D9yKUGWnXCYm"
# 
# + [markdown] colab_type="text" id="hiXKW7NQWeO8"
# Our task is to recover the distribution $p(\textbf{z}, \theta, \phi | \textbf{w}, \alpha, \beta)$.
#
#
# Let's simplify life a little and set ourselves the task of recovering the distribution $p(\textbf{z} | \textbf{w}, \alpha, \beta) = \int\int p(\textbf{z}, \theta, \phi | \textbf{w}, \alpha, \beta)\textrm{d}\theta\textrm{d}\phi$.
#
# At this point the Gibbs sampling algorithm comes to the rescue. Recall that to estimate a set of parameters $\textbf{z} = (z_1, z_2, ..., z_m)$ the following scheme is used:
#
# $z_i^{(t)} \sim p(z_i^{(t)}\ | \ z_1=z_1^{t}, ..., z_{i-1}=z_{i-1}^{t},
# z_{i+1}=z_{i+1}^{t-1}, ..., z_{m}=z_{m}^{t-1})$.
# + [markdown] colab_type="text" id="aQeReuG_aDI7"
# The conditional distributions are derived as follows. First, note that
#
# $p(z_i|\textbf{z}_{\hat{i}}, \textbf{w}, \alpha, \beta) = \frac{p(z_i,\textbf{z}_{\hat{i}}, \textbf{w}| \alpha, \beta)}{p(\textbf{z}_{\hat{i}}, \textbf{w}| \alpha, \beta)} = \frac{p(\textbf{z}, \textbf{w}| \alpha, \beta)}{p(\textbf{z}_{\hat{i}}, \textbf{w}_{\hat{i}}| \alpha, \beta) p(w_i|\alpha, \beta)}$.
#
# Here $\textbf{z}_{\hat{i}}$, $\textbf{w}_{\hat{i}}$ are the vectors without the $i$-th component.
# + [markdown] colab_type="text" id="yA0r2c_ob2c9"
# Next, we expand:
#
# $p(\textbf{z}, \textbf{w}| \alpha, \beta) = \int\int p(\textbf{z}, \textbf{w}, \theta, \phi| \alpha, \beta)\textrm{d}\theta\textrm{d}\phi = \int\int Dir(\theta | \alpha) Dir(\phi|\beta)Cat(\textbf{z}|\theta)Cat(\textbf{w}|\phi_z)\textrm{d}\theta\textrm{d}\phi =
# \int Dir(\theta | \alpha) Cat(\textbf{z}|\theta)\textrm{d}\theta \int Dir(\phi|\beta)Cat(\textbf{w}|\phi_z)\textrm{d}\phi$
#
# and we find that both integrals in the last expression can be computed analytically. For example, the first one:
#
# $\int Dir(\theta | \alpha) Cat(\textbf{z}|\theta)\textrm{d}\theta = \prod\limits_d \int Dir(\theta_d | \alpha) Cat(\textbf{z}_d|\theta_d)\textrm{d}\theta_d = \prod\limits_d \int \frac{1}{B(\alpha)}\prod\limits_k \theta_{d, k}^{\alpha-1}\prod\limits_i \theta_{d, z_i}\textrm{d}\theta_d = \prod\limits_d\frac{1}{B(\alpha)}\int\prod\limits_k \theta_{d, k}^{n_{d, k} + \alpha - 1}\textrm{d}\theta_d = \prod\limits_d \frac{B(n_{d,\cdot} + \alpha)}{B(\alpha)}$.
#
# Here $n_{d,k}$ is the number of tags $k$ in text $d$, and $n_{d,\cdot}$ is the vector of length $K$ made of these quantities.
#
# Similarly, the second integral is
# $\int Dir(\phi|\beta)Cat(\textbf{w}|\phi_z)\textrm{d}\phi = \prod\limits_k \frac{B(n_{k,\cdot} + \beta)}{B(\beta)}$,
#
# where $n_{k,\cdot}$ is the vector of length $N$ of word counts within tag $k$.
#
# We obtain:
#
# $p(\textbf{z}, \textbf{w}| \alpha, \beta) = \prod\limits_d \frac{B(n_{d,\cdot} + \alpha)}{B(\alpha)} \prod\limits_k \frac{B(n_{k,\cdot} + \beta)}{B(\beta)}$.
#
# Now
# $p(z_i|\textbf{z}_{\hat{i}}, \textbf{w}, \alpha, \beta) \propto \prod\limits_d \frac{B(n_{d,\cdot} + \alpha)}{B(n_{d,\cdot}^{\hat{i}} + \alpha)} \prod\limits_k \frac{B(n_{k,\cdot} + \beta)}{B(n_{k,\cdot}^{\hat{i}} + \beta)}$.
#
# The sign $\propto$ denotes proportionality up to the common factor $p(w_i|\alpha, \beta)$. The vectors $n_{d,\cdot}^{\hat{i}}$ and $n_{k,\cdot}^{\hat{i}}$ are obtained from $n_{d,\cdot}$ and $n_{k,\cdot}$ by dropping $z_i$.
#
# The expression simplifies further by writing the beta function in terms of gamma functions. Recall that
# $B(x_1, ..., x_m) = \frac{\Gamma(x_1)\cdot...\cdot\Gamma(x_m)}{\Gamma(x_1 + ... + x_m)}$, and also $\Gamma(n) = (n-1)\Gamma(n-1)$. We get:
#
# $p(z_i=k |\textbf{z}_{\hat{i}}, \textbf{w}, \alpha, \beta) \propto (n_{d_i, k}^{\hat{i}} + \alpha_k) \frac{n_{k, w_i}^{\hat{i}} + \beta_{w_i}}{\sum\limits_{w}(n_{k, w}^{\hat{i}} + \beta_{w})}$.
#
# + [markdown] colab_type="text" id="NiYEQ_IkZUim"
# From this point we can fully assemble the algorithm for sampling from the density $p(\textbf{z}| \textbf{w}, \alpha, \beta)$. Introduce the notation: $n_k$ is the number of words assigned to tag $k$, $W$ is the total number of words in the corpus, and $\beta_{sum} = \sum\limits_w\beta_w$.
#
# Algorithm:
#
# * set up the counters $n_{k, w}$, $n_{d, k}$, $n_k$
# * assign tags to words at random and update the counters $n_{k, w}$, $n_{d, k}$, $n_k$
# * until we converge to the stationary regime:
#   * for each $i$ from 1 to $W$:
#     * for each $k$ from 1 to $K$:
#       * $I = I\{z_i = k\}$ (indicator)
#       * compute $p_k = (n_{d_i, k} + \alpha_k - I) \frac{n_{k, w_i} + \beta_{w_i} - I}{n_k + \beta_{sum} - I}$
#     * sample a new $z_i$ from the resulting distribution $(p_1, ..., p_K)$
#     * update the counters to account for the new value of $z_i$
#
# In practice it is convenient to implement it like this:
#
# * set up the counters $n_{k, w}$, $n_{d, k}$, $n_k$, filled with zeros
# * assign tags to words at random and update the counters $n_{k, w}$, $n_{d, k}$, $n_k$
# * until we converge to the stationary regime:
#   * for each $i$ from 1 to $W$:
#     * $n_{d_i, z_i} \mathrel{-}= 1$, $n_{z_i, w_i} \mathrel{-}= 1$, $n_{z_i} \mathrel{-}= 1$
#     * for each $k$ from 1 to $K$:
#       * compute $p_k = (n_{d_i, k} + \alpha_k) \frac{n_{k, w_i} + \beta_{w_i}}{n_k + \beta_{sum}}$
#     * sample a new $z_i$ from the resulting distribution $(p_1, ..., p_K)$
#     * $n_{d_i, z_i} \mathrel{+}= 1$, $n_{z_i, w_i} \mathrel{+}= 1$, $n_{z_i} \mathrel{+}= 1$
#
#
#
#
# + [markdown] colab_type="text" id="eaV85RF1gd6l"
# Having recovered the distribution of $\textbf{z}$, we can estimate $\theta$ and $\phi$, which we briefly set aside. They can be estimated, for example, via the expectation of the posterior distributions. Derive the formulas yourself!
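#
# For reference, a hedged aside (the exercise above asks you to derive these yourself): with the Dirichlet-multinomial conjugacy used throughout, the posterior means come out as smoothed, normalized counters,
#
# $\hat{\theta}_{d,k} = \frac{n_{d,k} + \alpha_k}{\sum\limits_{k'}(n_{d,k'} + \alpha_{k'})}, \qquad \hat{\phi}_{k,w} = \frac{n_{k,w} + \beta_w}{n_k + \beta_{sum}}$,
#
# i.e. exactly the quantities that can be read off the counters $n_{d,k}$, $n_{k,w}$, $n_k$ maintained by the sampler described above.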
# + [markdown] colab_type="text" id="K13WQakft3kk"
# References:
#
# http://u.cs.biu.ac.il/~89-680/darling-lda.pdf
#
# https://www.cs.cmu.edu/~mgormley/courses/10701-f16/slides/lecture20-topic-models.pdf
# + [markdown] colab_type="text" id="S1M8TVaPiIwd"
# Let's move on to practice.
# + [markdown] colab_type="text" id="KKGHWSSiB1we"
# ## Dataset
#
# Let's take the popular [20 Newsgroups](http://qwone.com/~jason/20Newsgroups/) dataset, built into the ```sklearn``` package. It consists of ~20K texts classified into 20 categories and is split into ```train``` and ```test```. To load it we use the ```fetch_20newsgroups``` module, specifying in the parameters that the text metadata should not be loaded:
# + colab={"base_uri": "https://localhost:8080/", "height": 51} colab_type="code" id="ghK1srgpB1wf" outputId="a64b1478-d337-44eb-8413-841ea0f70888"
import numpy as np
from sklearn.datasets import fetch_20newsgroups
newsgroups_train = fetch_20newsgroups(subset='train', remove=('headers', 'footers', 'quotes'))
# + [markdown] colab_type="text" id="_BSEchmRB1wh"
# Let's print the list of text categories:
# + colab={"base_uri": "https://localhost:8080/", "height": 363} colab_type="code" id="lR3knXluB1wi" outputId="d6b41c1a-21d8-4baf-de17-f6c8f7ce8f1f"
newsgroups_train.target_names
# + [markdown] colab_type="text" id="unhqlZciB1wl"
# The ```target``` attribute stores the category numbers for the texts in the training set:
# + colab={"base_uri": "https://localhost:8080/", "height": 34} colab_type="code" id="PNjAq0D4B1wl" outputId="97f526ec-c9f7-466d-aee7-2df3428b26a9"
newsgroups_train.target[:10]
# + [markdown] colab_type="text" id="OjzuWGdBB1wp"
# The texts themselves are accessed via the ```data``` attribute. Let's print the text and the category of a random example from the training set:
# + colab={"base_uri": "https://localhost:8080/", "height": 259} colab_type="code" id="etlhxjk0B1wq" outputId="56847531-dd5f-4ad8-dcac-28c83b2a4c4b"
n = 854
print('Topic = {0}\n'.format(newsgroups_train.target_names[newsgroups_train.target[n]]))
print(newsgroups_train.data[n])
# + [markdown] colab_type="text" id="fLheZ8pTB1wx"
# ## Vector Representation of Text
#
# We represent a text as a vector of indicators of which words from some vocabulary occur in the text. This is the simplest bag-of-words model.
#
# Let's build the vocabulary from our collection of texts. For this we use the ```CountVectorizer``` module:
# + colab={"base_uri": "https://localhost:8080/", "height": 259} colab_type="code" id="-eXBAq3SB1wy" outputId="37dca253-a58a-4390-af8f-43a8d751366d"
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_extraction.stop_words import ENGLISH_STOP_WORDS
vectorizer = CountVectorizer(lowercase=True, stop_words=ENGLISH_STOP_WORDS,
analyzer='word', binary=True, min_df=8, max_df=.04)
vectorizer.fit(newsgroups_train.data)
# + [markdown] colab_type="text" id="WYt0K78cB1w0"
# The number of indexed words:
# + colab={"base_uri": "https://localhost:8080/", "height": 34} colab_type="code" id="RjjRR9cTB1w1" outputId="bd25d0e9-8c2d-437f-e49e-207ca482cc1b"
len(vectorizer.vocabulary_)
# + [markdown] colab_type="text" id="rDTu6nCHB1xJ"
# The indexed words and their indices:
# + colab={"base_uri": "https://localhost:8080/", "height": 1000} colab_type="code" id="0TRxTcoeB1xK" outputId="56f0a281-46ea-488d-9763-0e288c186069"
vectorizer.vocabulary_
# + [markdown] colab_type="text" id="Ki7CwruhB1xN"
# The index of, for example, the word car:
# + colab={"base_uri": "https://localhost:8080/", "height": 34} colab_type="code" id="N2XVuQu3B1xP" outputId="dc04f83b-482a-4598-cd3e-71f2fdcaa2fe"
vectorizer.vocabulary_.get('car')
# + [markdown] colab_type="text" id="H9ckeTs2B1xT"
# And now let's transform a string into a vector:
# + colab={} colab_type="code" id="DKb4MKQBB1xU"
text = 'I was wondering if anyone out there could enlighten me on this car I saw'
x = vectorizer.transform([text])
# + [markdown] colab_type="text" id="ULWh4DvrB1xX"
# What type is the object that ```x``` refers to?
# + colab={"base_uri": "https://localhost:8080/", "height": 34} colab_type="code" id="LSi_b4mIB1xX" outputId="8c3bbb1c-13da-4dd3-e12a-1a87058f54e2"
type(x)
# + [markdown] colab_type="text" id="VdVtA7GmB1xZ"
# A sparse matrix!
# + [markdown] colab_type="text" id="ugUSIc9PB1xa"
# ### An Aside on Sparse Matrices
# + [markdown] colab_type="text" id="OB_zLEVBB1xa"
# The list of the non-zero elements of the matrix:
# + colab={"base_uri": "https://localhost:8080/", "height": 34} colab_type="code" id="5ygEpbSSB1xb" outputId="def34294-ba6a-481f-8346-5502b1279db6"
x.data
# + [markdown] colab_type="text" id="tejzzkvFB1xd"
# The row and column indices of the non-zero elements:
# + colab={"base_uri": "https://localhost:8080/", "height": 51} colab_type="code" id="iYlLQ9WaB1xe" outputId="cf89901c-a082-4fc8-959c-f04dcf06eeab"
x.nonzero()
# + [markdown] colab_type="text" id="43HhCWNTB1xg"
# Conversion to an ndarray object (it is in this form that sparse matrices can be passed to functions of, for example, the NumPy library):
# + colab={"base_uri": "https://localhost:8080/", "height": 34} colab_type="code" id="1U7rGrAYB1xh" outputId="5a9797a7-d8ec-4484-9daa-7f33593110ed"
x.toarray()
# + [markdown] colab_type="text" id="rNcjGOXGB1xk"
# Back to the vocabulary. Let's decode the vector ```x``` into a list of words:
# + colab={"base_uri": "https://localhost:8080/", "height": 34} colab_type="code" id="_A38zVwvB1xk" outputId="4e516746-8f3e-4cad-8c20-64d0ee8fdc8e"
vectorizer.inverse_transform(x)
# + [markdown] colab_type="text" id="P4TuPHK1B1xm"
# The word ```I``` has disappeared. The point is that by default ```CountVectorizer``` discards tokens shorter than 2 characters, as indicated by the parameter ```token_pattern='(?u)\\b\\w\\w+\\b'```.
# + [markdown] colab_type="text" id="qZYK2sG_B1xn"
# Let's convert the entire training set of texts into a set of vectors, obtaining the matrix ```X_train```:
# + colab={"base_uri": "https://localhost:8080/", "height": 34} colab_type="code" id="Uhj_aRW7B1xn" outputId="76e48a96-b2ef-4512-a807-026daf579ece"
X_train = vectorizer.fit_transform(newsgroups_train.data)
X_train.shape
# + [markdown] colab_type="text" id="DeYz7ZxuB1xs"
# On the usefulness of sparse matrices: the ratio of the number of non-zero elements to all elements of the matrix ```X_train```:
# + colab={"base_uri": "https://localhost:8080/", "height": 34} colab_type="code" id="lkc9UFcTB1xt" outputId="7eba71e7-52f1-4887-d02a-fde0418ceaf4"
X_train.nnz / np.prod(X_train.shape)
# + [markdown] colab_type="text" id="i9ZpBC7Sgauk"
#
# Task: run the LDA model with Gibbs sampling with the number of tags equal to 20. Print the top-10 words for each tag. Relate the resulting tags to the tags from the dataset and draw conclusions.
# -
# # The vocabulary already has optimized dimensions (see above)
# + colab={} colab_type="code" id="Rn_580PkLCCj"
def LDA(X_train, a, b, n_top, n_iter):
    # Counters: topic-word counts, document-topic counts, and total words per topic
    n_k_w = np.zeros((n_top, X_train.shape[1]))
    n_d_k = np.zeros((X_train.shape[0], n_top))
    n_k = np.zeros(n_top)
    # One (doc, word) pair per non-zero entry of the binary BOW matrix
    docs, words = X_train.nonzero()
    # Random initial topic assignment for every word occurrence
    z = np.random.choice(n_top, len(docs))
    for doc, word, k in zip(docs, words, z):
        n_d_k[doc, k] += 1
        n_k_w[k, word] += 1
        n_k[k] += 1
    # Collapsed Gibbs sampling sweeps
    for j in range(n_iter):
        for i in range(len(docs)):
            cur_word = words[i]
            cur_doc = docs[i]
            cur_topic = z[i]
            # Remove the current assignment from the counters
            n_d_k[cur_doc, cur_topic] -= 1
            n_k_w[cur_topic, cur_word] -= 1
            n_k[cur_topic] -= 1
            # Full conditional p(z_i = k | ...) up to normalization
            p = (n_d_k[cur_doc,:]+a) * (n_k_w[:,cur_word] + b[cur_word]) / (n_k + b.sum())
            p /= p.sum()
            # Sample a new topic and restore the counters
            z[i] = np.random.choice(n_top, p = p)
            n_d_k[cur_doc, z[i]] += 1
            n_k_w[z[i], cur_word] += 1
            n_k[z[i]] += 1
    return n_k_w
# -
n_k_w = LDA(X_train,1 * np.ones(20), 1 * np.ones(X_train.shape[1]), 20, 50)
top_ten = np.argsort(n_k_w, axis=1)[:, :-11:-1]
for i in range(20):
doc = np.zeros((1, X_train.shape[1]))
for word in top_ten[i]:
doc[0, word] = 1
print('Topic №{}:\n{}'.format(i+1, ' '.join(vectorizer.inverse_transform(doc)[0])))
newsgroups_train.target_names
# It is clear at a glance that most of the topics can easily be recognized from their key words. These topics are almost always present in the list that comes with the dataset.
| 16,529 |
/markov.ipynb
|
186a9c6b9332be78e4f3af9cdc9935436855d54c
|
[] |
no_license
|
eslrgs/field_notebooks
|
https://github.com/eslrgs/field_notebooks
| 1 | 1 | null | null | null | null |
Jupyter Notebook
| false | false |
.py
| 121,659 |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.15.2
# kernelspec:
# display_name: field_data
# language: python
# name: field_data
# ---
# +
import os
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
import seaborn as sns
from scipy import stats
from IPython.display import display
import striplog
from striplog.markov import Markov_chain
from mpl_toolkits.axes_grid1 import make_axes_locatable
import warnings
warnings.filterwarnings("ignore", category=DeprecationWarning)
from matplotlib.colors import LinearSegmentedColormap
# %matplotlib inline
# -
os.chdir("/Users/euan-soutter/Desktop/Python/Field_Data/Azerbaijan/Data") # Sets working directory
df = pd.read_csv("az_beds.csv",encoding = 'unicode_escape') # Loads data from working directory and assigns name 'df'
df.head() # Displays top 5 rows of df
df.facies_11A.unique() # Displays facies on log
# +
# Converts df to Markov friendly string
def v2s(df):
df = df
df = df[~np.isnan(df)]
df = ["%.0f" % df for df in df]
df = ''.join(map(str,df))
return df
# Assigns data
df4A = v2s(df.facies_4A.values)
df3A = v2s(df.facies_3A.values)
# -
# Makes transition matrix for unique states (facies)
m4A = Markov_chain.from_sequence(df4A, states=['1','2','3','4','5','6','7'],include_self=False)
m3A = Markov_chain.from_sequence(df3A, states=['1','2','3','4','6','7'],include_self=False)
# If chi2 > crit then facies are ordered
print(m4A.chi_squared())
print(m3A.chi_squared())
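# Hedged sketch (assuming chi_squared() returns a namedtuple with chi2 and crit fields,
# as the namedtuple import in the striplog source pasted at the end suggests):
# explicitly compare the statistic against the critical value for each log.
for name, chain in [('4A', m4A), ('3A', m3A)]:
    res = chain.chi_squared()
    print(name, 'ordered' if res.chi2 > res.crit else 'not ordered')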
# Predicts next 10 facies for each log
print(m3A.generate_states(n=10))
print(m4A.generate_states(n=10))
# +
# Plots normalised difference matrix, which shows the facies transitions that skew the data
def plot_norm_diff(self, ax, cmap='RdBu', center_zero=True):
ax=ax
ma = np.ceil(np.max(self.normalized_difference))
if center_zero:
vmin, vmax = -ma, ma
else:
vmin, vmax = None, None
ticks = (np.arange(-10,10,2))
ticklabels = np.arange(-10,10,2)
im = ax.imshow(self.normalized_difference, cmap=cmap, vmin=vmin, vmax=vmax)
cbar = fig.colorbar(im, ax=ax, orientation='vertical',ticks = ticks,shrink=0.22,pad=0.03)
cbar.ax.set_ylabel('z-transitions', rotation=270,labelpad = 10)
cbar.ax.set_yticklabels(ticklabels)
ax.tick_params(axis='x', which='both',
bottom=False, labelbottom=False,
top=False, labeltop=True,)
ax.tick_params(axis='y', which='both',
left=False, labelleft=True,
right=False, labelright=False,)
ticks = np.arange(self.states.size)
ax.set_yticks(ticks)
ax.set_xticks(ticks)
labels = [str(s) for s in self.states]
ax.set_xticklabels(labels)
ax.set_yticklabels(labels)
# Deal with probable bug in matplotlib 3.1.1
ax.set_ylim(reversed(ax.get_xlim()))
fig, axs = plt.subplots(ncols=2,figsize=(8,12))
ax=axs[0]
plot_norm_diff(m4A,ax=axs[0])
ax.set_title('4A', fontweight='bold')
ax=axs[1]
plot_norm_diff(m3A,ax=axs[1])
ax.set_title('3A', fontweight='bold')
plt.tight_layout(w_pad=4)
plt.suptitle('Normalised Difference Matrix\n(higher z-transition = more anomalous transition)', fontweight='bold', y=0.69, x=0.48)
# -
# m4A.plot_norm_diff
# +
# Plots directed graph of facies transitions
# Thicker arrow equals more likely transition (but needs to be checked against transitions from cell above)
def plot_graph(self, ax=None,
figsize=None,
max_size=1000,
directed=True,
edge_labels=False,
draw_neg=False
):
if self.normalized_difference.ndim > 2:
raise MarkovError("You can only graph one-step chains.")
try:
import networkx as nx
except ImportError:
nx = None
if nx is None:
print("Please install networkx with `pip install networkx`.")
return
G = self.as_graph(directed=directed)
e_neg = {(u, v):round(d['weight'],1) for (u, v, d) in G.edges(data=True) if d['weight'] <= -1.0}
e_small = {(u, v):round(d['weight'],1) for (u, v, d) in G.edges(data=True) if -1.0 < d['weight'] <= 1.0}
e_med = {(u, v):round(d['weight'],1) for (u, v, d) in G.edges(data=True) if 1.0 < d['weight'] <= 2.0}
e_large = {(u, v):round(d['weight'],1) for (u, v, d) in G.edges(data=True) if d['weight'] > 2.0}
pos = nx.spring_layout(G)
nx.draw_networkx_nodes(G, pos, ax=ax, node_size=1000, node_color='gold',zorder=0)
nx.draw_networkx_edges(G, pos, ax=ax, edgelist=e_large, width=5, arrowsize=20, splines='curved')
nx.draw_networkx_edges(G, pos, ax=ax, edgelist=e_med, width=2, arrowsize=10)
nx.draw_networkx_edges(G, pos, ax=ax, edgelist=e_small,
width=1,
alpha=0.1,
edge_color='k')
if draw_neg:
nx.draw_networkx_edges(G, pos, ax=ax, edgelist=e_neg,
width=2,
alpha=0.1,
edge_color='w')
if edge_labels:
nx.draw_networkx_edge_labels(G,pos,edge_labels=e_large)
nx.draw_networkx_edge_labels(G,pos,edge_labels=e_med)
labels = nx.get_node_attributes(G, 'state')
nx.draw_networkx_labels(G, pos, labels=labels, ax=ax,
font_size=20,
font_family='arial',
font_color='k')
fig, axs = plt.subplots(ncols=2,figsize=(10,5))
ax=axs[0]
plot_graph(m4A,ax=axs[0])
ax.axis('off')
ax.set_title('4A',fontweight='bold')
ax=axs[1]
plot_graph(m3A,ax=axs[1])
ax.axis('off')
ax.set_title('3A', fontweight='bold')
plt.tight_layout(w_pad=2)
plt.suptitle('Directed Facies Transitions', fontweight='bold', y=1.05, x=0.5, fontsize=16)
# -
# +
# ***** SOURCE DATA *****
"""
Markov chains for the striplog package.
"""
from collections import namedtuple
import numpy as np
import scipy.stats
import matplotlib.pyplot as plt
from .utils import hollow_matrix
class MarkovError(Exception):
pass
def regularize(sequence, strings_are_states=False) -> tuple:
"""
Turn a sequence or sequence of sequences into a tuple of
the unique elements in the sequence(s), plus a sequence
of sequences (sort of equivalent to `np.atleast_2d()`).
Args
sequence (list-like): A list-like container of either
states, or of list-likes of states.
strings_are_states (bool): True if the strings are
themselves states (i.e. words or tokens) and not
sequences of one-character states. For example,
set to True if you provide something like:
['sst', 'mud', 'mud', 'sst', 'lst', 'lst']
Returns
tuple. A tuple of the unique states, and a sequence
of sequences.
"""
if strings_are_states:
if isinstance(sequence[0], str):
seq_of_seqs = [sequence]
else:
seq_of_seqs = sequence
else:
# Just try to iterate over the contents of the sequence.
try:
seq_of_seqs = [list(i) if len(i) > 1 else i for i in sequence]
except TypeError:
seq_of_seqs = [list(sequence)]
# Annoyingly, still have to fix case of single sequence of
# strings... this seems really hacky.
if len(seq_of_seqs[0]) == 1:
seq_of_seqs = [seq_of_seqs]
# Now we know we have a sequence of sequences.
uniques = set()
for seq in seq_of_seqs:
for i in seq:
uniques.add(i)
return np.array(sorted(uniques)), seq_of_seqs
class Markov_chain(object):
"""
Markov_chain object.
TODO
- Integrate into `striplog` or move into own project.
- Pretty transition matrix printing with state names and row/col sums.
- Allow self-transitions. See also this:
https://stackoverflow.com/q/49340520/3381305
- Hidden Markov model?
- 'Joint' Markov model... where you have lithology and bioturbation index
(say). Not sure if this is really a thing, I just made it up.
- More generally, explore other sequence models, eg LSTM.
"""
def __init__(self,
observed_counts,
states=None,
step=1,
include_self=False
):
"""
Initialize the MarkovChain instance.
Args
observed_counts (ndarray): A 2-D array representing the counts
of change of state in the Markov Chain.
states (array-like): An array-like representing the possible states
of the Markov Chain. Must be in the same order as `observed
counts`.
step (int): The maximum step size, default 1.
include_self (bool): Whether to include self-to-self transitions.
"""
self.step = step
self.include_self = include_self
self.observed_counts = np.atleast_2d(observed_counts)
if states is not None:
self.states = np.asarray(states)
elif self.observed_counts is not None:
self.states = np.arange(self.observed_counts.shape[0])
else:
self.states = None
self.expected_counts = self._compute_expected()
return
def __repr__(self):
trans = f"Markov_chain({np.sum(self.observed_counts):.0f} transitions"
states = '[{}]'.format(", ".join("\'{}\'".format(s) for s in self.states))
return f"{trans}, states={states}, step={self.step})"
@staticmethod
def _compute_freqs(C):
"""
Compute frequencies from counts.
"""
epsilon = 1e-12
return (C.T / (epsilon+np.sum(C.T, axis=0))).T
@staticmethod
def _stop_iter(a, b, tol=0.01):
a_small = np.all(np.abs(a[-1] - a[-2]) < tol*a[-1])
b_small = np.all(np.abs(b[-1] - b[-2]) < tol*b[-1])
return (a_small and b_small)
@property
def _index_dict(self):
if self.states is None:
return {}
return {self.states[index]: index for index in range(len(self.states))}
@property
def _state_dict(self):
if self.states is None:
return {}
return {index: self.states[index] for index in range(len(self.states))}
@property
def observed_freqs(self):
return self._compute_freqs(self.observed_counts)
@property
def expected_freqs(self):
return self._compute_freqs(self.expected_counts)
@property
def _state_counts(self):
s = self.observed_counts.copy()
for axis in range(self.observed_counts.ndim - 2):
s = np.sum(s, axis=0)
a = np.sum(s, axis=0)
b = np.sum(s, axis=1)
return np.maximum(a, b)
@property
def _state_probs(self):
return self._state_counts / np.sum(self._state_counts)
@property
def normalized_difference(self):
O = self.observed_counts
E = self.expected_counts
epsilon = 1e-12
return (O - E) / np.sqrt(E + epsilon)
@classmethod
def from_sequence(cls,
sequence,
states=None,
strings_are_states=False,
include_self=False,
step=1,
ngram=False,
):
"""
Parse a sequence and make the transition matrix of the specified order.
**Provide sequence ordered in upwards direction.**
Args
sequence (list-like): A list-like, or list-like of list-likes.
The inner list-likes represent sequences of states.
For example, can be a string or list of strings, or
a list or list of lists.
states (list-like): A list or array of the names of the states.
If not provided, it will be inferred from the data.
strings_are_states (bool): True if the strings are
themselves states (i.e. words or tokens) and not
sequences of one-character states. For example,
set to True if you provide something like:
['sst', 'mud', 'mud', 'sst', 'lst', 'lst']
include_self (bool): Whether to include self-to-self
transitions (default is `False`: do not include them).
step (integer): The distance to step. Default is 1: use
the previous state only. If 2, then the previous-but-
one state is used; but if ngram is true then both
the previous and the previous-but-one are used (and
the matrix is commensurately bigger).
ngram (bool): If True, we compute transitions from n-grams,
so the matrix will have one row for every combination
of n states. You will want to set return_states to
True to see the state n-grams.
return_states (bool): Whether to return the states.
TODO:
- Use `states` to figure out whether 'strings_are_states'.
"""
uniques, seq_of_seqs = regularize(sequence, strings_are_states=strings_are_states)
if states is None:
states = uniques
else:
states = np.asarray(states)
O = np.zeros(tuple(states.size for _ in range(step+1)))
for seq in seq_of_seqs:
seq = np.array(seq)
_, integer_seq = np.where(seq.reshape(-1, 1) == states)
for idx in zip(*[integer_seq[n:] for n in range(step+1)]):
O[idx] += 1
if not include_self:
O = hollow_matrix(O)
return cls(observed_counts=np.array(O),
states=states,
include_self=include_self
)
def _conditional_probs(self, state):
"""
Conditional probabilities of each state, given a
current state.
"""
return self.observed_freqs[self._index_dict[state]]
def _next_state(self, current_state: str) -> str:
"""
Returns the state of the random variable at the next time
instance.
Args
current_state (str): The current state of the system.
Returns
str. One realization of the next state.
"""
return np.random.choice(self.states,
p=self._conditional_probs(current_state)
)
def generate_states(self, n:int=10, current_state:str=None) -> list:
"""
Generates the next states of the system.
Args
n (int): The number of future states to generate.
current_state (str): The state of the current random variable.
Returns
list. The next n states.
"""
if current_state is None:
current_state = np.random.choice(self.states, p=self._state_probs)
future_states = []
for _ in range(n):
next_state = self._next_state(current_state)
future_states.append(next_state)
current_state = next_state
return future_states
def _compute_expected(self):
"""
Try to use Powers & Easterling, fall back on Monte Carlo sampling
based on the proportions of states in the data.
"""
try:
E = self._compute_expected_pe()
except:
E = self._compute_expected_mc()
return E
def _compute_expected_mc(self, n=100000, verbose=False):
"""
If we can't use Powers & Easterling's method, and it's possible there's
a way to extend it to higher dimensions (which we have for step > 1),
the next best thing might be to use brute force and just compute a lot
of random sequence transitions, given the observed proportions. If I'm
not mistaken, this is what P & E's method tries to estimate iteratively.
"""
seq = np.random.choice(self.states, size=n, p=self._state_probs)
E = self.from_sequence(seq).observed_counts
E = np.sum(self.observed_counts) * E / np.sum(E)
if not self.include_self:
return hollow_matrix(E)
else:
return E
def _compute_expected_pe(self, max_iter=100, verbose=False):
"""
Compute the independent trials matrix, using method of
Powers & Easterling 1982.
"""
m = len(self.states)
M = self.observed_counts
a, b = [], []
# Loop 1
a.append(np.sum(M, axis=1) / (m - 1))
b.append(np.sum(M, axis=0) / (np.sum(a[-1]) - a[-1]))
i = 2
while i < max_iter:
if verbose:
print(f"iteration: {i-1}")
print(f"a: {a[-1]}")
print(f"b: {b[-1]}")
print()
a.append(np.sum(M, axis=1) / (np.sum(b[-1]) - b[-1]))
b.append(np.sum(M, axis=0) / (np.sum(a[-1]) - a[-1]))
# Check for stopping criterion.
if self._stop_iter(a, b, tol=0.001):
break
i += 1
E = a[-1] * b[-1].reshape(-1, 1)
if not self.include_self:
return hollow_matrix(E)
else:
return E
@property
def degrees_of_freedom(self) -> int:
m = len(self.states)
return (m - 1)**2 - m
def _chi_squared_critical(self, q=0.95, df=None):
"""
The chi-squared critical value for a confidence level q
and degrees of freedom df.
"""
if df is None:
df = self.degrees_of_freedom
return scipy.stats.chi2.ppf(q=q, df=df)
def _chi_squared_percentile(self, x, df=None):
"""
The chi-squared critical value for a confidence level q
and degrees of freedom df.
"""
if df is None:
df = self.degrees_of_freedom
return scipy.stats.chi2.cdf(x, df=df)
def chi_squared(self, q=0.95):
"""
The chi-squared statistic for the given transition
frequencies.
Also returns the critical statistic at the given confidence
level q (default 95%).
If the first number is bigger than the second number,
then you can reject the hypothesis that the sequence
is randomly ordered.
"""
# Observed and Expected matrices:
O = self.observed_counts
E = self.expected_counts
# Adjustment for divide-by-zero
epsilon = 1e-12
chi2 = np.sum((O - E)**2 / (E + epsilon))
crit = self._chi_squared_critical(q=q)
perc = self._chi_squared_percentile(x=chi2)
Chi2 = namedtuple('Chi2', ['chi2', 'crit', 'perc'])
return Chi2(chi2, crit, perc)
def as_graph(self, directed=True):
if self.normalized_difference.ndim > 2:
raise MarkovError("You can only graph one-step chains.")
try:
import networkx as nx
except ImportError:
nx = None
if nx is None:
print("Please install networkx with `pip install networkx`.")
return
if directed:
alg = nx.DiGraph
else:
alg = nx.Graph
G = nx.from_numpy_array(self.normalized_difference, create_using=alg)
nx.set_node_attributes(G, self._state_dict, 'state')
return G
def plot_graph(self, ax=None,
figsize=None,
max_size=1000,
directed=True,
edge_labels=False,
draw_neg=False
):
if self.normalized_difference.ndim > 2:
raise MarkovError("You can only graph one-step chains.")
try:
import networkx as nx
except ImportError:
nx = None
if nx is None:
print("Please install networkx with `pip install networkx`.")
return
G = self.as_graph(directed=directed)
return_ax = True
if ax is None:
fig, ax = plt.subplots(figsize=figsize)
return_ax = False
e_neg = {(u, v):round(d['weight'],1) for (u, v, d) in G.edges(data=True) if d['weight'] <= -1.0}
e_small = {(u, v):round(d['weight'],1) for (u, v, d) in G.edges(data=True) if -1.0 < d['weight'] <= 1.0}
e_med = {(u, v):round(d['weight'],1) for (u, v, d) in G.edges(data=True) if 1.0 < d['weight'] <= 2.0}
e_large = {(u, v):round(d['weight'],1) for (u, v, d) in G.edges(data=True) if d['weight'] > 2.0}
pos = nx.spring_layout(G)
sizes = max_size * (self._state_counts / max(self._state_counts))
nx.draw_networkx_nodes(G, pos, ax=ax, node_size=sizes, node_color='orange')
nx.draw_networkx_edges(G, pos, ax=ax, edgelist=e_large, width=10, arrowsize=40, splines='curved')
nx.draw_networkx_edges(G, pos, ax=ax, edgelist=e_med, width=4, arrowsize=20)
nx.draw_networkx_edges(G, pos, ax=ax, edgelist=e_small,
width=3,
alpha=0.1,
edge_color='k')
if draw_neg:
nx.draw_networkx_edges(G, pos, ax=ax, edgelist=e_neg,
width=2,
alpha=0.1,
edge_color='k')
if edge_labels:
nx.draw_networkx_edge_labels(G,pos,edge_labels=e_large)
nx.draw_networkx_edge_labels(G,pos,edge_labels=e_med)
labels = nx.get_node_attributes(G, 'state')
ax = nx.draw_networkx_labels(G, pos, labels=labels,
font_size=20,
font_family='sans-serif',
font_color='blue')
if return_ax:
return ax
else:
plt.axis('off')
plt.show()
return
def plot_norm_diff(self, ax=None, cmap='RdBu', center_zero=True):
if self.normalized_difference.ndim > 2:
raise MarkovError("You can only plot one-step chains.")
return_ax = True
if ax is None:
fig, ax = plt.subplots(figsize=(1 + self.states.size/2, self.states.size/2))
return_ax = False
ma = np.ceil(np.max(self.normalized_difference))
if center_zero:
vmin, vmax = -ma, ma
else:
vmin, vmax = None, None
im = ax.imshow(self.normalized_difference, cmap=cmap, vmin=vmin, vmax=vmax)
plt.colorbar(im)
ax.tick_params(axis='x', which='both',
bottom=False, labelbottom=False,
top=False, labeltop=True,
)
ax.tick_params(axis='y', which='both',
left=False, labelleft=True,
right=False, labelright=False,
)
ticks = np.arange(self.states.size)
ax.set_yticks(ticks)
ax.set_xticks(ticks)
labels = [str(s) for s in self.states]
ax.set_xticklabels(labels)
ax.set_yticklabels(labels)
# Deal with probable bug in matplotlib 3.1.1
ax.set_ylim(reversed(ax.get_xlim()))
if return_ax:
return ax
else:
plt.show()
return
# -
| 23,892 |
/detectron2_video.ipynb
|
ed9fcb98996ae3cccab55a7eabcdbf77767fb330
|
[] |
no_license
|
yuuuuuu99/Detectron2
|
https://github.com/yuuuuuu99/Detectron2
| 0 | 0 | null | null | null | null |
Jupyter Notebook
| false | false |
.py
| 31,063 |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.15.2
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + id="vukkNuCdJTsy" executionInfo={"status": "ok", "timestamp": 1604899876028, "user_tz": -480, "elapsed": 9500, "user": {"displayName": "\u66fe\u73ee\u745c", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GiPYSYGADc29eZMBqL2Y0yOZut6dqSPIMQ8_DJEacQ=s64", "userId": "12040478592812319513"}} outputId="0d46cf9d-eb08-4aa2-a680-5ea07bbaf7bb" colab={"base_uri": "https://localhost:8080/"}
# install dependencies:
# !pip install pyyaml==5.1 pycocotools>=2.0.1
import torch, torchvision
print(torch.__version__, torch.cuda.is_available())
# !gcc --version
# opencv is pre-installed on colab
# + id="WHUz6YajJdzT" executionInfo={"status": "ok", "timestamp": 1604899889193, "user_tz": -480, "elapsed": 7290, "user": {"displayName": "\u66fe\u73ee\u745c", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GiPYSYGADc29eZMBqL2Y0yOZut6dqSPIMQ8_DJEacQ=s64", "userId": "12040478592812319513"}} outputId="224a88fb-189e-4031-fec7-45cdc4f1430e" colab={"base_uri": "https://localhost:8080/", "height": 1000}
# install detectron2: (Colab has CUDA 10.1 + torch 1.6)
# See https://detectron2.readthedocs.io/tutorials/install.html for instructions
assert torch.__version__.startswith("1.7")
# !pip install detectron2 -f https://dl.fbaipublicfiles.com/detectron2/wheels/cu101/torch1.6/index.html
# + id="J82CcRkbJ8wT" executionInfo={"status": "ok", "timestamp": 1604899916497, "user_tz": -480, "elapsed": 1943, "user": {"displayName": "\u66fe\u73ee\u745c", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GiPYSYGADc29eZMBqL2Y0yOZut6dqSPIMQ8_DJEacQ=s64", "userId": "12040478592812319513"}} outputId="267d96a8-d4d8-4ab6-c375-21b117dff411" colab={"base_uri": "https://localhost:8080/", "height": 321}
# This is the video we're going to process
from IPython.display import YouTubeVideo, display
#video = YouTubeVideo("ll8TgCZ0plk", width=500)
video = YouTubeVideo("4IdRV7ZP4h8", width=500) #把要測試的影片網址"v="後面的部分貼上
display(video)
# + id="HFm2IFPrKAnV" executionInfo={"status": "ok", "timestamp": 1604900055207, "user_tz": -480, "elapsed": 17406, "user": {"displayName": "\u66fe\u73ee\u745c", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GiPYSYGADc29eZMBqL2Y0yOZut6dqSPIMQ8_DJEacQ=s64", "userId": "12040478592812319513"}} outputId="90e3c1c2-c0c6-443a-b44a-e13ac535a948" colab={"base_uri": "https://localhost:8080/"}
# Install dependencies, download the video, and crop 6 seconds for processing
# !pip install youtube-dl
# !pip uninstall -y opencv-python-headless opencv-contrib-python
# !apt install python3-opencv # the one pre-installed have some issues
# #!youtube-dl https://www.youtube.com/watch?v=ll8TgCZ0plk -f 22 -o video.mp4
# !youtube-dl https://www.youtube.com/watch?v=4IdRV7ZP4h8 -f 22 -o video.mp4 # paste the URL of the video you want to test
# !ffmpeg -i video.mp4 -t 00:00:06 -c:v copy video-clip.mp4
# + id="voLojv5rR6JN" executionInfo={"status": "ok", "timestamp": 1604900181624, "user_tz": -480, "elapsed": 121263, "user": {"displayName": "\u66fe\u73ee\u745c", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GiPYSYGADc29eZMBqL2Y0yOZut6dqSPIMQ8_DJEacQ=s64", "userId": "12040478592812319513"}} outputId="fa58a78f-78b8-4d0c-ba74-3d31c80c2bfc" colab={"base_uri": "https://localhost:8080/"}
# Run frame-by-frame inference demo on this video (takes 3-4 minutes) with the "demo.py" tool we provided in the repo.
# !git clone https://github.com/facebookresearch/detectron2
# !python detectron2/demo/demo.py --config-file detectron2/configs/COCO-PanopticSegmentation/panoptic_fpn_R_101_3x.yaml --video-input video-clip.mp4 --confidence-threshold 0.6 --output video-output.mkv \
# --opts MODEL.WEIGHTS detectron2://COCO-PanopticSegmentation/panoptic_fpn_R_101_3x/139514519/model_final_cafdb1.pkl
# + id="ICeoLTANKOPl" executionInfo={"status": "ok", "timestamp": 1604900181864, "user_tz": -480, "elapsed": 639, "user": {"displayName": "\u66fe\u73ee\u745c", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GiPYSYGADc29eZMBqL2Y0yOZut6dqSPIMQ8_DJEacQ=s64", "userId": "12040478592812319513"}} outputId="2f3aea35-5432-45cd-ca88-63ab0055b12e" colab={"base_uri": "https://localhost:8080/", "height": 17}
# Download the results
from google.colab import files
files.download('video-output.mkv')
# ...have these inconsistencies because of the dataset the information was obtained from, and probably because when that dataset was updated many pokemon had no second type and only acquired one over time. The missing values in 'base_happiness' are because these data are not available for the eighth generation. Finally, the 'type2' and 'percentage_male' columns have so many missing values because some pokemon have only one type, so their type 2 is marked as null, and certain pokemon, such as the legendaries, have no sex, so there is no encounter percentage by sex.
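# A quick way to inspect these missing-value counts directly (a sketch, assuming the df_pokemon frame used below):
missing = df_pokemon.isnull().sum().sort_values(ascending=False)
print(missing.head(10))  # the columns discussed above should appear near the top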
# Now let's look at the counts by type.
def donut_chart(tamaños):
"""
    Creates a 'split donut' chart showing the counts of type 1 and type 2 for the existing
    pokemon types.
    Parameters
    -----
    tamaños: list of integers
        List with the sizes to be placed in the 'split donut' chart.
"""
    # Set up the figure and the axes that the chart will be drawn on.
fig, ax = plt.subplots(figsize = (10,10), subplot_kw=dict(aspect="equal"))
    # The distance each wedge is offset from its original position
explodes = [0.1 for _ in range(len(tamaños))]
    # Dictionary with the properties we want the donut wedges to have.
dicwedge = {'alpha': 0.7}
    # The official colors from the pokemon.com site for each type
colores = {'water': '#4592c4', 'normal': '#a4acaf', 'grass': '#9bcc50', 'bug': '#729f3f',
'fire': '#fd7d24', 'psychic': '#f366b9', 'rock': '#a38c21', 'electric': '#eed535',
'ground': '#d79877', 'dark': '#707070', 'poison': '#b97fc9', 'fighting': '#d56723',
'dragon': '#a2535f', 'ghost': '#7b62a3', 'steel': '#9eb7b8', 'ice': '#51c4e7',
'fairy': '#fdb9e9', 'flying': '#8ca8d7'}
    # Get a list with the names (keys) of the color dictionary created above
color_names = list(colores.keys())
    # Get a list with the colors (values) of the color dictionary created above
list_colores = list(colores.values())
    # Draw a pie chart with the settings mentioned above (it is not a donut yet)
wedges, text = ax.pie(tamaños,pctdistance = 0.85, explode = explodes,
colors = list_colores, wedgeprops = dicwedge, radius = 0.9)
    # Put what each wedge means in boxes outside the donut
    # Properties of the box
bbox_props = dict(boxstyle="square,pad=0.3", fc="w", ec="k", lw=0.72)
    # Properties of the lines that connect the donut to the boxes
kw = dict(arrowprops=dict(arrowstyle="-"),
bbox=bbox_props, zorder=0, va="center")
    # Make a box for each wedge
for i, p in enumerate(wedges):
        # Set its position
ang = (p.theta2 - p.theta1)/2. + p.theta1
y = np.sin(np.deg2rad(ang))
x = np.cos(np.deg2rad(ang))
horizontalalignment = {-1: "right", 1: "left"}[int(np.sign(x))]
        # Style of the connection between the wedges and the boxes, using the dictionary created above
connectionstyle = f"angle,angleA=0,angleB={ang}"
kw["arrowprops"].update({"connectionstyle": connectionstyle, "color": 'k'})
        # Define how far the boxes are from the center of the donut
radio = 1.2
        # Write inside the box
ax.annotate(color_names[i]+': '+str(tamaños[i]), xy=(x, y), xytext=(radio*x, radio*y),
horizontalalignment = horizontalalignment, **kw, alpha = 0.5)
    # Draw a white circle at the center of the pie chart so that it looks like a donut
    centre_circle = plt.Circle((0,0),0.75, fc = 'white') # Create a circle centered at (0,0) with radius 0.75
fig = plt.gcf() # Get the Current Figure
    fig.gca().add_artist(centre_circle) # Get the Current Axis and add the circle.
    # Set a title
ax.set_title('Pokemon por primer tipo', fontsize = 20)
    # Save the figure
fig.savefig('../visualizacion/Tipo1.png')
plt.show()
# - Type 1
# Look at the first type
tipo1 = df_pokemon.type1
tipo1.value_counts()
tamaños1 = [valor for valor in tipo1.value_counts().values]
donut_chart(tamaños1)
# - Type 2
# Look at the second type
tipo2 = df_pokemon.type2
tipo2.value_counts()
tamaños2 = [valor for valor in tipo2.value_counts().values]
donut_chart(tamaños2)
| 8,896 |
/HW/analysis/HW1.ipynb
|
f042a5b1bf7d33d57543a55b08e1365588866258
|
[] |
no_license
|
cmfeng/CSE599
|
https://github.com/cmfeng/CSE599
| 0 | 0 | null | null | null | null |
Jupyter Notebook
| false | false |
.py
| 61,819 |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.15.2
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# Homework 1: Data Analysis Basics
# Problem
# Obtain the CSV (comma separated variable) file containing the counts of bicycles crossing the Fremont Bridge since 2012 (as described in https://data.seattle.gov/browse?category=Transportation&limitTo=datasets&utf8=%E2%9C%93). Create a project directory with subdirectories for data and analysis, and create a README file. Download the data from https://data.seattle.gov/resource/4xy5-26gy.csv put it in the data directory. Create an iPython Notebook to analyze these data. In the notebook: (1) Select the 2015 data, creating a new dataframe with fields for date, hour, and count, and use these data in questions (2)-(4); (2) use python matplotlib to plot the counts by hour; (3) compute the hourly average counts; and (4) determine what is the busiest hour of the day.
import pandas as pd
data=pd.read_csv('../data/4xy5-26gy.csv')
data.head()
times = pd.DatetimeIndex(data['date'])
data2015=data[times.year==2015].copy()
times2015=pd.DatetimeIndex(data2015['date'])
times2015
data2015['date']=times2015.date
data2015['hour']=times2015.hour
data2015['count']=data2015['fremont_bridge_nb']+data2015['fremont_bridge_sb']
data2015.head()
# %matplotlib inline
plot = data2015.groupby(['hour'])['count'].sum().plot()
plot.set_ylabel('sum count of nb and sb')
hourlyAvrCount=data2015.groupby(['hour'])['count'].mean()
hourlyAvrCount
hourlyAvrCount.argmax()
# Thus the busiest hour of the day is 17:00-18:00.
data2015.to_csv('data2015.csv')
# !cat data2015.csv
| 1,787 |
/assignment1/.ipynb_checkpoints/1622assignment1jupyter-checkpoint.ipynb
|
d3db9fc8094ac31dec6ad2cd3755a7f28977e679
|
[] |
no_license
|
ljh6993/MIE_1622_Computational-Finance-and-risk-management
|
https://github.com/ljh6993/MIE_1622_Computational-Finance-and-risk-management
| 0 | 0 | null | null | null | null |
Jupyter Notebook
| false | false |
.py
| 262,338 |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.15.2
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# !pip install cplex
# # !pip install cplex
import cplex
# +
# Import libraries
import pandas as pd
import numpy as np
import math
import cplex
import matplotlib.pyplot as plt
# %matplotlib inline
# Complete the following functions
def strat_buy_and_hold_equally(x_init, cash_init, mu, Q, cur_prices):
total_cash= cur_prices@x_init+cash_init
x_optimal = np.array([ 1069,3255,1504, 95,2736,996,759,1064,457,308,1476,1810, 2793,1375,18726, 2431,2483, 162,1291,1235]);
trac_fee= abs(x_optimal-x_init) @cur_prices * 0.005
cash_optimal=total_cash-trac_fee- x_optimal @cur_prices
return x_optimal, cash_optimal
def strat_buy_and_hold(x_init, cash_init, mu, Q, cur_prices):
x_optimal = x_init
cash_optimal = cash_init
return x_optimal, cash_optimal
def strat_equally_weighted(x_init, cash_init, mu, Q, cur_prices):
    # x_init: current positions
    # cash_init: leftover cash from the previous re-balancing, 0 or positive
    # cur_prices: prices on the first trading day of each 2-month period
# get the price of each stock
total_cash= cur_prices@x_init+cash_init
money_allc=np.ones((20))*total_cash/20
x_optimal=np.floor(money_allc/cur_prices)
trac_fee= abs(x_optimal-x_init) @cur_prices * 0.005
cash_optimal=total_cash-trac_fee- x_optimal @cur_prices
return x_optimal, cash_optimal
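# Quick illustrative check of the re-balancing arithmetic above, using synthetic prices and
# positions (made-up numbers; mu and Q are not used by this strategy, so None is passed):
_toy_prices = np.random.uniform(10, 100, size=20)
_toy_positions = np.ones(20) * 100
_x_eq, _cash_eq = strat_equally_weighted(_toy_positions, 0.0, None, None, _toy_prices)
print(_x_eq, round(_cash_eq, 2))  # equal-dollar share counts and the cash left after the 0.5% fee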
def strat_min_variance(x_init, cash_init, mu, Q, cur_prices):
total_cash= np.dot(cur_prices, x_init)+cash_init
# n=20
# w1=cp.Variable(n)
# prob1=cp.Problem(cp.Minimize(cp.quad_form(w1,Q)),[sum(w1)==1,w1>=0])
# prob1.solve(solver=cp.CPLEX,verbose=False,cplex_params={"qpmethod":6})
# w_minVar=w1.value
n=20
cpx = cplex.Cplex()
cpx.objective.set_sense(cpx.objective.sense.minimize)
c = [0.0] * n
lb = [0.0] * n
ub = [1.0] * n
#A is for linear constrains
A = []
for k in range(n):
A.append([[0],[1.0]])
var_names = ["w_%s" % i for i in range(1,n+1)]
cpx.linear_constraints.add(rhs=[1.0], senses="E")
cpx.variables.add(obj=c, lb=lb, ub=ub, columns=A,names=var_names)
## above all linear constrain and variable contrains
Qmat = [[list(range(n)), list(2*Q[k,:])] for k in range(n)]
cpx.objective.set_quadratic(Qmat)
alg = cpx.parameters.lpmethod.values
# print("Setting solution algorithm = ", alg.concurrent)
cpx.parameters.qpmethod.set(alg.concurrent)
cpx.parameters.threads.set(4)
cpx.set_results_stream(None)
cpx.set_warning_stream(None)
cpx.solve()
w_minVar = cpx.solution.get_values()
w_minVar=np.array(w_minVar)
# new position
x_optimal= np.floor(w_minVar*total_cash/cur_prices)
trac_fee=abs(x_optimal-x_init)@cur_prices *0.005
cash_optimal=total_cash-trac_fee-x_optimal@cur_prices
return x_optimal, cash_optimal
def strat_max_Sharpe(x_init, cash_init, mu, Q, cur_prices):
total_cash= np.dot(cur_prices, x_init)+cash_init
n=21
r_rf=0.025
daily_r_rf=0.025/252
u_rf=mu-daily_r_rf
u_rf_add=np.append(u_rf,0)
Q=np.append(Q,np.zeros((20,1)),axis=1)
Q=np.vstack([Q,np.zeros((21))])
# y=cp.Variable(n)
# prob=cp.Problem(cp.Minimize(cp.quad_form(y,Q)),[u_rf_add@y==1,sum(y[0:20])==y[20],y>=0])
# prob.solve(solver=cp.CPLEX,verbose=False,cplex_params={'qpmethod':6})
cpx = cplex.Cplex()
cpx.objective.set_sense(cpx.objective.sense.minimize)
c = [0.0] * n
lb = [0.0] * n
ub = [cplex.infinity] * n
#A is for linear constrains
A = []
for k in range(n):
if k==20:
A.append([[0,1],[u_rf_add[k],-1]])
else:
A.append([[0,1],[u_rf_add[k],1.0]])
var_names = ["y_%s" % i for i in range(1,n+1)]
cpx.linear_constraints.add(rhs=[1.0,0.0], senses="EE")
cpx.variables.add(obj=c, lb=lb, ub=ub, columns=A,names=var_names)
## above all linear constrain and variable contrains
Qmat = [[list(range(n)), list(2*Q[k,:])] for k in range(n)]
cpx.objective.set_quadratic(Qmat)
alg = cpx.parameters.lpmethod.values
# print("Setting solution algorithm = ", alg.concurrent)
cpx.parameters.qpmethod.set(alg.concurrent)
cpx.parameters.threads.set(4)
cpx.set_results_stream(None)
cpx.set_warning_stream(None)
cpx.solve()
y_minVar = cpx.solution.get_values()
y_minVar=np.array(y_minVar)
w_minVar=y_minVar[0:20]/y_minVar[20]
x_optimal= np.floor(w_minVar*total_cash/cur_prices)
trac_fee=abs(x_optimal-x_init)@cur_prices *0.005
cash_optimal=total_cash-trac_fee-x_optimal@cur_prices
return x_optimal, cash_optimal
# Input file
input_file_prices = 'Daily_closing_prices.csv'
# Read data into a dataframe
df = pd.read_csv(input_file_prices)
# Convert dates into array [year month day]
def convert_date_to_array(datestr):
temp = [int(x) for x in datestr.split('/')]
return [temp[-1], temp[0], temp[1]]
dates_array = np.array(list(df['Date'].apply(convert_date_to_array)))
data_prices = df.iloc[:, 1:].to_numpy()
dates = np.array(df['Date'])
# Find the number of trading days in Nov-Dec 2014 and
# compute expected return and covariance matrix for period 1
day_ind_start0 = 0
day_ind_end0 = len(np.where(dates_array[:,0]==2014)[0])
cur_returns0 = data_prices[day_ind_start0+1:day_ind_end0,:] / data_prices[day_ind_start0:day_ind_end0-1,:] - 1
mu = np.mean(cur_returns0, axis = 0)
Q = np.cov(cur_returns0.T)
# Remove datapoints for year 2014
data_prices = data_prices[day_ind_end0:,:]
dates_array = dates_array[day_ind_end0:,:]
dates = dates[day_ind_end0:]
# Initial positions in the portfolio
init_positions = np.array([5000, 950, 2000, 0, 0, 0, 0, 2000, 3000, 1500, 0, 0, 0, 0, 0, 0, 1001, 0, 0, 0])
# Initial value of the portfolio
init_value = np.dot(data_prices[0,:], init_positions)
print('\nInitial portfolio value = $ {}\n'.format(round(init_value, 2)))
# Initial portfolio weights
w_init = (data_prices[0,:] * init_positions) / init_value
# Number of periods, assets, trading days
N_periods = 6*len(np.unique(dates_array[:,0])) # 6 periods per year
N = len(df.columns)-1
N_days = len(dates)
# Annual risk-free rate for years 2015-2016 is 2.5%
r_rf = 0.025
# Number of strategies
strategy_functions = ['strat_buy_and_hold', 'strat_equally_weighted', 'strat_min_variance', 'strat_max_Sharpe','strat_buy_and_hold_equally']
strategy_names = ['Buy and Hold', 'Equally Weighted Portfolio', 'Mininum Variance Portfolio', 'Maximum Sharpe Ratio Portfolio','Buy_and_Hold_equally']
#N_strat = 1 # comment this in your code
N_strat = len(strategy_functions) # uncomment this in your code
fh_array = [strat_buy_and_hold, strat_equally_weighted, strat_min_variance, strat_max_Sharpe,strat_buy_and_hold_equally]
portf_value = [0] * N_strat
x = np.zeros((N_strat, N_periods), dtype=np.ndarray)
cash = np.zeros((N_strat, N_periods), dtype=np.ndarray)
for period in range(1, N_periods+1):
# Compute current year and month, first and last day of the period
if dates_array[0, 0] == 15:
cur_year = 15 + math.floor(period/7)
else:
cur_year = 2015 + math.floor(period/7)
cur_month = 2*((period-1)%6) + 1
day_ind_start = min([i for i, val in enumerate((dates_array[:,0] == cur_year) & (dates_array[:,1] == cur_month)) if val])
day_ind_end = max([i for i, val in enumerate((dates_array[:,0] == cur_year) & (dates_array[:,1] == cur_month+1)) if val])
print('\nPeriod {0}: start date {1}, end date {2}'.format(period, dates[day_ind_start], dates[day_ind_end]))
# Prices for the current day
cur_prices = data_prices[day_ind_start,:]
# Execute portfolio selection strategies
for strategy in range(N_strat):
# Get current portfolio positions
if period == 1:
curr_positions = init_positions
curr_cash = 0
portf_value[strategy] = np.zeros((N_days, 1))
else:
curr_positions = x[strategy, period-2]
curr_cash = cash[strategy, period-2]
# Compute strategy
x[strategy, period-1], cash[strategy, period-1] = fh_array[strategy](curr_positions, curr_cash, mu, Q, cur_prices)
# Verify that strategy is feasible (you have enough budget to re-balance portfolio)
# Check that cash account is >= 0
# Check that we can buy new portfolio subject to transaction costs
###################### Insert your code here ############################
if cash[strategy][period-1] < 0:
total_cash=np.dot(cur_prices, curr_positions)+curr_cash
ratio = x[strategy][period-1] / (sum(x[strategy][period-1]))
cash_balance = abs(cash[strategy][period-1])* ratio
position_balance = np.ceil(cash_balance / cur_prices)
x[strategy][period-1] = x[strategy][period-1] - position_balance
transaction = np.dot(cur_prices , abs(x[strategy][period-1] - curr_positions) )* 0.005
cash[strategy][period-1] = total_cash - np.dot(cur_prices , x[strategy][period-1]) - transaction
# Compute portfolio value
p_values = np.dot(data_prices[day_ind_start:day_ind_end+1,:], x[strategy, period-1]) + cash[strategy, period-1]
portf_value[strategy][day_ind_start:day_ind_end+1] = np.reshape(p_values, (p_values.size,1))
print(' Strategy "{0}", value begin = $ {1:.2f}, value end = $ {2:.2f}'.format( strategy_names[strategy],
portf_value[strategy][day_ind_start][0], portf_value[strategy][day_ind_end][0]))
# Compute expected returns and covariances for the next period
cur_returns = data_prices[day_ind_start+1:day_ind_end+1,:] / data_prices[day_ind_start:day_ind_end,:] - 1
mu = np.mean(cur_returns, axis = 0)
Q = np.cov(cur_returns.T)
# Plot results
###################### Insert your code here ############################
# -
# +
plt.figure(figsize=(10,8))
plt.plot(portf_value[0], color="r",label="Buy and Hold")
plt.plot(portf_value[1], color="g",label="Equally Weighted Portfolio")
plt.plot(portf_value[2], color="b",label="Mininum Variance Portfolio")
plt.plot(portf_value[3],label="Maximum Sharpe Ratio Portfolio")
plt.plot(portf_value[4],label="Buy and Hold Equally")
plt.legend(loc="best")
plt.title("Daily value of porfolio")
plt.ylabel("porfolio value")
plt.xlabel("trading days")
plt.show()
# -
stock_name=df.columns[1:,]
# +
# Minimum Variance Portfolio
# +
w_min_var=[]
for period in range(1,N_periods+1):
w_min_var.append(x[2, period-1]/sum(x[2, period-1]))
b=np.array(w_min_var)
df_stockweight=pd.DataFrame(b,columns=stock_name)
df_stockweight.plot(figsize=(10,8))
plt.legend(loc="upper right")
plt.title("dynamic change in porfolio Minvar")
plt.ylabel("weight")
plt.xlabel("trading period")
# +
#max sharpe ratio Portfolio
w_maxsharpe=[]
for period in range(1,N_periods+1):
w_maxsharpe.append(x[3, period-1]/sum(x[3, period-1]))
a=np.array(w_maxsharpe)
df_stockweight=pd.DataFrame(a,columns=stock_name)
df_stockweight.plot(figsize=(10,8))
plt.legend(loc="upper right")
plt.title("dynamic change in porfolio Maxsharpe")
plt.ylabel("weight")
plt.xlabel("trading period")
# +
##select “1/n” portfolio at the beginning of period 1 and hold it till the end of period 12
# from the above test, we already know the max sharpe ratio is the best
# we set n = 3, which means we keep 1/3 of the portfolio unchanged
n=3
mu = np.mean(cur_returns0, axis = 0)
Q = np.cov(cur_returns0.T)
# Initial positions in the portfolio
init_positions = np.array([5000, 950, 2000, 0, 0, 0, 0, 2000, 3000, 1500, 0, 0, 0, 0, 0, 0, 1001, 0, 0, 0])
hold_positions=np.floor(init_positions*(1/n))
init_positions=init_positions-hold_positions
# Initial value of the portfolio
init_value = np.dot(data_prices[0,:], init_positions)
print('\nInitial portfolio value = $ {}\n'.format(round(init_value, 2)))
# here we keep 1/n of the portfolio unchanged and then rebalance the remaining portfolio with each strategy
# Initial portfolio weights
w_init = (data_prices[0,:] * init_positions) / init_value
# Number of periods, assets, trading days
N_periods = 6*len(np.unique(dates_array[:,0])) # 6 periods per year
N = len(df.columns)-1
N_days = len(dates)
# Annual risk-free rate for years 2015-2016 is 2.5%
r_rf = 0.025
# Number of strategies
strategy_functions = ['strat_buy_and_hold', 'strat_equally_weighted', 'strat_min_variance', 'strat_max_Sharpe']
strategy_names = ['Buy and Hold', 'Equally Weighted Portfolio', 'Mininum Variance Portfolio', 'Maximum Sharpe Ratio Portfolio']
#N_strat = 1 # comment this in your code
N_strat = len(strategy_functions) # uncomment this in your code
fh_array = [strat_buy_and_hold, strat_equally_weighted, strat_min_variance, strat_max_Sharpe]
portf_value = [0] * N_strat
x = np.zeros((N_strat, N_periods), dtype=np.ndarray)
cash = np.zeros((N_strat, N_periods), dtype=np.ndarray)
for period in range(1, N_periods+1):
# Compute current year and month, first and last day of the period
if dates_array[0, 0] == 15:
cur_year = 15 + math.floor(period/7)
else:
cur_year = 2015 + math.floor(period/7)
cur_month = 2*((period-1)%6) + 1
day_ind_start = min([i for i, val in enumerate((dates_array[:,0] == cur_year) & (dates_array[:,1] == cur_month)) if val])
day_ind_end = max([i for i, val in enumerate((dates_array[:,0] == cur_year) & (dates_array[:,1] == cur_month+1)) if val])
print('\nPeriod {0}: start date {1}, end date {2}'.format(period, dates[day_ind_start], dates[day_ind_end]))
# Prices for the current day
cur_prices = data_prices[day_ind_start,:]
# Execute portfolio selection strategies
for strategy in range(N_strat):
# Get current portfolio positions
if period == 1:
curr_positions = init_positions
curr_cash = 0
portf_value[strategy] = np.zeros((N_days, 1))
else:
curr_positions = x[strategy, period-2]
curr_cash = cash[strategy, period-2]
# Compute strategy
x[strategy, period-1], cash[strategy, period-1] = fh_array[strategy](curr_positions, curr_cash, mu, Q, cur_prices)
# Verify that strategy is feasible (you have enough budget to re-balance portfolio)
# Check that cash account is >= 0
# Check that we can buy new portfolio subject to transaction costs
###################### Insert your code here ############################
if cash[strategy][period-1] < 0:
total_cash=np.dot(cur_prices, curr_positions)+curr_cash
ratio = x[strategy][period-1] / sum(x[strategy][period-1])
cash_balance = abs(cash[strategy][period-1])* ratio
position_balance = np.ceil(cash_balance / cur_prices)
x[strategy][period-1] = x[strategy][period-1] - position_balance
transaction = np.dot(cur_prices , abs(x[strategy][period-1] - curr_positions) )* 0.005
cash[strategy][period-1] = total_cash - np.dot(cur_prices , x[strategy][period-1]) - transaction
# Compute portfolio value
p_values = np.dot(data_prices[day_ind_start:day_ind_end+1,:], (x[strategy, period-1]+hold_positions)) + cash[strategy, period-1]
portf_value[strategy][day_ind_start:day_ind_end+1] = np.reshape(p_values, (p_values.size,1))
print(' Strategy "{0}", value begin = $ {1:.2f}, value end = $ {2:.2f}'.format( strategy_names[strategy],
portf_value[strategy][day_ind_start][0], portf_value[strategy][day_ind_end][0]))
# Compute expected returns and covariances for the next period
cur_returns = data_prices[day_ind_start+1:day_ind_end+1,:] / data_prices[day_ind_start:day_ind_end,:] - 1
mu = np.mean(cur_returns, axis = 0)
Q = np.cov(cur_returns.T)
# -
| 16,107 |
/create_political_share_xgboost.ipynb
|
6c8039b0d902934a1dec433c43cf641655311f6b
|
[
"BSD-3-Clause"
] |
permissive
|
launis/areadata
|
https://github.com/launis/areadata
| 0 | 0 | null | null | null | null |
Jupyter Notebook
| false | false |
.py
| 11,159 |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.15.2
# kernelspec:
# display_name: gpa
# language: python
# name: gpa
# ---
# # Data retrieval and preprocessing
# +
import pandas as pd
import shap
from create_shap_values_via_xgboost import create_shap_values_via_xgboost
from create_target_columns import create_target_columns
from print_examples import show_all_results, create_party_results, show_one_results, cluster_dedndogram, compare_scatter
from show_election_result import show_election_result
from set_path import set_path
from read_and_prepare_data import read_and_prepare_data
from selected_cols import selected_cols
from create_prediction import optimize_one_par
# %matplotlib inline
# %config InlineBackend.figure_format = 'retina'
pd.options.display.max_colwidth = 100
# -
mainpath, path = set_path('areadata')
stat, post, kunta_stat, vaalidata = read_and_prepare_data(path)
numeric_features, categorical_features = selected_cols(largeset=False, parties=False)
# +
#set values to feature engineering attributes
test_size = 0.3
metric = 'rmse'
Skfold=False
Verbose = False
testing=True
scaled = False
initial_params = {
#Initial xgboost parameters to be automatically tuned
'objective':'reg:squarederror',
'booster' : 'gbtree',
'eval_metric' : metric,
'seed' : 42,
}
filename = 'party_xgboost_vote_share_'
list_of_parties = ['VIHR', 'KOK', 'SDP', 'KD', 'KESK', 'RKP', 'PS', 'VAS']
selected_parties = ['KOK']
included_col_start = 'Ääniosuus '
all_included_columns = create_target_columns(list_of_parties, included_col_start)
target_col_start = 'Ääniosuus '
target = create_target_columns(selected_parties, target_col_start)
# -
data, X, y, test, X_test, y_test_pred, model_list, model_params, shap_data, features_dict, selected_columns_dict = create_shap_values_via_xgboost(path,
filename,
stat,
stat,
target,
initial_params,
numeric_features=numeric_features,
categorical_features=categorical_features,
scaled=scaled,
test_size=test_size,
Skfold=Skfold,
Verbose=False,
testing=testing)
data = show_election_result(data, y_test_pred, target_col_start, vaalidata, selected_parties)
compare_value = 1.5
columns = 6
scaled = True
included_columns = ['Miehet, 2018 (HE) osuudesta asukkaat']
samples = 5
show_cols = ['Postinumero','nimi', 'muncipality_name','Asukkaat yhteensä, 2018 (HE)', 'Suurin_puolue', 'Ennustettu Suurin_puolue']
comp_col = 'Miehet, 2018 (HE) osuudesta asukkaat'
compare = {}
stats_data = {}
k = {}
k_compare = {}
party = 'VIHR'
compare_to, t = create_party_results(target_col_start, party, data, vaalidata, compare_value)
compare[t] = compare_to
stats_data[t], k[t], col_list = show_all_results(compare_to, data, X, y[t], shap_data[t], t, columns, all_included_columns, show_cols, comp_col, scaled=scaled, included_columns=included_columns, samples=samples)
party = 'KOK'
compare_to, t = create_party_results(target_col_start, party, data, vaalidata, compare_value)
compare[t] = compare_to
stats_data[t], k[t], col_list = show_all_results(compare_to, data, X, y[t], shap_data[t], t, columns, all_included_columns, show_cols, comp_col, scaled=scaled, included_columns=included_columns, samples=samples)
party = 'SDP'
compare_to, t = create_party_results(target_col_start, party, data, vaalidata, compare_value)
compare[t] = compare_to
stats_data[t], k[t], col_list = show_all_results(compare_to, data, X, y[t], shap_data[t], t, columns, all_included_columns, show_cols, comp_col, scaled=scaled, included_columns=included_columns, samples=samples)
party ='KD'
compare_to, t = create_party_results(target_col_start, party, data, vaalidata, compare_value)
compare[t] = compare_to
stats_data[t], k[t], col_list = show_all_results(compare_to, data, X, y[t], shap_data[t], t, columns, all_included_columns, show_cols, comp_col, scaled=scaled, included_columns=included_columns, samples=samples)
party ='RKP'
compare_to, t = create_party_results(target_col_start, party, data, vaalidata, compare_value)
compare[t] = compare_to
stats_data[t], k[t], col_list = show_all_results(compare_to, data, X, y[t], shap_data[t], t, columns, all_included_columns, show_cols, comp_col, scaled=scaled, included_columns=included_columns, samples=samples)
party ='KESK'
compare_to, t = create_party_results(target_col_start, party, data, vaalidata, compare_value)
compare[t] = compare_to
stats_data[t], k[t], col_list = show_all_results(compare_to, data, X, y[t], shap_data[t], t, columns, all_included_columns, show_cols, comp_col, scaled=scaled, included_columns=included_columns, samples=samples)
party ='PS'
compare_to, t = create_party_results(target_col_start, party, data, vaalidata, compare_value)
compare[t] = compare_to
stats_data[t], k[t], col_list = show_all_results(compare_to, data, X, y[t], shap_data[t], t, columns, all_included_columns, show_cols, comp_col, scaled=scaled, included_columns=included_columns, samples=samples)
party ='VAS'
compare_to, t = create_party_results(target_col_start, party, data, vaalidata, compare_value)
compare[t] = compare_to
stats_data[t], k[t], col_list = show_all_results(compare_to, data, X, y[t], shap_data[t], t, columns, all_included_columns, show_cols, comp_col, scaled=scaled, included_columns=included_columns, samples=samples)
pnro ='01300'
key = 'Postinumero'
show_one_results(data, X, y, model_list, shap_data, target, key, pnro, columns)
pnro ='00260'
key = 'Postinumero'
show_one_results(data, X, y, model_list, shap_data, target, key, pnro, columns)
selected_parties = ['VIHR', 'KOK', 'PS', 'SDP', 'KESK']
included_col_start = 'Ääniosuus '
col1 = 'Asumisväljyys, 2018 (TE) osuus total'
col2 = 'Talotyypit yhteensä 2019 Neliöhinta (EUR/m2) osuus total'
compare_scatter(selected_parties, included_col_start, X, col1, col2, shap_data, ylim_min=-0.01, ylim_max=0.02)
# +
selected_parties = ['VIHR', 'KOK', 'PS', 'SDP', 'KESK']
included_col_start = 'Ääniosuus '
col1 = 'Asumisväljyys, 2018 (TE) osuus total'
col2 = 'Ylimpään tuloluokkaan kuuluvat taloudet, 2017 (TR) osuudesta taloudet'
compare_scatter(selected_parties, included_col_start, X, col1, col2, shap_data, ylim_min=-0.01, ylim_max=0.02)
# -
cluster_dedndogram(X)
| 6,668 |
/ch01.NLP_WordEmbedding/1.TextAnalysis/7.cosine_similarity.ipynb
|
8661d688401fcbb4cfd1fff870d709d07ab84df3
|
[] |
no_license
|
s1len7/ai-rnn
|
https://github.com/s1len7/ai-rnn
| 0 | 0 | null | null | null | null |
Jupyter Notebook
| false | false |
.py
| 40,719 |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.15.2
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import numpy as np
import pandas as pd
from scipy import sparse
import matplotlib.pyplot as plt
import gc
gc.enable()
# +
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_extraction.text import TfidfTransformer
from sklearn.naive_bayes import MultinomialNB, BernoulliNB
from sklearn.pipeline import Pipeline
from sklearn.metrics import balanced_accuracy_score as bas
from sklearn.ensemble import BaggingClassifier
from sklearn.linear_model import SGDClassifier, LogisticRegression
from sklearn.neighbors import NearestCentroid
from sklearn.preprocessing import StandardScaler, FunctionTransformer, MinMaxScaler, MaxAbsScaler
from sklearn.ensemble import RandomForestClassifier
from nltk.stem import SnowballStemmer
# -
# <hr>
# data = pd.read_csv('../data-simplified-1-reduced-wordbal-800.csv')
data = pd.read_csv('../data-reduced-800-v2-shuffled.csv', index_col = 0)
test = pd.read_csv('../test.csv')
catcode = pd.read_csv('../data-simplified-1-catcode.csv', header = None, names = ['category'])['category']
catcode.to_dict()
data.head()
# <hr>
def normalize(curr):
# remove accent
curr = curr.str.normalize('NFKD').str.encode('ascii', errors='ignore').str.decode('utf-8')
# to lower case
curr = curr.str.lower()
# remove not alphanumerics or . ,
curr = curr.str.replace('[^a-zA-Z0-9.,]', ' ')
# let , and . be the same char
curr = curr.str.replace('[.]', ',')
# remove . , not between numbers
curr = curr.str.replace('(?<=[0-9])[,]+(?=[0-9])', '.')
curr = curr.str.replace('[,]', ' ')
# set all digits to 0
curr = curr.str.replace('[0-9]', '0')
# separate ' <digits><letters ' like in 22g or 12ms
# curr = curr.str.replace('(^| )([0-9]+)([a-zA-Z]+)($| )', r'\1\2 \3\4')
    # remove some Portuguese plurals (strip a trailing 's' after a vowel)
curr = curr.str.replace('([a-zA-Z]+[aeiou])(s)', r'\1')
# Other ideas:
return curr
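# To see what the cleaning steps above actually do, normalize can be run on a couple of
# made-up titles (hypothetical strings, not rows from the dataset):
_samples = pd.Series(['Furadeira 650W 220v c/ Maleta!!', 'Tênis Running 42,5 Azul'])
print(normalize(_samples).tolist())
# accents stripped, lower-cased, digits mapped to 0, stray punctuation collapsed, simple plurals trimmed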
sp = int(len(data) * 0.8) # Split Point
full = pd.concat([data[['title']], test[['title']]])
X_full = full.title
# %%time
X_full = normalize(X_full)
# %%time
wordfreq = X_full.str.split(expand=True).stack().value_counts().to_dict()
# %%time
uniquewords = {w for w, f in wordfreq.items() if f == 1}
xjoin = lambda s: ' '.join([w if w not in uniquewords else 'XXUXX' for w in s])  # mask words that occur only once
# %%time
X_full = X_full.str.split().apply(xjoin)
X_full = 'XXSXX ' + X_full # + ' XXSXX'
# +
# %%time
count_vect = CountVectorizer(strip_accents='ascii', binary = True, min_df= 2, max_df = .95,
# analyzer = 'char',
ngram_range=(1,2),
# stop_words=['frete', 'Frete'],
)
X_train_counts = count_vect.fit_transform(X_full)
print(X_train_counts.shape, X_train_counts.count_nonzero())
# -
def sbc(x):
# sparse binary correlation; x : sparse
# can't correlate zero columns
cx = sparse.triu(x.T*x, k = 1, format='coo')
# print(cx.todense())
card = np.array(x.sum(axis = 0)).flatten()
# print(card)
cx.data = cx.data / (card[cx.row] + card[cx.col] - cx.data)
# print(cx.todense())
return np.array((cx == 1).sum(axis = 0) > 0).flatten()
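# The helper above measures, for every pair of binary columns, the Jaccard overlap
# |A ∩ B| / |A ∪ B| and flags any column that exactly duplicates an earlier one.
# A small sketch on a toy matrix (made-up data):
_toy = sparse.csr_matrix(np.array([[1, 0, 1],
                                   [1, 1, 1],
                                   [0, 1, 0]]))
print(sbc(_toy))  # columns 0 and 2 are identical, so column 2 is flagged: [False False True]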
# %%time
rem = sbc(X_train_counts)
print(rem.mean())
X_train_counts = X_train_counts[:, ~rem]
print(X_train_counts.shape, X_train_counts.count_nonzero())
# +
# %%time
tfidf_transformer = TfidfTransformer(norm='l2', use_idf=False, smooth_idf=True, sublinear_tf=False)
X_train_tfidf = tfidf_transformer.fit_transform(X_train_counts)
X_go = X_train_tfidf
print(X_train_tfidf.shape)
# -
sp2 = len(data)
X_train, y_train = X_go[:sp], data.category.values[:sp]
X_test, y_test = X_go[sp:sp2], data.category.values[sp:sp2]
X_train.shape, X_test.shape
class_weights = (1 / pd.Series(y_train).value_counts()).to_dict()
try:
t[0] = "Bharadwaja"
except:
pass
else:
print("there is no issue with this code")
# +
try:
l = [4,5,6,7,8,9,0]
print(l[100])
except:
print("There is an Error with COde")
t = (4,5,6,7,8,9,0)
print(t)
try:
t[0] = "Bharadwaja"
try:
list(t)
print(t)
except:
pass
except:
pass
else:
print("there is no issue with this code")
# -
# ## Try- Except - Finally
try:
f = open("test.txt","r")
except:
pass
try:
f = open("test.txt","r")
finally:
print("FInally i will excute in any case")
try:
f = open("test.txt","r")
finally:
print("FInally i will excute in any case")
l = [4,6,7,5]
print(l)
try:
f = open("test.txt","w+")
f.write("Im Bharadwaja")
finally:
f.close()
try:
f = open("test2.txt","r")
finally:
print("FInally will excute in any case")
l = [4,6,7,5]
print(l[10])
try:
f = open("test2.txt","r")
except:
pass
finally:
print("FInally will excute in any case")
l = [4,6,7,5]
print(l[10])
# +
try:
f = open("test2.txt","r")
print("try block is executed")
except:
pass
finally:
print("FInally will excute in any case")
l = [4,6,7,5]
try:
print(l[10])
except:
pass
# -
# ## try - except - else - finally
# +
try:
f = open("test2.txt","r")
print("try block is executed")
except:
pass
else:
print("else will Execute once try will execute succesfully")
finally:
print("FInally will excute in any case")
l = [4,6,7,5]
try:
print(l[10])
except:
pass
# +
try:
f = open("test.txt","w+")
print("try block is executed")
except:
pass
else:
print("else will Execute once try will execute succesfully")
finally:
print("FInally will excute in any case")
l = [4,6,7,5]
try:
print(l[10])
except:
pass
# +
try:
f = 5/2
print("try block is executed")
except:
pass
else:
print("else will Execute once try will execute succesfully")
finally:
print("FInally will excute in any case")
l = [4,6,7,5]
try:
print(l[10])
except:
pass
# -
# ### NOTE : Make use of try - except - else - finally in all the assignments hereafter
# ## Can we write Except Statement Multiple times?
# ## Is it possible to handle Multiple Exceptions ?
# ## Is it possible to Raise our own Exceptions ?
5/0
5/6
# +
try:
f = open("test3.txt","r")
print("try block is executed")
except IOError as e:
print("THis is my Error ",e)
# -
# ## Example
a = int(input("enter an integer"))
def askforint():
while True:
try:
a = int(input("enter an integer"))
except Exception as e:
print("THis is my Error message",e)
else:
print("Person Entered integer value")
break
finally:
print("Close the issue")
askforint()
# ## Multiple Except Block
def askforint():
while True:
try:
a = int(input("enter an integer"))
except FileNotFoundError as e:
print("THis is my Error message",e)
except IOError as e:
print("THis is my Error message",e)
else:
print("Person Entered integer value")
break
finally:
print("Close the issue")
askforint()
def askforint():
while True:
try:
a = int(input("enter an integer"))
except FileNotFoundError as e:
print("THis is my Error message",e)
except IOError as e:
print("THis is my Error message",e)
except ValueError as e:
print("THis is my Error message",e)
else:
print("Person Entered integer value")
break
finally:
print("Close the issue")
askforint()
def askforint():
while True:
try:
a = int(input("enter an integer"))
c = 5 / a
except FileNotFoundError as e:
print("THis is my Error message",e)
except IOError as e:
print("THis is my Error message",e)
except ValueError as e:
print("THis is my Error message",e)
else:
print("Person Entered integer value")
break
finally:
print("Close the issue")
askforint()
def askforint():
while True:
try:
a = int(input("enter an integer"))
c = 5 / a
except FileNotFoundError as e:
print("THis is my Error message",e)
except IOError as e:
print("THis is my Error message",e)
except Exception as e:
print("THis is my Error message",e)
except ValueError as e:
print("THis is my Error message",e)
except ArithmeticError as e:
print("THis is my Error message",e)
except ZeroDivisionError as e:
print("THis is my Error message",e)
else:
print("Person Entered integer value")
break
finally:
print("Close the issue")
askforint()
# ## Raise Custom Exceptions
5/0
5/6
def create_your_exceptions(a):
if a == 6 :
raise Exception(a)
else:
print("input is ok")
return a
create_your_exceptions(6)
create_your_exceptions(7)
# +
def create_your_exceptions(a):
if a == 6 :
raise Exception("Error is ",a)
else:
print("input is ok")
return a
try:
b = create_your_exceptions(6)
except Exception as e:
print(e)
# +
def create_your_exceptions(a):
if a > 6 :
raise Exception("Error is ",a)
else:
print("input is ok")
return a
try:
b = create_your_exceptions(10)
except Exception as e:
print(e)
# -
# ### Task
# ### Create a folder Test
# ### In the Test folder create Mod1.py ---> write logic to print all even numbers
# ### Create an ipynb file in the same Test folder, import Mod1.py, give a range as input and get the even numbers. Take input from the user and try to handle all the test cases (a sketch is given after the attempt below)
#
n=10
print(f"All the Even Number in the Range {n}",12)
try:
    n = int(input("Enter an integer"))
    l = []
    for i in range(n):
        if i % 2 == 0:
            l.append(i)
    print(f"All the Even Numbers in the Range {n}", l)
except Exception as e:
    print("The Error is", e)
| 10,934 |
/notebooks/object_detection_training_and_inference.ipynb
|
ae1f85ac386e535b3635754d2a675ca6222e3553
|
[
"MIT"
] |
permissive
|
MariumAZ/mot
|
https://github.com/MariumAZ/mot
| 0 | 0 | null | null | null | null |
Jupyter Notebook
| false | false |
.py
| 5,491 |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.15.2
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
# you can write to stdout for debugging purposes, e.g.
# print("this is a debug message")
def solution(S, K):
# write your code in Python 3.6
S = S.upper()
strArray = ''.join(x for x in S.split('-'))
# print(len(strArray))
result = strArray
for i in range(len(strArray) - K,0, -K):
result = result[:i] + '-' + result[i:]
# result += strArray[i:i+K]
return ''.join(x for x in result)
# -
solution('2-4A0r7-4k', 4)
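# A couple of quick sanity checks of the formatting rule implemented above
# (illustrative test cases, not official ones):
assert solution('2-4A0r7-4k', 4) == '24A0-R74K'
assert solution('2-4A0r7-4k', 2) == '24-A0-R7-4K'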
# -
print(df & 7)
print(df | 1)
# +
# print(df << 1)
# TypeError: unsupported operand type(s) for <<: 'DataFrame' and 'int'
# +
# print(df << df)
# TypeError: unsupported operand type(s) for <<: 'DataFrame' and 'DataFrame'
# -
print(df > 3)
print((df > 3).all())
print((df > 3).all(axis=1))
print((df > 3).all(axis=None))
print(df.empty)
df_empty = pd.DataFrame()
print(df_empty.empty)
print(df.size)
print(df_empty.size)
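# The df used above comes from an earlier part of the notebook. A self-contained
# sketch with a small integer DataFrame (the column names are made up) reproduces
# the same element-wise behaviour:
import pandas as pd

df_demo = pd.DataFrame({'a': [1, 2, 3], 'b': [4, 5, 6]})  # made-up example data
print(df_demo & 7)          # element-wise bitwise AND with a scalar
print(df_demo > 3)          # element-wise comparison -> boolean DataFrame
print((df_demo > 3).all())  # reduce per column
print(df_demo.empty, df_demo.size)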
# ### Download pretrained weights on COCO
# + colab={} colab_type="code" id="tSQahaeC0T_d"
wget.download("http://models.tensorpack.com/FasterRCNN/COCO-MaskRCNN-R50FPN2x.npz", ".")
# + [markdown] colab_type="text" id="rR-D8zghz9r4"
# ### Launch the training
#
# See this [file](https://github.com/surfriderfoundationeurope/mot/blob/master/src/mot/object_detection/README.md) to choose the architecture you want and the corresponding pre-trained weights. The weights downloaded above correspond to ResNet50-FPN with 2X scheduling.
# Since we are not interested in segmentation, we set MODE_MASK=False. Also, because the dataset is pretty small, we don't need to train the network for a lot of steps. We decrease the learning rate at steps (250,500,750) * 8 GPUs, corresponding to the steps (2000, 4000, 6000).
# + colab={} colab_type="code" id="PvNO6o9Yz8DS"
# !python3 -m mot.object_detection.train --load COCO-MaskRCNN-R50FPN2x.npz --logdir resnet50_fpn --config DATA.BASEDIR=dataset_surfrider_cleaned MODE_MASK=False TRAIN.LR_SCHEDULE=250,500,750
# + [markdown] colab_type="text" id="DgDngrdMDsoR"
# ### Visualize predictions
# + colab={} colab_type="code" id="GEXZ6r18FrzY"
# !mkdir plastic_trained_resnet50_fpn
# + colab={} colab_type="code" id="YYdv-q_IEBaR"
wget.download("https://files.heuritech.com/raw_files/surfrider/resnet50_fpn/model-6000.index", "plastic_trained_resnet50_fpn")
wget.download("https://files.heuritech.com/raw_files/surfrider/resnet50_fpn/model-6000.data-00000-of-00001", "plastic_trained_resnet50_fpn")
# + colab={} colab_type="code" id="QM0eCcdTD8L8"
# !python3 -m mot.object_detection.predict --load plastic_trained_resnet50_fpn/model-6000 --predict dataset_surfrider_cleaned/Images_md5/9ddc58812851ad643114930524601f10 --config DATA.BASEDIR=dataset_surfrider_cleaned MODE_MASK=False
# + colab={} colab_type="code" id="Bs-QVy8KG-36"
from IPython.display import Image
Image('output.png')
# + colab={} colab_type="code" id="LyGKs4uRahRI"
| 3,131 |
/nx/pydotplus/svg/digraph.ipynb
|
1e4ae2649c37daf2a2186a2764e62dfd5e61ae3a
|
[
"MIT"
] |
permissive
|
ontouchstart/colab_notebook
|
https://github.com/ontouchstart/colab_notebook
| 0 | 0 | null | null | null | null |
Jupyter Notebook
| false | false |
.py
| 11,412 |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.15.2
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Genetic Algorithms
# This jupyter notebook applies genetic algorithm to the Makespan Problem. Genetic Algorithms are search methods based on selection and genetics. We define "chromosomes" as candidate solutions. The Makespan problem deals with the time a job takes from the beginning to the end. Given 1,...,n jobs with processing time p1,...,pn and m machines, the task is to assign jobs to machines in a way that minimizes time to finish all jobs.
#
# Cited literature comes from Sastry, Goldberg and Kendall's "Genetic Algorithms".
# ## 1. Initialization
import numpy as np
import random
import math
import matplotlib.pyplot as plt
from datetime import datetime
# In the initialization we first set up a list that illustrates the total number of machines. Starting from machine 0 until a given integer.
def define_machines(machine_number):
machines = []
for i in range(machine_number):
machines.append(i)
return machines
#print(define_machines(2))
# +
# DOES SOMEONE USE THIS? IF NOT, PLEASE DELETE.
# defining jobs
# jobs = []
# for i in range(1,301)
# jobs.append(i)
# jobs
# -
# In our example each job may take a while. The "process time" is random but has to be within an interval that is defined via a lower and an upper bound. Here these bounds are called borders. The output is an array that displays the time each job takes. That means positions 0 stands for job 0 and the integer that is taken at this position signifies the process time that is assigned to this job.
# +
def creating_process_times(stack1_border_high,stack1_border_low,stack1_number,stack2_border_high,
stack2_border_low,stack2_number):
stack1 = np.random.randint(stack1_border_low,stack1_border_high+1, size=stack1_number)
stack2 = np.random.randint(stack2_border_low,stack2_border_high+1, size=stack2_number)
process_times = np.concatenate((stack1 ,stack2), axis=0)
return process_times
# print(creating_process_times(4,1,10,8,5,20))
# -
# The initialization function randomly assigns jobs to machines. It takes the number of total machines (machine_number), the number of total jobs (job_number) and the population size. The population size is the most important factor when it comes to scalability and performance of our algorithm. A small population sizes might have the consequence that it will converge too quickly and delivers "substandard solutions". A large population size, on the other hand, may lead to "unnecessary expenditure of valuable computational time". The output is a list of arrays that displays all candidate solutions.
# +
def initialize(machine_number,job_number,population_size):
# Assigning random jobs to machines
population = []
chromosone = []
#job_locations = np.random.randint(0,machine_number,size = job_number)
for i in range(population_size):
chromosone = np.random.randint(0,machine_number,size = job_number)
population.append(chromosone)
job_locations = []
return population
print(initialize(2,30,100))
# -
# The evaluate_chromosomes function takes the process_time that each job takes, the chromosomes and the machines as input and evaluates how "fit" each chromosome is, i.e. how long it takes to get all the jobs done. In order to find out how long a chromosome takes, we have to add all processing times for each job that belong to one machine. The machine with the longest processing time defines the fitness of a chromosome.
# calculate processing time
def evaluate_chromosones(process_times,chromosone,machines):
time = 0
max_time = 0
for i in range(machines):
for j in range(len(chromosone)):
if chromosone[j] == i:
time = time + process_times[j]
if time > max_time:
max_time = time
time = 0
chromosome_fitness_level=max_time
return chromosome_fitness_level
print(evaluate_chromosones([ 5, 5, 5, 7, 7, 6, 5],[0, 1, 1, 0, 0, 1, 0],2))
# ## 2. Evaluation
# The Initialization has finished and the evaluate function calculates and array called population_fitness that displays the fitness level of each chromosome. Note that the order of the fitness values is important since it marks to which chromosome it belongs to.
# +
# take the function "compute time" and get fitness level for each chromosome, give back array of fitness values
def evaluate(population,process_times,machines):
population_fitness = []
for i in (population):
population_fitness.append(evaluate_chromosones(process_times, i,machines))
return population_fitness
# evaluate_chromosones(process_times, chromosone,machines):
# creating_process_times(stack1_border_high,stack1_border_low,stack1_number,stack2_border_high,stack2_border_low,stack2_number):
# initialize(machine_number,job_number,population_size):
evaluate(initialize(2,10,10), creating_process_times(4,1,5,8,5,5),2)
# -
# ## 3. Selection
# Now the survival-of-the-fittest mechanism comes into play and is applied to the candidate solutions. We chose two different selection processes. On the one hand, roulette wheel selection and on the other hand, tournament selection.
#
# In roulette wheel selection, the method receives the population, the fitness of the population as an array (order important) and the mating pool size. It first calculates the probability of each candidate according to its fitness value. Metaphorically speaking, we are designing a huge roulette wheel with slot sizes illustrating the probability of each candidate. Good solutions (i.e. smaller fitness values) have smaller slots (biased). Note that we are looking for the smallest fitness value since we want to minimize the processing time in our optimization problem. After calculating the cumulative probabilities (boundaries), a uniform random number called rand_mate $\in $ (0,1) is selected. If rand_mate is smaller than a boundary, the chromosome just below that boundary is selected. Consider also the case that rand_mate might be 1, in which case the mating pool falls back to the first chromosome of the population.
# +
def roulette_select(population,population_fitness,mating_pool_size):
# TODO: take the variable "mating pool" and get this number of chromosomes from the population (Roulette, Tournament)
# returns array of chromosomes (mating pool)
denominator = np.sum(population_fitness)
probabilities = np.true_divide(population_fitness, denominator)
boundaries = []
boundaries.append(0)
boundaries[0] = probabilities[0]
for i in range(1,len(probabilities)):
boundaries.append(0)
boundaries[i] = probabilities[i]+boundaries[i-1]
mating_pool = []
for i in range(mating_pool_size):
rand_mate = np.random.uniform(0,1)
for j in range(len(boundaries)):
if boundaries[j] > rand_mate:
mating_pool.append(population[j-1])
break
if rand_mate == 1:
mating_pool.append(population[0])
break
    # select the chromosomes according to the roulette wheel selection algorithm and return the mating_pool as chromosomes
return mating_pool
population = [[1, 0, 1, 0, 1, 0, 0, 1, 1, 1], [0, 0, 1, 0, 0, 1, 0, 1, 0, 1], [0, 1, 0, 1, 0, 1, 0, 1, 1, 1], [0, 1, 1, 1, 0, 1, 0, 0, 1, 1], [1, 0, 0, 0, 0, 1, 1, 1, 1, 0], [0, 0, 1, 1, 0, 0, 0, 1, 0, 1], [1, 0, 1, 0, 0, 0, 1, 1, 1, 0], [1, 1, 0, 0, 0, 1, 0, 1, 0, 1], [1, 1, 0, 0, 1, 0, 1, 0, 1, 0], [0, 1, 0, 0, 1, 1, 0, 1, 0, 1]]
population_fitness = [27, 28, 31, 27, 27, 30, 23, 23, 25, 24]
roulette_select(population,population_fitness,5)
# -
# The second selection method is called tournament selection. It also takes the population, its fitness array and the mating pool size as input. It basically takes the chromosomes that have the lowest fitness values. Note that we have an optimization problem where we want to minimize the total processing time.
# +
def tournament_select(population,population_fitness,mating_pool_size):
# TODO: take the variable "mating pool" and get this number of chromosomes from the population (Roulette, Tournament)
# returns array of chromosomes (mating pool)
mating_pool=[]
highest_fitness_level= 0
old_highest_fitnesslevel_location=[]
for i in range(0,mating_pool_size):
for j in range(0,len(population_fitness)):
if population_fitness[j]>highest_fitness_level and j not in old_highest_fitnesslevel_location:
highest_fitness_level=population_fitness[j]
print(highest_fitness_level)
mating_pool.append(highest_fitness_level)
old_highest_fitnesslevel_location.append(j)
print(old_highest_fitnesslevel_location)
highest_fitness_level=0
# mydict= dict(zip(population,population_fitness))
# print(mydict)
#for key, value in sorted(mydict.iteritems(), key=lambda (k,v): (v,k)):
# print "%s: %s" % (key, value)
return mating_pool
# return mydict
population = [[1, 0, 1, 0, 1, 0, 0, 1, 1, 1], [0, 0, 1, 0, 0, 1, 0, 1, 0, 1], [0, 1, 0, 1, 0, 1, 0, 1, 1, 1], [0, 1, 1, 1, 0, 1, 0, 0, 1, 1], [1, 0, 0, 0, 0, 1, 1, 1, 1, 0], [0, 0, 1, 1, 0, 0, 0, 1, 0, 1], [1, 0, 1, 0, 0, 0, 1, 1, 1, 0], [1, 1, 0, 0, 0, 1, 0, 1, 0, 1], [1, 1, 0, 0, 1, 0, 1, 0, 1, 0], [0, 1, 0, 0, 1, 1, 0, 1, 0, 1]]
population_fitness = [27, 28, 31, 27, 27, 30, 23, 23, 25, 24]
tournament_select(population,population_fitness,5)
# Issues:
# problem is that saving the old best fitness level locations in an array and checking for that does not work yet.
# and also when appending to the mating pool it is still appending the
# fitness level of the last position of the j loop when we actually want to append population[j]
# +
def tournament_select(population,population_fitness,mating_pool_size):
mydict= dict(zip(population_fitness,population))
print(mydict)
sorted_dict = sorted(mydict.items())
# print(sorted_dict)
# sorted_mating_pool = sorted_dict.items()
# sorted_mating_pool = mydict.values()
# sorted_dict.values()
# try:
# del sorted_dict['key']
# except KeyError:
# pass
# if 'key' in sorted_dict:
# del sorted_dict['key']
# population_fitness = list(map(int, population_fitness))
for i in range(len(population_fitness),0,1):
# sorted_dict.pop(population_fitness[i])
try:
del sorted_dict.key[i]
except KeyError:
pass
# sorted_mating_pool = sorted_dict.pop('key')
# print(sorted_mating_pool)
print(sorted_dict)
return sorted_dict
population = [[1, 0, 1, 0, 1, 0, 0, 1, 1, 1], [0, 0, 1, 0, 0, 1, 0, 1, 0, 1], [0, 1, 0, 1, 0, 1, 0, 1, 1, 1], [0, 1, 1, 1, 0, 1, 0, 0, 1, 1], [1, 0, 0, 0, 0, 1, 1, 1, 1, 0], [0, 0, 1, 1, 0, 0, 0, 1, 0, 1], [1, 0, 1, 0, 0, 0, 1, 1, 1, 0], [1, 1, 0, 0, 0, 1, 0, 1, 0, 1], [1, 1, 0, 0, 1, 0, 1, 0, 1, 0], [0, 1, 0, 0, 1, 1, 0, 1, 0, 1]]
population_fitness = [27, 28, 31, 27, 27, 30, 23, 23, 25, 24]
tournament_select(population,population_fitness,5)
# Two issues in this code:
# 1. Although I pass 10 values of population and corresponding fitness levels, only 7 are present in mydict
#    (duplicate fitness values overwrite each other when used as dictionary keys)
# 2. Not able to separate the values column from the sorted dict even after using the pop and del methods
# -
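# Both attempts above note open issues, so here is a minimal working sketch that keeps
# the same interface: select the mating_pool_size chromosomes with the lowest fitness,
# since this is a minimisation problem. Strictly speaking this is truncation (elitist)
# selection rather than classical tournament selection, which would repeatedly sample
# small random subsets and keep each subset's best member.
def tournament_select_sorted(population, population_fitness, mating_pool_size):
    # pair each chromosome with its fitness by index, so duplicate fitness
    # values do not collapse the way dict keys would
    order = sorted(range(len(population)), key=lambda i: population_fitness[i])
    return [population[i] for i in order[:mating_pool_size]]

tournament_select_sorted(population, population_fitness, 5)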
# ## 4. Recombination
# In recombination, two chromosomes are taken, called parent1 and parent2, and they are recombined (have children, here: child1 and child2) that might produce better outputs, i.e. lower fitness values. Note that we are looking for the lowest fitness value since we want to minimize the processing time.
#
# We used two different methods of recombination. On the one hand, one-point-crossover and on the other hand, uniform-crossover. In one-point crossover the parents are recombined in such way that child1 receives the values of parent1 for ever
| 12,288 |
/First step.ipynb
|
6e6aa743e9b7596d99c272826d6f0a4ea851eb0e
|
[] |
no_license
|
kuroosan/Coursera
|
https://github.com/kuroosan/Coursera
| 0 | 0 | null | null | null | null |
Jupyter Notebook
| false | false |
.py
| 4,786 |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.15.2
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Creating the Dataframe
# ### Importing the necessary libraries
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
# %matplotlib inline
import lxml.html as lh
import requests
from bs4 import BeautifulSoup
# ### Copying the webpage
wikipedia_link='https://en.wikipedia.org/wiki/List_of_postal_codes_of_Canada:_M'
page= requests.get(wikipedia_link)
# ### Extract the Table from the page
doc = lh.fromstring(page.content)
tr_elements = doc.xpath('//tr')
#Create empty list
col=[]
i=0
#For each row, store each first element (header) and an empty list
for t in tr_elements[0]:
i+=1
name=t.text_content()
print (i,name)
col.append((name,[]))
for j in range(1,len(tr_elements)):
#T is our j'th row
T=tr_elements[j]
    #If row is not of size 3, the //tr data is not from our table
if len(T)!=3:
break
#i is the index of our column
i=0
#Iterate through each element of the row
for t in T.iterchildren():
data=t.text_content()
#Check if row is empty
if i>0:
#Convert any numerical value to integers
try:
data=int(data)
except:
pass
#Append the data to the empty list of the i'th column
col[i][1].append(data)
#Increment i for the next column
i+=1
# ### Creating a Dataframe
Dict={title:column for (title,column) in col}
df=pd.DataFrame(Dict)
# ### Preprocessing
df.columns=[str(i).rstrip() for i in df.columns]
for i in df.columns:
df[i]=df[i].str.rstrip()
df=df[df['Borough']!='Not assigned']
df=df.drop([180],axis=0)
# Replace "Not assigned" neighborhoods with the name of their borough
df.loc[df['Neighborhood'] == 'Not assigned', 'Neighborhood'] = df['Borough']
df.shape[0]
# Convert img to numpy array
plt.imshow(img)
plt.show()
# -
def img2arr_fromURLs(url_list, resize = False):
    """
    Download every image URL and return the images as numpy arrays.
    Args
        - url_list: list of URLs
        - resize: bool, optionally resize each image to a fixed size
    Return
        - list of array
    """
    import requests
    from io import BytesIO
    from PIL import Image
    img_list = []
    for url in url_list:
        response = requests.get(url)
        img = Image.open(BytesIO(response.content))
        if resize:
            img = img.resize((224, 224))  # arbitrary fixed size, adjust as needed
        img_list.append(np.array(img))
    return img_list
# +
result = img2arr_fromURLs(df[0:5][1].values)
print("Total images that we got: %i " % len(result)) # if it is not 5, some links are broken

for im_arr in result:
    plt.imshow(im_arr)  # img2arr_fromURLs now returns numpy arrays directly
    plt.show()
| 2,590 |
/ugrad/CS6501/Presentation on Pac Learning in the Presence of One-sided Classification Noise.ipynb
|
aa990e9f7b948f60de2159d9e509635d61b7b584
|
[] |
no_license
|
CLARKBENHAM/side_projects
|
https://github.com/CLARKBENHAM/side_projects
| 0 | 2 | null | 2023-02-16T02:33:12 | 2021-10-04T05:47:52 |
Jupyter Notebook
|
Jupyter Notebook
| false | false |
.py
| 784,722 |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.15.2
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Union of K intervals
#
# +
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import math  # needed by theta_m() before the later cell that re-imports it
from IPython.display import HTML
HTML('''<script>
code_show=true;
function code_toggle() {
if (code_show){
$('div.input').hide();
} else {
$('div.input').show();
}
code_show = !code_show
}
$( document ).ready(code_toggle);
</script>
<form action="javascript:code_toggle()"><input type="submit" value="Click here to toggle on/off the raw code."></form>''')
# -
# <img src="IntroHead.png">
# <img src="intro.png">
# <img src="intro2.png">
# <img src="prob1.png">
# <img src="Thrm21.png">
# <img src="Thrm31.png">
# <img src="Thrm312.png">
# <img src="Thrm313.png">
# <img src="Thrm314.png">
# <img src="Thrm315.png">
# <img src = "UnK.png">
# <img src = "UnionAlgo.png">
# +
def made_intervals(permuted_ex):
made_intervals = [permuted_ex[:-1,2] != permuted_ex[1:,2]]#compare labels
startstop = [0] + [i if i %2 == 1 else 0 for i in np.cumsum(made_intervals)]
_, groups = np.unique(startstop, return_inverse = True)#dont exclude 0 group, need to leave in for spaceing
num_vals = max(groups)
indicies = [[] for i in range(num_vals+1)]#list of list of intervals locs [most specific]
for counter, val in enumerate(groups):
indicies[val].append(counter)#appends row index of interval at its group
return indicies
class union_interval:
"Assumes a uniform distribution of samples"
def __init__(self, k = 3, epsilon = 0.1, gamma = 0.1, space = [-5,5], noise = 0, exact_n = True, const_w = False):
self.intervals = np.sort(np.random.uniform(space[0],space[1],2*k)).reshape(k,2)#intervals always smaller than sample space
self.ep = epsilon
self.gamma = gamma
self.space = space #[space[0] + 0.0001, space[1] - 0.001]#ensure bu
self.noise = noise
self.exact_n = exact_n
self.const_w = const_w
self.k = k
self.examples = np.array((self.space[0] - 0.01, 0, 0)).reshape(1,-1)#will always be ignored, put in so first example is always false
def reset_examples(self):
self.examples = np.array((self.space[0] - 0.01, 0, 0)).reshape(1,-1)
def add_example(self, ex):
if ex.ndim == 1:#1d array
self.examples = np.vstack((self.examples, ex))
elif ex.ndim == 2:#2d array
self.examples = np.concatenate((self.examples, ex), axis =0)#needs to be list
def bin_search(intervals, i, k = 0):
j = len(intervals)
if j <= 1:
return k
if i < intervals[j//2]:
return union_interval.bin_search(intervals[:j//2], i, k)
else:
return union_interval.bin_search(intervals[j//2:], i, k + j//2)
def in_intervals(intervals, i):
indx = union_interval.bin_search(intervals[:,0], i)
return intervals[indx, 0] <= i <= intervals[indx, 1]
def sample(self, m = 1, samp = None, weight = None, label = None):
if m == 1:
samp = samp or np.random.uniform(self.space[0], self.space[1])#change sampling distribution here
weight = float(weight or self.const_w or np.random.uniform(0,1))#wieghts random between 0 and 1
            label = label or union_interval.in_intervals(self.intervals, samp)
self.add_example(np.array((samp, weight, label)))
else:
samples = np.random.uniform(*self.space, m)#change sampling distribution here
weights = np.random.uniform(int(self.const_w), 1, m)
labels = np.array([union_interval.in_intervals(self.intervals, i) for i in np.array(samples)])
self.add_example(np.column_stack((samples, weights, labels)))
def estimate_intervals(self):
"Uses tight bounds for determining region"#" as else might not be PAC learnable given divergent Distribution?"
self.examples = self.examples[np.argsort(self.examples[:,0]), :]#might as well keep it sorted
rand = np.random.choice(2, len(self.examples), p = [1-self.noise, self.noise])
permutes = np.array([i if not j else j for i,j in zip(rand, self.examples[:,2])])
permutes[0] = 0#a hack, can't have boundary start at first point
permuted_ex = np.concatenate((self.examples[:,:-1], permutes.reshape(-1,1)), 1)
indicies = made_intervals(permuted_ex)
del indicies[0]#remove 0 labeled indicies
weights = [0]*len(indicies)
for cnter, lst in enumerate(indicies):
weights[cnter] = np.sum(permuted_ex[lst, 1])
final_indxs = [j for i,j in sorted(zip(weights, indicies), key = lambda x: x[0])][-self.k:]#is sorted in ascending order
final_intervals = [0]*self.k
for cnt, lst in enumerate(final_indxs):
if len(lst) == 1:
final_intervals[cnt] = [permuted_ex[lst[0],0]]
else:
final_intervals[cnt] = list((permuted_ex[lst[0],0], permuted_ex[lst[-1],0]))#+ permuted_ex[lst,0][-1]#get's first and last value in index
return final_intervals, permutes
def calc_error(self, est = None):
error = 0
if est is None:
est, _ = self.estimate_intervals()
for i in range(self.k):
if isinstance(est[i], int):
est[i] = [float(est[i]) - 0.00001, float(est[i]) + 0.00001]
elif len(est[i]) == 1:
est[i] = [float(est[i][0]) - 0.00001, float(est[i][0]) + 0.00001]
        mistake_region = np.sort(np.append(np.ravel(est), np.ravel(self.intervals))).reshape(-1, 2)
for i in mistake_region:
error += abs(i[0]- i[1])
return error/abs(self.space[0] - self.space[1])
def theta_m(self):
return int(1/(self.ep*(1 - self.noise)) * (2*self.k * math.log(1/(self.ep*(1 - self.noise))) + math.log(1/self.gamma)))
def omega_m(self):
return int((self.k*2-1)/(self.ep*(1 - self.noise)))
# -
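# A quick sanity check of the two sample-size bounds implemented above, using the
# same parameter values as the test object created below (epsilon = gamma = 0.1,
# k = 5, noise eta = 0.3). This only re-evaluates the formulas outside the class.
import math
eps, gamma, k, eta = 0.1, 0.1, 5, 0.3
eff_eps = eps * (1 - eta)  # effective epsilon under one-sided noise
theta = 1 / eff_eps * (2 * k * math.log(1 / eff_eps) + math.log(1 / gamma))
omega = (2 * k - 1) / eff_eps
print(f"theta_m ~ {theta:.0f} samples, omega_m ~ {omega:.0f} samples")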
#init test
test1 = union_interval(k = 5, noise = 0.3, const_w = True)
test1.sample(m = 100)
# test1.examples = np.delete(test1.examples, 0, 0)
est, permutes = test1.estimate_intervals()
tru = test1.intervals
# print(est, "\n\n", tru)
test1.calc_error(est)
test1.theta_m()
test1.omega_m()
pass
# +
from matplotlib import colors
import matplotlib.transforms as mtransforms
import math
def plot_union(un_int, name = "test"):
fig = plt.figure(figsize = (16, 4))
ax = fig.add_subplot(1,1,1)
ax.spines['left'].set_position('center')
ax.spines['bottom'].set_position('center')
ax.spines['left'].set_color('white')
# Eliminate upper and right axes
ax.spines['right'].set_color('none')
ax.spines['top'].set_color('none')
# Show ticks in the left and lower axes only
ax.xaxis.set_ticks_position('bottom')
ax.get_yaxis().set_visible(False)
plt.xlim(un_int.space)
plt.ylim([-1,1])
clrs = un_int.examples[:,2]
wghts = un_int.examples[:,1]
xvals = un_int.examples[:,0]
cmap = colors.ListedColormap(['red', 'green'])
plt.scatter(xvals,[0]*len(xvals), s = wghts*200/un_int.k, c = clrs, cmap = cmap)#be better as bars not pnts
est_bnds, permutes = un_int.estimate_intervals()
error = un_int.calc_error(est_bnds)
plt.scatter(xvals,[-0.5]*len(xvals), s = wghts*200/un_int.k, c = permutes, cmap = cmap)#be better as bars not pnts
plt_bnds(est_bnds, ax, 'green', 0.2, est= True)
tru_bnds = un_int.intervals
plt_bnds(tru_bnds, ax, 'xkcd:lime', 0.05)
    plt.title(f"Union Bound Prediction vs. Actual of {name}.  \
    True Error = {error:.4f}")  # calc_error already normalises by the width of the sample space
plt.show()
return est_bnds, error, permutes
def plt_bnds(tru_bnds, ax, clr, alpha = 0.1, est= False):
trans = mtransforms.blended_transform_factory(ax.transData, ax.transAxes)
for bnd in tru_bnds:
x = np.arange(bnd[0] - 0.000001, bnd[-1] + 0.000001, 0.01)
clr = 'g'
if est:
clr = "b"
plt.plot(x, [0.8]*len(x), c = clr)
plt.plot(x, np.sin(x*min(24, 24/abs(bnd[0] - bnd[-1]+0.01)))*0.1+0.9, c = clr)
plt.plot(x, [-0.8]*len(x), c = clr)
plt.plot(x, np.sin(x*min(24, 24/abs(bnd[0] - bnd[-1]+0.01)))*-0.1 - 0.9, c = clr)
plt.axvline(x=bnd[0], c = clr)
plt.axvline(x=bnd[-1], c = clr)
ax.fill_between(x, 0, 1, where= x >= bnd[0],
facecolor=clr, alpha=alpha, transform=trans)#x limited to values between bnds, fills between all valid pnts
_, error, permutes = plot_union(test1, "test1")
# +
# from matplotlib import animation
import math
def get_prog(test1, steps = 12, extra = 3, plot_ep = True, calc_gamma = 1, plot_int = False, animate = False):
calced_intervals = [0]*(steps + extra -1)#only saves 1 set of interval
calced_error = np.empty([steps + extra - 1, calc_gamma])# will always be float [[0]*calc_gamma for i in range(steps + extra - 1)]
inc_m = int(test1.theta_m()//steps)
test1.reset_examples()
for m in range(1, steps + extra):
for rp in range(calc_gamma):
test1.sample(m = m * inc_m)#want to show "improvement" on the same sampleed points
calced_intervals[m - 1], _ = test1.estimate_intervals()#redone each time w/ the noise given
calced_error[m - 1, rp] = test1.calc_error(est = calced_intervals[m - 1])
if rp < calc_gamma - 1:#only appends the last samples drawn
test1.examples = test1.examples[:-inc_m, :]#removes most reccently sampled
if plot_int:
plot_union(test1, name = f"{m * test1.theta_m()//steps} Samples")
seps = np.arange(inc_m, inc_m*(steps + extra), inc_m)
mean_ep = np.ravel([np.mean(calced_error[i]) for i in range(steps + extra - 1)])
emp_gamma = [np.mean(np.ravel(calced_error[i]) >= test1.ep) for i in range(steps + extra - 1)]
plt.scatter(seps, mean_ep)
plt.axhline(y = test1.ep)
plt.title("Mean Epsilon")
plt.xlabel("Number of Samples")
plt.ylabel("Epsilon (under uniform distribution)")
plt.axvline(x = test1.theta_m())
plt.axvline(x = test1.omega_m())
plt.show()
plt.scatter(seps, emp_gamma)
    plt.title(f"Gamma based on {calc_gamma} Repetitions")
plt.xlabel("Number of Samples")
plt.ylabel("Gamma")
plt.axhline(y = test1.gamma)
plt.axvline(x = test1.theta_m())
plt.axvline(x = test1.omega_m())
plt.show()
test1 = union_interval(k = 5, noise = 0.3)
get_prog(test1, calc_gamma = 10, animate = False, plot_int = True)
| 10,832 |
/2020-03-18-warmup.ipynb
|
1b7a43e50904afc14949ddc9fbc5aef731f77dca
|
[] |
no_license
|
CodeupClassroom/curie-statistics-exercises
|
https://github.com/CodeupClassroom/curie-statistics-exercises
| 1 | 1 | null | null | null | null |
Jupyter Notebook
| false | false |
.py
| 12,979 |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.15.2
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Pre-class work 2: Bimodal, 1-dimensional distribution
#
# ## Task 1
# After running the code below, comment on the Stan output.
# 1. Use the Rhat value to comment on whether the Markov chains converged properly or not. Explain your answer.
#
# 2. What are the total number of samples and the effective number of samples? Are these values good or not?
#
#
# No: out of the 4 x 1000 = 4000 post-warm-up draws, n_eff is only about 2 and Rhat is about 5, far from the ideal of Rhat close to 1 and an effective sample size close to the total number of draws, so the chains did not converge properly.
#
# +
import pystan
import numpy as np
from scipy import stats
import matplotlib.pyplot as plt
import pandas
import seaborn
# %matplotlib inline
# +
# The Stan model. In this case we have no data. Instead we specify an
# unnormalized target distribution directly and use Stan to sample from it.
# Running this cell compiles the Stan model, which takes some time.
stan_data = {}
stan_code = """
parameters {
real x;
}
model {
target += log_sum_exp(normal_lpdf(x | -4, 0.5), normal_lpdf(x | 4, 1));
}
"""
stan_model = pystan.StanModel(model_code=stan_code)
# +
# Run Hamiltonian Monte Carlo using Stan with 4 Markov chains, a 1000 step
# warm-up phase and 1000 step sampling phase for each chain. The warm-up samples
# are discarded, so we are left with 4 x 1000 = 4000 samples.
parameters = ['x']
results = stan_model.sampling(data=stan_data)
print(results.stansummary(pars=parameters))
samples = results.extract()
# -
# Now, comment on the Stan output above.
#
# 1. Use the Rhat value to comment on whether the Markov chains converged properly or not. Explain your answer.
# 2. What are the total number of samples and the effective number of samples? Are these values good or not?
#
# ## Task 2
# Run the code below and use the plots to answer the following questions.
#
# 1. Are the samples correlated or reasonably independent?
# 2. What information does the pair plot provide?
#
# 1. The samples are extremely correlated at every lag (almost perfectly), so they are far from independent.
# 2. With a single parameter the pair plot mainly shows the marginal distribution of x, which makes the two separate modes of the target visible.
#
# +
# Plot sample autocorrelation for each parameter.
def plot_acf(x):
'''
Plot the autocorrelation function for a series x. This corresponds to the
acf() function in R. The series x is detrended by subtracting the mean of
the series before computing the autocorrelation.
'''
from scipy import signal
plt.acorr(
x, maxlags=20, detrend=lambda x: signal.detrend(x, type='constant'))
for param in parameters:
plt.figure(figsize=(12, 4))
plot_acf(samples[param])
plt.title(f'Autocorrelation of {param} samples')
plt.show()
# -
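# The effective sample size reported by Stan is driven by exactly this autocorrelation.
# As a rough, illustrative cross-check (a simplified estimator, not the one Stan uses),
# n_eff is approximately N / (1 + 2 * sum of autocorrelations):
def rough_n_eff(x, max_lag=200):
    x = np.asarray(x, dtype=float)
    x = x - x.mean()
    n = len(x)
    acf = np.correlate(x, x, mode='full')[n - 1:] / (np.arange(n, 0, -1) * x.var())
    rho = acf[1:max_lag]
    if np.any(rho < 0):
        rho = rho[:np.argmax(rho < 0)]  # truncate at the first negative autocorrelation
    return n / (1 + 2 * rho.sum())

print('Rough effective sample size for x:', rough_n_eff(samples['x']))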
# Make pair plot of the posteriors over all parameters of the model.
df = pandas.DataFrame(
data=np.transpose([samples[param] for param in parameters]),
columns=parameters)
seaborn.pairplot(df, size=4, plot_kws={'marker': '.', 'alpha': 0.25})
plt.show()
# ## Task 3
# Since the target distribution is 1-dimensional we can visualize it. This will not be the case with higher-dimensional distributions.
#
# Run the code below and answer the following question.
#
# 1. Are the Stan samples distributed according to the target distribution?
# 2. Relate your answer to question 1 to what you observed from the autocorrelation plot in Task 2.
#
# Yes, overall the samples follow the target distribution, which is surprising given the high autocorrelation seen in Task 2: each individual chain tends to stay stuck in one mode, but the four chains together happen to cover both modes.
# +
# Plot the 4000 Stan samples below the 1-dimensional pdf.
def p(x):
return (stats.norm.pdf(x, -4, 0.5) + stats.norm.pdf(x, 4, 1)) / 2
plt.figure(figsize=(12, 6))
plot_x = np.linspace(-6, 8, 500)
plt.plot(plot_x, p(plot_x))
plt.plot(
samples['x'],
stats.uniform.rvs(loc=-0.05, scale=-0.01+0.05, size=len(samples['x'])),
'k.', alpha=0.05)
plt.ylim(-0.05, 0.5)
plt.xlabel('x')
plt.title('Stan samples and true pdf')
plt.show()
x = np.mean(samples['x'] < -1)
print('Split of samples between the two modes (should be about 0.5):', min(x, 1-x))
# A dictionary maps keys to values; the value corresponding to a key
# can be accessed using square brackets [] or by using ".get()"
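# A tiny illustration of the difference between the two access styles described above:
word_demo = {"1": "one", "2": "two"}
print(word_demo["1"])            # square brackets raise KeyError if the key is missing
print(word_demo.get("9"))        # .get() returns None for a missing key
print(word_demo.get("9", "!"))   # or a default value that you supply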
# phone number to words using a dictionary
number = input("enter your phone number")
print(number)
word = { "1":"one","2":"two","3":"three","4":"four"}
for i in number:
    #print(word[i],end=" ")   we could use this too, but if the user enters anything that is not a dictionary key
    # it will raise a KeyError
    #print(word.get(i),end = "") this would print None if the key is not recognized
    print(word.get(i,"!"),end = " ")# this prints ! instead of None for unknown keys
# my work
txt = input("enter your text >")
words = txt.split(" ")
final_sentence = ""
emoji={":)":"😀",
":(":"☹️"
}
for item in words:
if item in emoji:
final_sentence += emoji.get(item) + " "
continue
final_sentence += item + " "
print(final_sentence)
#mosh's work
txt = input("enter your text >")
words = txt.split(" ")
final_sentence = ""
emoji={":)":"😀",
":(":"☹️"
}
for item in words:
final_sentence += emoji.get(item,item) + " "
print(final_sentence)
# # functions, arguments, parameters, return values
# Arguments are the actual values that we specify when calling a function,
# while parameters are placeholders for those values.
# Python accepts arguments in 2 ways:
# 1) positional arguments - the default
#
# 2) keyword arguments - if there are multiple numerical values it is preferred to use keyword arguments
#    because they improve readability.
# If we mix both in one call, the positional arguments must come first and all keyword arguments after them.
def test_function(first_name,second_name,third_name):
print(f'Hi {first_name} {second_name} {third_name} ! ')
test_function("first",third_name="second",second_name="third")
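# To see the mixing rule above in action (reusing test_function from the cell above):
# keyword arguments may follow positional ones, but not the other way around.
test_function("first", second_name="second", third_name="third")   # valid
# test_function(first_name="first", "second", "third")  # SyntaxError: positional argument follows keyword argument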
# # anticipate and manage errors
# If a python program runs successfully, its exit code will be "0".
# Common errors include ValueError and ZeroDivisionError.
#
# As good programmers we should anticipate these kinds of errors
#
# ### use try except
#
# +
try:
age = int(input("enter your age > "))
print (1000/age)
except ValueError:
print("Invalid Entry")
except ZeroDivisionError:
print("input cant be Zero")
# -
# # classes
# A class is a blueprint for objects, and objects are instances of a class.
#
# By convention the first letter of a class name is capitalized; if the name has multiple words, capitalize the first letter of each word.
#
class TestClass:
def draw():
print("draw")
def kill(self):
print("kill")
object1 = TestClass()
object1.kill()
# we can also add attributes for each objects like variable as shown below
object1.variable1 = 199
print(object1.variable1)
# # constructor
class TestClass:
#constructor
def __init__(self,x_cord,y_cord):
self.x = x_cord
self.y = y_cord
def draw(self):
print(f"points are ({self.x},{self.y})")
def kill(self):
print("kill")
object1 = TestClass(12,90)
object1.kill()
object1.draw()
# we can also add attributes for each objects like variable as shown below
object1.variable1 = 199
print(object1.variable1)
# # error
# ---
# ```
# class Laptop:
# def details():
# print('Hello! I am a laptop.')
#
# laptop1 = Laptop()
# laptop1.details()
# ```
# ---
# ### Output
#
# ```
# Traceback (most recent call last):
# File "example1.py", line 6, in <module>
# laptop1.details()
# TypeError: details() takes 0 positional arguments but 1 was given
#
# ```
# ---
# You might be wondering: you did not pass any arguments when you called the details() method on the laptop1 object, so why is Python throwing the TypeError?
#
# ---
# Here is why. By default, if the method is not a static python method, then implicitly the object (self) is passed as argument. So, when you called laptop1.details(), it is actually being called as laptop1.details(laptop1).
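# A quick sketch (not from the original text) of the two usual fixes for this error,
# reusing the Laptop example quoted above: accept self, or mark the method as static.
class Laptop:
    def details(self):
        print('Hello! I am a laptop.')

class LaptopStatic:
    @staticmethod
    def details():
        print('Hello! I am a laptop.')

Laptop().details()
LaptopStatic.details()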
# ## INHERITANCE IN PYTHON
class Animal:
def walk(self):
print("walk")
class Cat(Animal):# this means all the methods of the Animal class are inherited by the Cat class.
def meow(self):
print("meow")
object2 = Cat()
object2.walk()
# # dice roll using tuple and class
# +
import random
class Dice:
def __init__(self):
self.numbers = [1, 2, 3, 4, 5, 6]
        print(random.choice(self.numbers))  # "numbers" alone would be a NameError; use the instance attribute
def roll(self):
number_tuple = (random.choice(self.numbers),random.choice(self.numbers))
print(number_tuple)
object_dice = Dice()
object_dice.roll()
# -
# # dice roll mosh
# +
import random
class Dice:
def roll(self):
# we don't need to specify tuples in brackets ,by default python will take it as tuples
return random.randint(1,6),random.randint(1,6)
object_dice = Dice()
object_dice.roll()
# -
# # directories pathlib
from pathlib import Path # here path is a class
path = Path("test_directory")
#return boolean
print(path.exists())
#create a directory with name test_directory
if not path.exists():
path.mkdir()
# to remove the same directory
#path.rmdir()
# # using the glob method
# +
from pathlib import Path # here path is a class
path1 = Path()# no argument implies current directory
# this will print everything inside that directory * means everything
for file in path1.glob("*"):
print(file)
print("_"*20)
# to print only the files just modify the pattern
for file in path1.glob("*.*"):
print(file)
print("_"*20)
# to print only some specific files with .py extension
for file in path1.glob("*.py"):
print(file)
print("_"*20)
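# A small extension of the pattern above (sketch): "**" makes glob descend into
# sub-directories, and rglob() is a shorthand for that recursive search.
for file in path1.glob("**/*.py"):
    print(file)
print("_"*20)
for file in path1.rglob("*.ipynb"):
    print(file)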
| 9,627 |
/auto-encodeurs.ipynb
|
31caedf01387aba2241e7e251abb296c1ee7fee2
|
[] |
no_license
|
MichelYABA/DeepLearning
|
https://github.com/MichelYABA/DeepLearning
| 0 | 0 | null | null | null | null |
Jupyter Notebook
| false | false |
.py
| 4,343,369 |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.15.2
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] id="c0_Bvdg-Egih"
# # <center>PRACTICAL LECTURE 4</center>
#
# # <center>PART1. Autoencoders </center>
# + [markdown] id="XFBbQUXSE3OF"
# 1. Import libraries
# + colab={"base_uri": "https://localhost:8080/"} id="yXRAfjqr0MLn" outputId="f62122ff-78ae-4256-b305-d6fadddf97b7"
from google.colab import drive
drive.mount('/content/gdrive')
# + colab={"base_uri": "https://localhost:8080/"} id="gDgrApoi0Iz-" outputId="fc98d0fb-7156-43ae-e0d3-6e73a9aadc94"
# %cd gdrive/My Drive/tpsDeepLearning/
# + id="F_ZTUfXBD9iK"
import torch
import torchvision
from torch import nn
from torch.utils.data import DataLoader
from torchvision import transforms
from torchvision.datasets import MNIST
from matplotlib import pyplot as plt
# + [markdown] id="IDn4Ict6E-V5"
# 2. Convert vector to image
# The following function allows to convert a vector to a image
#
# + id="AsIx4-1BFYAr"
def to_img(x):
x = 0.5 * (x + 1)
x = x.view(x.size(0), 28, 28)
return x
# + [markdown] id="aRyXp2mxFg8u"
# 3. We now write a function that displays the images using the imshow() function.
# + id="a9Zuz8YwFggJ"
def display_images(in_, out, n=1):
for N in range(n):
if in_ is not None:
in_pic = to_img(in_.cpu().data)
plt.figure(figsize=(18, 6))
for i in range(4):
plt.subplot(1,4,i+1)
plt.imshow(in_pic[i+4*N])
plt.axis('off')
out_pic = to_img(out.cpu().data)
plt.figure(figsize=(18, 6))
for i in range(4):
plt.subplot(1,4,i+1)
plt.imshow(out_pic[i+4*N])
plt.axis('off')
# + [markdown] id="Umk1cbovFzqr"
# 4. Define a data loading step and load the MNIST dataset
# + id="dK0bEX43Fzaa"
batch_size = 256
img_transform = transforms.Compose([
transforms.ToTensor(),
transforms.Normalize((0.5,), (0.5,))
])
dataset = MNIST('./data', transform=img_transform, download=True)
dataloader = DataLoader(dataset, batch_size=batch_size, shuffle=True)
# + id="YZ7u45gU3FcK" colab={"base_uri": "https://localhost:8080/"} outputId="55abe930-0dd2-4a9d-82a9-4f26f9cf8cf8"
dataset
# + [markdown] id="Y3uruhfZGJu7"
# 5. Fix the used device
# + id="pcQRF6yMGJ5z"
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
# + [markdown] id="vjX05vNdGbjB"
# 6. Define the Autoencoder model architecture and reconstruction loss
# with :
# n = 28 x 28 = 784
#
# A. Use d = 30 for standard AE (under-complete hidden layer)
#
# B. Use d = 500 for denoising AE (over-complete hidden layer)
#
# C. Explain the difference between standard AE and denoising AE.
# Analyse the results and conclude.
#
# + [markdown] id="RUPLWMw-8F8g"
# ## A. Use d = 30 for standard AE (under-complete hidden layer)
# + colab={"base_uri": "https://localhost:8080/"} id="pE0bp1177Aun" outputId="d530f8ff-56a8-48fa-aa7e-416b14542b32"
28*28
# + id="Ii9cRbguGbu1"
class Autoencoder(nn.Module):
def __init__(self, d):
super().__init__()
        # the activation function used here is the hyperbolic tangent (tanh)
        # this function is useful when the output does not have to be a probability
        # it ranges between -1 and 1
self.encoder = nn.Sequential(
nn.Linear(28 * 28, d),
nn.Tanh(),
)
self.decoder = nn.Sequential(
nn.Linear(d, 28 * 28),
nn.Tanh(),
)
    # function that encodes and then decodes the information
    def forward(self, x):
        x = self.encoder(x) # encoding step, from the input layer to the single hidden layer
        x = self.decoder(x) # decoding step, from the hidden layer back to the output layer
return x
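# To make the comparison between the two settings used below concrete, a quick
# parameter count for d=30 and d=500 (each Linear layer has 784*d weights plus biases):
for d_test in (30, 500):
    ae_tmp = Autoencoder(d_test)
    n_params = sum(p.numel() for p in ae_tmp.parameters())
    print(f"d={d_test}: {n_params} trainable parameters")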
# + id="fewXbX15gPSb"
d=30;
model = Autoencoder(d).to(device)
criterion = nn.MSELoss() # define the criterion used as the loss function
# + [markdown] id="zt6sfb52Pfwm"
# 7. Configure the optimiser. We use here : learning_rate equals to 1e-3
# + id="sIh1QdvGPf7u"
learning_rate = 1e-3 # i.e. 0.001, a reasonable value
# the optimizer applies the gradient algorithm to minimise the loss
optimizer = torch.optim.Adam(
    model.parameters(), # retrieves all the parameters of the autoencoder
    lr=learning_rate, # controls how fast the weights are adjusted
)
# + [markdown] id="qkCI5HPzPzRX"
# 8. Train the standard autoencoder and the denoising autoencoder using the following code :
#
# + id="iAXJUQIvPza6"
def training_sae(num_epochs, model, criterion, optimizer):
#num_epochs = num_epochs
epoch_loss_sae =[]
# do = nn.Dropout() # comment out for standard AE
for epoch in range(num_epochs):
for data in dataloader:
img, _ = data
img = img.to(device)
img = img.view(img.size(0), -1)
# noise = do(torch.ones(img.shape)).to(device)
# img_bad = (img * noise).to(device) # comment out for standard AE
# ===================forward=====================
output = model(img) # feed <img> (for std AE) or <img_bad> (for denoising AE)
            loss = criterion(output, img.data) # compute the reconstruction loss (MSE)
            # ===================backward====================
            optimizer.zero_grad() # set the gradients to zero
            loss.backward() # compute the gradients; this tells us in which direction to adjust the weights
            # i.e. whether each weight should increase or decrease
            optimizer.step() # update the weights; this determines how strongly they are changed
        # ===================log========================
        print(f'epoch [{epoch + 1}/{num_epochs}], loss:{loss.item():.4f}') # :.4f keeps 4 digits after the decimal point
epoch_loss_sae.append(loss.item())
display_images(None, output) # pass (None, output) for std AE, (img_bad, output) for denoising AE
return epoch_loss_sae
# + colab={"base_uri": "https://localhost:8080/", "height": 1000} id="TZhoo8V8hY3K" outputId="fa274222-b733-4e6b-8648-69d830cd1d3a"
nb_epoch = 20
epoch_loss_sae = training_sae(nb_epoch, model, criterion, optimizer)
# + [markdown] id="KyTz7uk4QZF2"
# 9. Visualise a few kernels of the encoder :
#
# + id="AdF3pA2JQZRI" colab={"base_uri": "https://localhost:8080/", "height": 1000} outputId="8f5bb40c-39ce-4c51-d8ec-2cb74e2258ff"
display_images(None, model.encoder[0].weight, 5)
# + [markdown] id="nBmYjk1N8RDH"
# ## B. Use d = 500 for denoising AE (over-complete hidden layer)
#
# + id="zHkiCAQB8QQg"
d=500;
model1 = Autoencoder(d).to(device);
# + [markdown] id="OYeBbI3y8zRt"
# Configure the optimiser. We use here : learning_rate equals to 1e-3
# + id="7bvuAJdx8qMT"
# l'optimizer permet d'appliquer l'algo du gradient pour minimiser le coût
optimizer1 = torch.optim.Adam(
model1.parameters(), # permet de récupérer tous les paramètres de l'auto-encodeur
lr=learning_rate, # permet de déterminer la vitesse à laquelle on ajuste les poids
)
# + [markdown] id="3WkjtfQZ9B5_"
# Train the standard autoencoder and the denoising autoencoder using the following code :
# + id="jkfjDtTu7jAk"
def training_dae(num_epochs, model, criterion, optimizer):
epoch_loss_dae =[]
do = nn.Dropout() # comment out for standard AE
for epoch in range(num_epochs):
for data in dataloader:
img, _ = data
img = img.to(device)
img = img.view(img.size(0), -1)
            noise = do(torch.ones(img.shape)).to(device) # the dropout noise mask
            img_bad = (img * noise).to(device) # comment out for standard AE
            # ===================forward=====================
            output = model(img_bad) # feed <img> (for std AE) or <img_bad> (for denoising AE)
            loss = criterion(output, img.data) # compute the reconstruction loss (MSE)
            # ===================backward====================
            optimizer.zero_grad() # set the gradients to zero
            loss.backward() # compute the gradients; this tells us in which direction to adjust the weights
            # i.e. whether each weight should increase or decrease
            optimizer.step() # update the weights; this determines how strongly they are changed
# ===================log========================
print(f'epoch [{epoch + 1}/{num_epochs}], loss:{loss.item():.4f}')
epoch_loss_dae.append(loss.item())
display_images(img_bad, output) # pass (None, output) for std AE, (img_bad, output) for denoising AE
return epoch_loss_dae
# + colab={"base_uri": "https://localhost:8080/", "height": 1000} id="o24ofxnljOWx" outputId="08dfcbb8-ff44-4036-8d60-878c590eed1a"
epoch_loss_dae = training_dae(nb_epoch, model1, criterion, optimizer1)
# + colab={"base_uri": "https://localhost:8080/", "height": 1000} id="d7DltoUcPtcc" outputId="fe4b52aa-b5b3-409b-b7b2-441942e2a39f"
display_images(None, model1.encoder[0].weight, 5)
# + [markdown] id="jvjIT41j-eCf"
# ## C. Explain the difference between standard AE and denoising AE.
# + id="ZowepiMm4Yh6"
def displayLoss(listeLoss, catLoss):
plt.plot(listeLoss)
#plt.plot(epoch_loss_dae)
plt.title('model loss')
plt.ylabel('loss')
plt.xlabel('epoch')
plt.legend(catLoss, loc='upper left')
plt.show()
# + [markdown] id="6WqnYzDHRGRY"
# 10. Analyse the obtained results.
# + [markdown] id="PbxVuFA-RUoc"
# To better analyse the results obtained, we chose to plot the evolution of the loss over the epochs for the 2 types of autoencoder models.
# + colab={"base_uri": "https://localhost:8080/", "height": 295} id="QCQgZNwuDvB4" outputId="8c49b08d-f29f-4305-999f-91829803839d"
plt.plot(epoch_loss_sae)
plt.plot(epoch_loss_dae)
plt.title('model loss')
plt.ylabel('loss')
plt.xlabel('epoch')
plt.legend(['loss_SAE', 'loss_DAE'], loc='upper left')
plt.show()
# + [markdown] id="QRmwWgyIRqVh"
# The graph above shows the evolution of the loss for the standard autoencoder (blue line) on the one hand and for the denoising autoencoder (red line) on the other.
# - For the SAE, the loss ranges from 0.20 down to 0.06. It keeps decreasing over the epochs right up to the last one, which in itself is good news. But since the loss curve keeps falling until the very end and never stabilises, the model has not finished learning. We would therefore need more epochs to train this model until it starts to stabilise.
# - For the DAE, the loss ranges from 0.07 to 0.04, which is already good. But from epoch 7 onward the loss barely decreases anymore, which means training could be stopped at about 7 epochs to avoid over-fitting, which would make the model less effective on new data.
#
# The advantage of the DAE over the SAE can be explained by the value of "d": the smaller d is, the more the image is compressed, and a heavily compressed image leads to a reconstruction that differs more from the original.
#
# Another element that can influence the training results is the learning rate. Here it is 0.001; although this value is acceptable, we will try giving it a slightly larger and a slightly smaller value.
# + [markdown] id="xn5s5W7NV1PX"
# 11. Changes the parameters of the Autoencoder and analyse theirs impact. Conclude.
# + [markdown] id="7eFzUmEs2CRJ"
#
# + [markdown] id="2WP24F2I3X3s"
# - Choosing a larger learning rate (0.01)
# + colab={"base_uri": "https://localhost:8080/", "height": 1000} id="oeIXunT-hunG" outputId="9d196e48-3a04-49df-f38b-aa26b8fb271a"
d=30;
model2 = Autoencoder(d).to(device)
criterion = nn.MSELoss() # on définit un critère qui va nous servir pour la fonction de coût
learning_rate = 1e-2 # soit 0.001, ce qui est pas mal
# l'optimizer permet d'appliquer l'algo du gradient pour minimiser le coût
optimizer2 = torch.optim.Adam(
model2.parameters(), # permet de récupérer tous les paramètres de l'auto-encodeur
lr=learning_rate, # permet de déterminer la vitesse à laquelle on ajuste les poids
)
nb_epoch = 20
epoch_loss_sae = training_sae(nb_epoch, model2, criterion, optimizer2)
# + colab={"base_uri": "https://localhost:8080/", "height": 295} id="vQ_ACE_D4txG" outputId="90558e2c-76e1-4320-ac72-afbdd969ae3a"
displayLoss(epoch_loss_sae, ['loss_SAE'])
# + [markdown] id="g_KaQhgs4Lx1"
# With a high learning rate, training indeed takes less time, but the output image deteriorates as the model is trained. The loss is very unstable and fluctuates over the epochs.
# + [markdown] id="0a0a0lkW3_Ej"
# - Choosing a smaller learning rate (0.0001)
# + colab={"base_uri": "https://localhost:8080/", "height": 1000} id="SehpUQ6g7Mmt" outputId="242f3487-7972-4937-9600-547309086a48"
d=30;
model3 = Autoencoder(d).to(device)
criterion = nn.MSELoss() # define the criterion used as the loss function
learning_rate = 1e-4 # i.e. 0.0001, a smaller learning rate
# the optimizer applies the gradient algorithm to minimise the loss
optimizer3 = torch.optim.Adam(
    model3.parameters(), # retrieves all the parameters of the autoencoder
    lr=learning_rate, # controls how fast the weights are adjusted
)
nb_epoch = 20
epoch_loss_sae = training_sae(nb_epoch, model3, criterion, optimizer3)
# + colab={"base_uri": "https://localhost:8080/", "height": 295} id="B2dOMUYM5iuz" outputId="7f87fde6-3b16-48f3-e982-1395a2a70a15"
displayLoss(epoch_loss_sae, ['loss_SAE'])
# + [markdown] id="2kTr6FYl5Y0v"
# With a learning rate that is too small, the loss still decreases over the epochs but training takes much longer, hence the need for more training epochs before the loss stabilises. Finally, the reconstructed image is worse than the one obtained with a rate of 0.001.
# + [markdown] id="vnMZoHBt6OzA"
# - Increasing the number of epochs (to 100)
# + colab={"base_uri": "https://localhost:8080/", "height": 1000} id="GKX5fuJnjfru" outputId="f1bd3962-fd0d-450b-9a5a-ae3b59fa20cd"
d=30;
model4 = Autoencoder(d).to(device)
criterion = nn.MSELoss() # define the criterion used as the loss function
learning_rate = 1e-3 # i.e. 0.001, a reasonable value
# the optimizer applies the gradient algorithm to minimise the loss
optimizer4 = torch.optim.Adam(
    model4.parameters(), # retrieves all the parameters of the autoencoder
    lr=learning_rate, # controls how fast the weights are adjusted
)
nb_epoch = 100
epoch_loss_sae = training_sae(nb_epoch, model4, criterion, optimizer4)
# + colab={"base_uri": "https://localhost:8080/", "height": 295} id="R6wgYqb96mBs" outputId="b30eec23-5879-4811-e9db-713428dcc9ce"
displayLoss(epoch_loss_sae, ['loss_SAE'])
# + [markdown] id="1vxXhP5_-9Rx"
# The loss stabilises from about epoch 50 onwards. That said, we would have to go from 20 to 50 epochs.
# + colab={"base_uri": "https://localhost:8080/", "height": 1000} id="SPm2wBMc_XgE" outputId="9b06d8b0-902f-475e-b1e4-79d13be21938"
d=30;
model5 = Autoencoder(d).to(device)
criterion = nn.MSELoss() # define the criterion used as the loss function
learning_rate = 1e-3 # i.e. 0.001, a reasonable value
# the optimizer applies the gradient algorithm to minimise the loss
optimizer5 = torch.optim.Adam(
    model5.parameters(), # retrieves all the parameters of the autoencoder
    lr=learning_rate, # controls how fast the weights are adjusted
)
nb_epoch = 30
epoch_loss_sae = training_sae(nb_epoch, model5, criterion, optimizer5)
# + colab={"base_uri": "https://localhost:8080/", "height": 295} id="1WCck9IB_f0M" outputId="36a94e94-9fc1-4228-99ec-dac5abc2ffad"
displayLoss(epoch_loss_sae, ['loss_SAE'])
# + [markdown] id="K7FK0m4hDjcD"
# The loss starts to stabilise after epoch 20, so choosing 30 epochs is reasonable.
# + [markdown] id="yZ4xQzePD06V"
# - Changing the encoder output size d, with a learning rate of 0.001 and 30 epochs
# + [markdown] id="kCahwkQUWcoD"
# We first increase the value of d in the SAE, taking d=100.
# + colab={"base_uri": "https://localhost:8080/", "height": 1000} id="D8l0wHm6VzQr" outputId="aadf3171-a1b8-4633-c33e-baa83753dc38"
d = 100
model6 = Autoencoder(d).to(device)
criterion = nn.MSELoss() # define the criterion used as the loss function
learning_rate = 1e-3 # i.e. 0.001, a reasonable value
# the optimizer applies the gradient algorithm to minimise the loss
optimizer6 = torch.optim.Adam(
    model6.parameters(), # retrieves all the parameters of the autoencoder
    lr=learning_rate, # controls how fast the weights are adjusted
)
nb_epoch = 30
epoch_loss_sae = training_sae(nb_epoch, model6, criterion, optimizer6)
# + colab={"base_uri": "https://localhost:8080/", "height": 295} id="yf6mHZBVZ__u" outputId="0b9e0543-b1d7-4082-beef-56af6ad6c1b0"
displayLoss(epoch_loss_sae, ['loss_SAE'])
# + [markdown] id="VOse3HrILz62"
# With an encoder output size of 100, the reconstructed image is closer to the original. The loss also goes from 0.11 down to 0.01 and then stabilises after epoch 18.
# + [markdown] id="7FQ-pxCVMexe"
# - With the denoising autoencoder (DAE)
#     - Changing the encoder output size (d=30)
# + colab={"base_uri": "https://localhost:8080/", "height": 1000} id="wLR9wjxdykwd" outputId="fab895af-9b92-4ac9-8687-42a510d1b166"
d = 30
model7 = Autoencoder(d).to(device)
criterion = nn.MSELoss() # define the criterion used as the loss function
learning_rate = 1e-3 # i.e. 0.001, a reasonable value
# the optimizer applies the gradient algorithm to minimise the loss
optimizer7 = torch.optim.Adam(
    model7.parameters(), # retrieves all the parameters of the autoencoder
    lr=learning_rate, # controls how fast the weights are adjusted
)
nb_epoch = 20
epoch_loss_dae = training_dae(nb_epoch, model7, criterion, optimizer7)
# + colab={"base_uri": "https://localhost:8080/", "height": 295} id="mQ4nu5Hy_OU3" outputId="73bb1739-5747-482b-8d2b-9e0121bc2fe3"
displayLoss(epoch_loss_dae, ['loss_DAE'])
# + [markdown] id="eSTR8dCR2U-L"
# When we reduce the dimension of the hidden code to 30, the reconstructed image is less sharp than the one obtained with a dimension of 500.
# + [markdown] id="LPNtpCrCOFq_"
# - Conclusion
#
# For both standard and denoising autoencoders, the choice of the hidden dimension "d" is decisive for the quality of the output.
# + [markdown] id="oPBblJf0Qgld"
# # PART2. Convolutional Neural Networks
# + [markdown] id="yPcYpc0MSqPi"
# 1. Import the librariries and set the parameters
# + id="Iw3o0hACOtwB"
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
from torchvision import datasets, transforms
import matplotlib.pyplot as plt
import numpy as np  # the helper functions below use the np alias
# + id="_8NKeHdMQg_l"
def set_default(figsize=(10, 10), dpi=100):
plt.style.use(['dark_background', 'bmh'])
plt.rc('axes', facecolor='k')
plt.rc('figure', facecolor='k')
plt.rc('figure', figsize=figsize, dpi=dpi)
def plot_data(X, y, d=0, auto=False, zoom=1):
X = X.cpu()
y = y.cpu()
plt.scatter(X.numpy()[:, 0], X.numpy()[:, 1], c=y, s=20, cmap=plt.cm.Spectral)
plt.axis('square')
plt.axis(np.array((-1.1, 1.1, -1.1, 1.1)) * zoom)
if auto is True: plt.axis('equal')
plt.axis('off')
_m, _c = 0, '.15'
plt.axvline(0, ymin=_m, color=_c, lw=1, zorder=0)
plt.axhline(0, xmin=_m, color=_c, lw=1, zorder=0)
def plot_model(X, y, model):
model.cpu()
mesh = np.arange(-1.1, 1.1, 0.01)
xx, yy = np.meshgrid(mesh, mesh)
with torch.no_grad():
data = torch.from_numpy(np.vstack((xx.reshape(-1), yy.reshape(-1))).T).float()
Z = model(data).detach()
Z = np.argmax(Z, axis=1).reshape(xx.shape)
plt.contourf(xx, yy, Z, cmap=plt.cm.Spectral, alpha=0.3)
plot_data(X, y)
set_default()
# function to count number of parameters
def get_n_params(model):
np=0
for p in list(model.parameters()):
np += p.nelement()
return np
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
# + [markdown] id="H4huenTJTUgi"
# 2. Load the Dataset (MNIST) using PyTorch DataLoader utilities and visualize some images :
#
# + id="cesTQNZ8TUtz" colab={"base_uri": "https://localhost:8080/", "height": 459} outputId="1f81c7d7-123b-4960-c692-6fb41396eea9"
input_size = 28*28 # images are 28x28 pixels
output_size = 10 # there are 10 classes
train_loader = torch.utils.data.DataLoader(datasets.MNIST('./data', train=True, download=True,
transform=transforms.Compose([
transforms.ToTensor(),
transforms.Normalize((0.1307,), (0.3081,))
])),
batch_size=64, shuffle=True)
test_loader = torch.utils.data.DataLoader(
datasets.MNIST('./data', train=False, transform=transforms.Compose([
transforms.ToTensor(),
transforms.Normalize((0.1307,), (0.3081,))
])),
batch_size=1000, shuffle=True)
plt.figure(figsize=(16, 6))
for i in range(10):
plt.subplot(2, 5, i + 1)
image, _ = train_loader.dataset.__getitem__(i)
plt.imshow(image.squeeze().numpy())
plt.axis('off');
# + [markdown] id="4xeU-uzgTqAt"
# 3. Create the model classes
#
#
#
# + id="zJQK3WnVTqKM"
# fully connected neural network
class FC2Layer(nn.Module):
    # input_size : number of neurons in the input layer
    # n_hidden : number of neurons in each hidden layer
    # output_size : number of neurons in the output layer
def __init__(self, input_size, n_hidden, output_size):
super(FC2Layer, self).__init__()
self.input_size = input_size
self.network = nn.Sequential(
nn.Linear(input_size, n_hidden),
nn.ReLU(),
nn.Linear(n_hidden, n_hidden),
nn.ReLU(),
nn.Linear(n_hidden, output_size),
            nn.LogSoftmax(dim=1) # softmax is used because there are several output classes
)
    # forward pass from input to output
def forward(self, x):
x = x.view(-1, self.input_size)
return self.network(x)
# convolutional neural network
class CNN(nn.Module):
def __init__(self, input_size, n_feature, output_size):
super(CNN, self).__init__()
self.n_feature = n_feature
self.conv1 = nn.Conv2d(in_channels=1, out_channels=n_feature, kernel_size=5)
self.conv2 = nn.Conv2d(n_feature, n_feature, kernel_size=5)
self.fc1 = nn.Linear(n_feature*4*4, 50)
self.fc2 = nn.Linear(50, 10)
def forward(self, x, verbose=False):
x = self.conv1(x)
x = F.relu(x)
x = F.max_pool2d(x, kernel_size=2)
x = self.conv2(x)
x = F.relu(x)
x = F.max_pool2d(x, kernel_size=2)
x = x.view(-1, self.n_feature*4*4)
x = self.fc1(x)
x = F.relu(x)
x = self.fc2(x)
        x = F.log_softmax(x, dim=1) # softmax is used because there are several output classes
return x
# + [markdown] id="aW2g9eEjTzLr"
# 4. Run on a GPU: device string
#
# Switching between CPU and GPU in PyTorch is controlled via a device string, which will seamlessly determine whether a GPU is available, falling back to the CPU if not:
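# A minimal illustration of the pattern (example names only; `torch` is already imported above):
# +
example_device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
example_tensor = torch.randn(2, 3).to(example_device)  # tensors and models are moved with .to(device)
print(example_tensor.device)
# -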
# + id="dqWphFcNTzWt"
accuracy_list = []
def train(epoch, model, perm=torch.arange(0, 784).long()):
model.train()
for batch_idx, (data, target) in enumerate(train_loader):
# send to device
data, target = data.to(device), target.to(device)
# permute pixels
data = data.view(-1, 28*28)
data = data[:, perm]
data = data.view(-1, 1, 28, 28)
optimizer.zero_grad()
output = model(data)
loss = F.nll_loss(output, target)
loss.backward()
optimizer.step()
if batch_idx % 100 == 0:
print('Train Epoch: {} [{}/{} ({:.0f}%)]\tLoss: {:.6f}'.format(
epoch, batch_idx * len(data), len(train_loader.dataset),
100. * batch_idx / len(train_loader), loss.item()))
def test(model, perm=torch.arange(0, 784).long()):
model.eval()
test_loss = 0
correct = 0
for data, target in test_loader:
# send to device
data, target = data.to(device), target.to(device)
# permute pixels
data = data.view(-1, 28*28)
data = data[:, perm]
data = data.view(-1, 1, 28, 28)
output = model(data)
test_loss += F.nll_loss(output, target, reduction='sum').item() # sum up batch loss
pred = output.data.max(1, keepdim=True)[1] # get the index of the max log-probability
correct += pred.eq(target.data.view_as(pred)).cpu().sum().item()
test_loss /= len(test_loader.dataset)
accuracy = 100. * correct / len(test_loader.dataset)
accuracy_list.append(accuracy)
print('\nTest set: Average loss: {:.4f}, Accuracy: {}/{} ({:.0f}%)\n'.format(
test_loss, correct, len(test_loader.dataset),
accuracy))
# + [markdown] id="4lfgQB_NUIff"
# 5. Train a small fully-connected network
# + colab={"base_uri": "https://localhost:8080/"} id="eIRaO7xAmz8T" outputId="ac26b176-1ebc-4954-f2e1-8d9633c5af97"
n_hidden = 8 # number of hidden units
model_fnn = FC2Layer(input_size, n_hidden, output_size)
model_fnn.to(device)
optimizer = optim.SGD(model_fnn.parameters(), lr=0.01, momentum=0.5)
print('Number of parameters: {}'.format(get_n_params(model_fnn)))
for epoch in range(0, 1):
train(epoch, model_fnn)
test(model_fnn)
# + [markdown] id="ZolZsyFrUTY1"
# 6. Train a ConvNet with the same number of parameters
# + id="TqnR2VW1UTjU" colab={"base_uri": "https://localhost:8080/"} outputId="4c3908db-ba17-451e-93c3-e66b6960eee0"
# Training settings
n_features = 6 # number of feature maps
model_cnn = CNN(input_size, n_features, output_size)
model_cnn.to(device)
optimizer = optim.SGD(model_cnn.parameters(), lr=0.01, momentum=0.5)
print('Number of parameters: {}'.format(get_n_params(model_cnn)))
for epoch in range(0, 1):
train(epoch, model_cnn)
test(model_cnn)
# + [markdown] id="A4iRmbCbs3d-"
# 7. Change the parameters of the model.
# + [markdown] id="ydiJ3jK8utSC"
# - On the fully connected model
# + id="ab-JtGl01oPl"
accuracy_list = []
def train1(epoch, model, optimizer, perm=torch.arange(0, 784).long()):
model.train()
for batch_idx, (data, target) in enumerate(train_loader):
# send to device
data, target = data.to(device), target.to(device)
# permute pixels
data = data.view(-1, 28*28)
data = data[:, perm]
data = data.view(-1, 1, 28, 28)
optimizer.zero_grad()
output = model(data)
loss = F.nll_loss(output, target)
loss.backward()
optimizer.step()
if batch_idx % 100 == 0:
print('Train Epoch: {} [{}/{} ({:.0f}%)]\tLoss: {:.6f}'.format(
epoch, batch_idx * len(data), len(train_loader.dataset),
100. * batch_idx / len(train_loader), loss.item()))
# + colab={"base_uri": "https://localhost:8080/"} id="I7wyGkDbw6Nq" outputId="ba17626c-1c03-46f3-a0ea-c9128ec6b509"
n_hidden = 20 # number of hidden units
model_fnn__ = FC2Layer(input_size, n_hidden, output_size)
model_fnn__.to(device)
optimizer__ = optim.SGD(model_fnn__.parameters(), lr=0.01, momentum=0.5)
print('Number of parameters: {}'.format(get_n_params(model_fnn__)))
for epoch in range(0, 1):
train1(epoch, model_fnn__, optimizer__)
test(model_fnn__)
# + [markdown] id="C-TyBrmH_4lS"
# - On the convolutional model
#
# We increase the number of feature maps
# + colab={"base_uri": "https://localhost:8080/"} id="IhbyameBAGK1" outputId="20ef0c98-92b8-45a8-ef66-1932976b1da4"
# Training settings
n_features = 20 # number of feature maps
model_cnn_ = CNN(input_size, n_features, output_size)
model_cnn_.to(device)
optimizer_ = optim.SGD(model_cnn_.parameters(), lr=0.01, momentum=0.5)
print('Number of parameters: {}'.format(get_n_params(model_cnn_)))
for epoch in range(0, 1):
train1(epoch, model_cnn_, optimizer_)
test(model_cnn_)
# + [markdown] id="ApJmDPjXUXmb"
# 8. Analyze the results and the impact of these parameters.
# + [markdown] id="IoF9WU3dtA9Q"
# Before changing the parameters, we observe:
# - On the fully connected model, an accuracy of 96% on the training data and 88% on the test data, with a loss of 0.472770 on the training data and 0.4140 on the test data. This indicates that the model fits the training data too closely and makes weaker predictions on the test data: it is overfitting.
#
# - On the convolutional model, we obtain an accuracy of 96% on the training data and 95% on the test data, which is good. Regarding the loss, we think we can do better by adding another convolution layer or by increasing the number of feature maps.
#
# After changing the parameters, we observe:
#
# - Increasing the number of hidden units in the fully connected network gives better results: with 20 hidden units, the test accuracy goes from 88% to 91%.
#
# - Increasing the number of feature maps to 20 in the convolutional network also improves the accuracy on the test data.
| 30,843 |
/Experiments/Mars3DOF/Baseline/recurrent_policy-60step-pot1-fullstate-gtvf.ipynb
|
ae026025b76b1a2125269441494f0ab1cac1ddeb
|
[
"MIT"
] |
permissive
|
Aerospace-AI/RL-Meta-Learning-ACTA
|
https://github.com/Aerospace-AI/RL-Meta-Learning-ACTA
| 3 | 1 | null | null | null | null |
Jupyter Notebook
| false | false |
.py
| 1,907,729 |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.15.2
# kernelspec:
# display_name: th_daily
# language: python
# name: th
# ---
# # Demystifying Regularization
# This note records some of the questions I ran into while learning about regularized models (mainly Ridge Regression), and my attempts to answer them. There are three main questions:
# 1. Why do regularized models require the features x to be centered or standardized? What do we gain, and how do the parameter estimates differ?
# 2. Why does y usually only need to be centered, not standardized?
# 3. Why is the bias term not penalized?
# Below I use Ridge Regression as an example to address these three questions. The setup is as follows: suppose $\mathbf{x}_i \in \mathbb{R}^p$, $y_i \in \mathbb{R}$, $i=1, \cdots, n$ is a set of training samples, and we want to find $w_0 \in \mathbb{R}$ and $\mathbf{w} \in \mathbb{R}^p$ that minimize the ridge regression objective
# \begin{eqnarray}
# f(w_0, \mathbf{w}) &=& \text{MSE}(w_0, \mathbf{w}) + \lambda \left\lVert \mathbf{w}\right\rVert^2 \\
# &=& \frac{1}{n} \sum \limits_{i=1}^{n} (y_i - w_0 - \mathbf{w}^T \mathbf{x}_i) ^ 2 + \lambda \left\lVert \mathbf{w}\right\rVert^2
# \end{eqnarray}
# where $\lambda$ is a regularization coefficient fixed in advance.
# ## Question 1: Why center or standardize the features x?
# Centering a variable $x$ means removing its mean; standardizing additionally divides by its standard deviation.
# #### Centering
# Why center? Before reading the Ridge Regression section of *The Elements of Statistical Learning* (p. 63) I did not know much about centering either -- standardization is what one hears about and uses most often. After working through the book and the corresponding exercise (Exercise 3.5), however, I found that centering is not only useful for the **theoretical derivation**, it also provides a **general recipe for estimating the parameters of a regularized model**. Usually, to write the function $f$ above in matrix form, we append a column $(1, 1, \cdots, 1)^T$ for the bias term, giving an $n \times (p+1)$ design matrix $X$, and add $w_0$ to $\mathbf{w}$ to obtain $\mathbf{w}' \in \mathbb{R}^{p+1}$. Then $f$ can be written in matrix form:
# \begin{eqnarray}
# f(w_0, \mathbf{w}) = f^{*}(\mathbf{w'}, \mathbf{w}) = \frac{1}{n} \left\lVert \mathbf{y} - X\mathbf{w}' \right\rVert^2 + \lambda \left\lVert \mathbf{w}\right\rVert^2.
# \end{eqnarray}
# Note that because the bias term is not included in the penalty (the reason is explained under Question 3), the matrix form of the objective is somewhat awkward (it involves both $\mathbf{w}$ and $\mathbf{w}'$), so we cannot find the optimum simply by differentiating with respect to $\mathbf{w}'$. After centering the features $x$, however, we will see that the optimal $w_0$ and $\mathbf{w}$ both have an **explicit closed-form expression**.
# Moreover, if we take the partial derivative of $f(w_0, \mathbf{w}) = \displaystyle{\frac{1}{n} \sum \limits_{i=1}^{n} (y_i - w_0 - \mathbf{w}^T \mathbf{x}_i) ^ 2 + \lambda \left\lVert \mathbf{w}\right\rVert^2}$ with respect to $w_0$ and set it to zero, we get:
# \begin{eqnarray}
# \frac{-2}{n} \sum \limits_{i=1}^{n} (&y_i& - w_0 - \mathbf{w}^T \mathbf{x}_i) = 0 \\
# \implies w_0 &=& \frac{1}{n} \sum \limits_{i=1}^{n} (y_i - \mathbf{w}^T \mathbf{x}_i) \\
# \implies w_0 &=& \overline{y} - \mathbf{w}^T \overline{\mathbf{x}}
# \end{eqnarray}
# where $\overline{y} \in \mathbb{R}$ is the mean of $y$ and $\overline{\mathbf{x}} \in \mathbb{R}^p$ is the mean vector of the features. If $\overline{\mathbf{x}}$ is 0, then $w_0$ is simply estimated by the mean $\overline{y}$ of $y$ -- **and after centering $x$, its mean vector $\overline{\mathbf{x}}$ is exactly 0!** This is one reason to center the features $x$: the bias term $w_0$ can then be estimated by the mean $\overline{y}$ of $y$.
# We now derive the optimal $w_0$ and $\mathbf{w}$ after centering the features $x$.
# First, rewrite the objective $f(w_0, \mathbf{w})$ identically as a new function $g(\overset{\sim}{w_0}, \mathbf{w})$:
# \begin{eqnarray}
# f(w_0, \mathbf{w}) &=& \frac{1}{n} \sum \limits_{i=1}^{n} \left(y_i - w_0 - \mathbf{w}^T \overline{\mathbf{x}} - \mathbf{w}^T(\mathbf{x}_i - \overline{\mathbf{x}})\right)^2 + \lambda \left\lVert \mathbf{w}\right\rVert^2 \\
# &=& \frac{1}{n} \sum \limits_{i=1}^{n} (y_i - \overset{\sim}{w_0} - \mathbf{w}^T \overset{\sim}{\mathbf{x}_i})^2 + \lambda \left\lVert \mathbf{w}\right\rVert^2 \\
# &\triangleq& g(\overset{\sim}{w_0}, \mathbf{w})
# \end{eqnarray}
# where $\overset{\sim}{w_0} = w_0 + \mathbf{w}^T \overline{\mathbf{x}}$ and $\overset{\sim}{\mathbf{x}_i} = \mathbf{x}_i - \overline{\mathbf{x}}$. Minimizing $f$ is therefore equivalent to minimizing $g$.
# Next, because the mean vector after centering is $\displaystyle{\overline{\overset{\sim}{\mathbf{x}}}} = 0$, by the earlier argument differentiating $g$ with respect to $\overset{\sim}{w_0}$ gives the estimate $\overline{y}$ for $\overset{\sim}{w_0}$; substituting it back into $g$ gives:
# $$
# g(\mathbf{w}) = \frac{1}{n} \sum \limits_{i=1}^{n} \left(\overset{\sim}{y_i} - \mathbf{w}^T \overset{\sim}{\mathbf{x}_i} \right)^2 + \lambda \left\lVert \mathbf{w}\right\rVert^2
# $$
# where $\overset{\sim}{y_i} = y_i - \overline{y}$.
# If we let
# $$
# \overset{\sim}{X} =
# \begin{bmatrix}
# \overset{\sim}{\mathbf{x}_1}^T \\
# \vdots \\
# \overset{\sim}{\mathbf{x}_n}^T
# \end{bmatrix},
# \quad
# \overset{\sim}{\mathbf{y}} =
# \begin{bmatrix}
# \overset{\sim}{y_1} \\
# \vdots \\
# \overset{\sim}{y_n}
# \end{bmatrix}
# $$
# then $g$ can be written in matrix form:
# $$
# g(\mathbf{w})= \frac{1}{n} \left\lVert \overset{\sim}{\mathbf{y}} - \overset{\sim}{X}\mathbf{w} \right\rVert^2 + \lambda \left\lVert \mathbf{w}\right\rVert^2.
# $$
# This is essentially the textbook form of the Ridge Regression objective. Taking the partial derivative with respect to $\mathbf{w}$ and setting it to zero gives:
# $$
# \mathbf{w} = \left(\overset{\sim}{X}^T \overset{\sim}{X} + n \lambda I \right)^{-1} \overset{\sim}{X}^T \overset{\sim}{\mathbf{y}}.
# $$
# Meanwhile, the original bias term is $w_0 = \overset{\sim}{w_0} - \mathbf{w}^T \overline{\mathbf{x}} = \overline{y} - \mathbf{w}^T \overline{\mathbf{x}}$.
# Therefore, when training a regularized model we can first center the features $x$ and the labels $y$ (equivalent to moving the origin to the mean point, so no intercept is needed when fitting), then ignore the bias term and solve an optimization problem for the weights $\mathbf{w}$, and finally compute the bias term $w_0$ from the weights $\mathbf{w}$, the label mean $\overline{y}$ and the feature mean $\overline{\mathbf{x}}$.
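# A minimal NumPy sketch of this two-step recipe on synthetic data (the data, the value of $\lambda$ and all variable names below are just for illustration, not part of the original derivation):
# +
import numpy as np

rng = np.random.default_rng(0)
n, p, lam = 200, 5, 0.1                                    # assumed sizes and regularization strength
X = rng.normal(size=(n, p)) * np.array([1, 2, 5, 10, 20]) + 3.0
y = X @ np.array([1.0, -0.5, 0.2, 0.0, 0.3]) + 4.0 + rng.normal(scale=0.1, size=n)

x_bar, y_bar = X.mean(axis=0), y.mean()
Xc, yc = X - x_bar, y - y_bar                              # center features and labels

# step 1: weights from the centered problem, w = (Xc^T Xc + n*lambda*I)^{-1} Xc^T yc
w = np.linalg.solve(Xc.T @ Xc + n * lam * np.eye(p), Xc.T @ yc)
# step 2: bias from the means, w0 = y_bar - w^T x_bar
w0 = y_bar - w @ x_bar
print(w0, w)
# -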
# #### Standardization
# Why standardize? This goes back to the purpose of regularization. From a statistical point of view, regularization performs **variable selection**: with the penalty added, variables $x$ that have little influence on the response $y$ (small coefficients) are dropped and only variables with large coefficients are kept. For this **comparison between variables to be fair**, all variables must be on the same scale. If one variable is measured in tens of thousands while another is only a few tenths, the second one will almost never be selected by the regularized model (to influence $y$ it would need a very large coefficient, which in turn blows up the penalty). Hence all variables should be standardized. Next we ask how the parameter estimates change after standardizing $x$.
# With the variables $x$ standardized, we minimize the objective $h(w_0^{*}, \mathbf{w}^*)$:
# \begin{eqnarray}
# h(w_0^{*}, \mathbf{w}^*)
# &=& \frac{1}{n} \sum \limits_{i=1}^{n} \left(y_i - w_0^* - \sum \limits_{j=1}^{p} \frac{x_{ij} - \overline{x}_j}{\sigma_j} w_j^* \right)^2 + \lambda \lVert \mathbf{w}^* \rVert^2 \\
# &=& \frac{1}{n} \sum \limits_{i=1}^{n} \left(y_i - w_0^* - \mathbf{w}^{*T}\Sigma^{-1}(\mathbf{x}_i - \overline{\mathbf{x}})\right)^2 + \lambda \lVert \mathbf{w}^* \rVert^2
# \end{eqnarray}
# where $\Sigma$ is the $p \times p$ diagonal matrix $\text{Diag}(\sigma_1, \cdots, \sigma_p)$ and $\sigma_i$ is the standard deviation of the $i$-th variable. Letting $\mathbf{x}_i^* = \Sigma^{-1} (\mathbf{x}_i - \overline{\mathbf{x}})$, the expression simplifies further to:
# $$
# h(w_0^{*}, \mathbf{w}^*) = \frac{1}{n} \sum \limits_{i=1}^{n} \left(y_i - w_0^* - \mathbf{w}^{*T} \mathbf{x}_i^* \right)^2 + \lambda \lVert \mathbf{w}^* \rVert^2.
# $$
# Comparing with the expression for $g(\overset{\sim}{w_0}, \mathbf{w})$, the structure is the same, so the estimate of $w_0^*$ is again $\overline{y}$. If we further define the $n \times p$ matrix $X^*$ as
# $$
# X^* =
# \begin{bmatrix}
# {\mathbf{x}_1^{*T}} \\
# \vdots \\
# {\mathbf{x}_n^{*T}}
# \end{bmatrix},
# $$
# then, mirroring the derivation for $\mathbf{w}$, the estimate of $\mathbf{w}^*$ is $\left(X^{*T} X^* + n \lambda I \right)^{-1} X^{*T} \overset{\sim}{\mathbf{y}}$, i.e.
# \begin{eqnarray}
# w_0^* &=& \overline{y} \\
# \mathbf{w}^* &=& \left(X^{*T} X^* + n \lambda I \right)^{-1} X^{*T} \overset{\sim}{\mathbf{y}}.
# \end{eqnarray}
# Since $X^* = \overset{\sim}{X} \Sigma^{-1}$, we have
# \begin{eqnarray}
# \mathbf{w}^*
# &=& \left(X^{*T} X^* + n \lambda I \right)^{-1} X^{*T} \overset{\sim}{\mathbf{y}} \\
# &=& (\Sigma^{-1} \overset{\sim}{X}^T \overset{\sim}{X} \Sigma^{-1} + n \lambda I)^{-1} \Sigma^{-1} \overset{\sim}{X}^T \overset{\sim}{\mathbf{y}}.
# \end{eqnarray}
# Comparing with the expression for $\mathbf{w}$,
# $$
# \mathbf{w} = \left(\overset{\sim}{X}^T \overset{\sim}{X} + n \lambda I \right)^{-1} \overset{\sim}{X}^T \overset{\sim}{\mathbf{y}},
# $$
# we see that although the two expressions differ, $\mathbf{w}^*$ only adds the extra step of dividing the centered $x$ by its standard deviation.
# In practice, if the features are on similar scales, it is usually enough to center (both features and labels) and obtain the weights $\mathbf{w}$ and the bias term $w_0$ in two steps; if the feature scales differ a lot, standardize the features, center the labels, and again obtain the weights $\mathbf{w}^*$ and the bias term $w_0^*$ in two steps. Note that these give two different estimators of $y$, i.e. two different models!
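# As a quick practical check (again my own sketch, not part of the original text), the standardized variant can be reproduced with scikit-learn, reusing the synthetic `X`, `y`, `n` and `lam` from the snippet above; sklearn's Ridge minimizes $\lVert y - Xw \rVert^2 + \alpha \lVert w \rVert^2$, so $\alpha = n\lambda$ matches the $\text{MSE} + \lambda \lVert w \rVert^2$ objective used here:
# +
from sklearn.linear_model import Ridge
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

pipe = make_pipeline(StandardScaler(), Ridge(alpha=n * lam))  # standardize x; Ridge centers y internally
pipe.fit(X, y)
print(pipe.named_steps["ridge"].intercept_, pipe.named_steps["ridge"].coef_)
# -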
# ## Question 2: Why does y usually only need centering, not standardization?
# We have already seen that centering the labels $y$, together with centering the features, lets us estimate all the parameters in two steps. As for why $y$ need not be standardized, my current view is: standardization is about making a **fair comparison** between variables, and since $y$ is not being compared against the other variables, there is no need to standardize it.
# ## Question 3: Why is the bias term not penalized?
# Because the purpose of regularization is to reduce the effective number of parameters, i.e. to make the fitted function smoother (a straight line is smoother than a curve), and the bias term has no effect on smoothness -- it only shifts the function up or down! So there is no need to penalize it.
| 7,543 |
/03Employee_Retention/Employee_retention.ipynb
|
9f7aa347046528f80c1d72c7002cfc0ac03af868
|
[] |
no_license
|
jh3898/data-science-take-home-challenge
|
https://github.com/jh3898/data-science-take-home-challenge
| 1 | 2 | null | null | null | null |
Jupyter Notebook
| false | false |
.py
| 146,339 |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.15.2
# kernelspec:
# display_name: hera
# language: python
# name: hera
# ---
# +
import os
import sys
from copy import deepcopy
import numpy as np
from matplotlib import pyplot as plt
from hera_cal.io import HERACal, HERAData
from hera_cal.noise import interleaved_noise_variance_estimate
from hera_cal.utils import join_bl, split_bl
from simpleredcal.red_utils import find_zen_file
# +
import matplotlib as mpl
plot_figs = False
if plot_figs:
mpl.rcParams['figure.dpi'] = 300
mpl.rc('font',**{'family':'serif','serif':['cm']})
mpl.rc('text', usetex=True)
mpl.rc('text.latex', preamble=r'\usepackage{amssymb} \usepackage{amsmath}')
# -
# # Noise estimates from the hera_cal pipeline
# Using calibrated autocorrelations, we can predict the noise variance on visibilities, $\sigma_{ij}^2$. Namely,
#
# $\sigma_{ij}^2 = V_{ii} V_{jj}$ $ / $ $B t$
#
# where $B$ is the bandwidth of a channel and $t$ is the integration time. Instead of computing this quantity for all baselines, we instead compute and save $\sigma_{ii}$ where
#
# $\sigma_{ij} \equiv \sqrt{\sigma_{ii} \sigma_{jj}} = \left(V_{ii} / \sqrt{Bt}\right) \left( V_{jj} / \sqrt{Bt} \right)$.
#
# These quantities, $\sigma_{ii}$, are stored in `.noise_std.uvh5` files. Though they are technically per-antenna, the collaboration felt it more sensible to store them as visibility data files (since the units are Jy) with autocorrelation keys instead of storing them in `.calfits` files.
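# As a toy illustration of the formula above (made-up numbers, not HERA data):
# +
V_ii, V_jj = 250.0, 240.0   # calibrated autocorrelation amplitudes [Jy] (assumed values)
B = 97656.25                # channel bandwidth [Hz] (assumed value)
t = 10.7                    # integration time [s] (assumed value)
sigma_ij = np.sqrt(V_ii * V_jj / (B * t))
print('predicted noise std: {:.3f} Jy'.format(sigma_ij))
# -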
# +
JD_time = 2458098.43869
bl = (25, 51, 'ee')
data_dir = '/Users/matyasmolnar/HERA_Data/sample_data'
if not os.path.exists(data_dir):
data_dir = os.path.join('/lustre/aoc/projects/hera/H1C_IDR2/IDR2_2', str(int(JD_time)))
# -
# ## Noise from raw autocorrelations
# +
ant1, ant2 = split_bl(bl)
auto_bl1 = join_bl(ant1, ant1)
auto_bl2 = join_bl(ant2, ant2)
# Load autocorrelation
autos_file = os.path.join(data_dir, 'zen.{}.HH.autos.uvh5'.format(JD_time))
hd_autos = HERAData(autos_file)
autos, auto_flags, _ = hd_autos.read(bls=[auto_bl1, auto_bl2])
# Load inferred noise on data
noise_file = os.path.join(data_dir, 'zen.{}.HH.noise_std.uvh5'.format(JD_time))
hd_noise = HERAData(noise_file)
noise, noise_flags, _ = hd_noise.read(bls=[auto_bl1, auto_bl2])
bl_noise = np.sqrt(noise[auto_bl1] * noise[auto_bl2])
bl_noise_flags = noise_flags[auto_bl1] | noise_flags[auto_bl2]
# -
# ## Noise from interleaved frequencies
# We can also check our inferred values for the noise on visibilities against a sequential difference of the data. In this case, we use hera_cal.noise.interleaved_noise_variance_estimate() to estimate the noise on the data by subtracting 0.5 times the next and previous channels from the data. Averaging in time over the file, we see that these two estimates of the noise agree.
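# The idea behind the differencing can be checked on synthetic white noise (my own toy example, not the hera_cal implementation): for independent noise, $d_i - 0.5 d_{i-1} - 0.5 d_{i+1}$ has variance $(1 + 0.25 + 0.25)\,\sigma^2 = 1.5\,\sigma^2$, so dividing the differenced variance by 1.5 recovers the per-channel noise variance.
# +
rng_demo = np.random.default_rng(1)
sigma_true = 2.0
d = rng_demo.normal(scale=sigma_true, size=100_000)
diff = d[1:-1] - 0.5 * d[:-2] - 0.5 * d[2:]
sigma_est = np.sqrt(diff.var() / 1.5)
print(sigma_true, round(sigma_est, 3))
# -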
hd = HERAData(find_zen_file(JD_time))
data, flags, nsamples = hd.read(bls=[bl])
# +
# Estimate noise from visibility data using interleaved frequencies
data_with_nans = deepcopy(data[bl])
data_with_nans[flags[bl]] = np.nan
noise_var_est = interleaved_noise_variance_estimate(data_with_nans, kernel=[[-.5, 1, -.5]])
with np.errstate(divide='ignore', invalid='ignore'):
interleaved_noise = np.sqrt(np.nanmean(noise_var_est, axis=0))
# Estimate noise on baseline using autocorrelations
var_with_nans = noise[auto_bl1] * noise[auto_bl2]
var_with_nans[flags[bl]] = np.nan
autocorrelation_noise = np.sqrt(np.abs(np.nanmean(var_with_nans, axis=0)))
noise_amp = np.nanmean((autocorrelation_noise / interleaved_noise)[420:900]) # good freq range
# Plot Results
fig, ax1 = plt.subplots(figsize=(11, 7))
ax1.plot(hd.freqs / 1e6, interleaved_noise * noise_amp, label='Interleaved Noise Estimate', lw=2)
ax2 = ax1.twinx()
ax2.plot(hd.freqs / 1e6, autocorrelation_noise, label='Noise Inferred from '\
'Autocorrelations', color='orange', lw=2)
plt.xlim(100, 200)
plt.xlabel('Frequency [MHz]')
ax2.set_ylabel('Amplitude [Jy]')
ax1.set_ylim(bottom=0, top=50)
ax2.set_ylim(bottom=0, top=20)
lines_1, labels_1 = ax1.get_legend_handles_labels()
lines_2, labels_2 = ax2.get_legend_handles_labels()
lines = lines_1 + lines_2
labels = labels_1 + labels_2
ax1.legend(lines, labels, loc=0)
fig.tight_layout()
plt.show()
_time'].isnull(),'salary'],kde_kws=kde_kws2,hist_kws = hist_kws)
ax.set_ylabel('pdf')
# 1. Employees usually quit at work anniversaries to get their bonus before they leave.
# 2. Employees with very high or very low salary have lower chances of quitting.
# 3. It is possible that for lower salary employees, it's hard to find other jobs if they quit.
# 4. For high salary employees, they are satisfied with their compensation, so they are less inclined to quit.
# 5. If the employee is satu
| 4,877 |
/jupyter/totalDGEByTissue.ipynb
|
ecc15c40ae6ba4d9eb6be12c60816c15781eeddb
|
[] |
no_license
|
TheJacksonLaboratory/sexBiasedAlternativeSplicing
|
https://github.com/TheJacksonLaboratory/sexBiasedAlternativeSplicing
| 0 | 0 | null | null | null | null |
Jupyter Notebook
| false | false |
.r
| 114,834 |
# -*- coding: utf-8 -*-
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .r
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.15.2
# kernelspec:
# display_name: R
# language: R
# name: ir
# ---
# # **Total significant gene expression events by tissue (dot plot)**
#
# This notebook generates a single figure: a reverse-log dot plot showing the number of significant gene expression events by tissue.
#
# Only one input file required
# **"../data/Total_DGE_by_tissue.tsv"**
#
# Produces one figure
# **"../pdf/total_dge_by_tissue.pdf"**
# ## **Running this notebook**:
#
# See the README for setting up prerequisites for the notebook.
# ## 1. Setup
#
# Assumes the `countGenesAndEvents.ipynb` notebook was run -- unpacking the results from the differential Gene Expression Analysis as run in the `differentialGeneExpressionAnalysis.ipynb` notebook.
# +
defaultW <- getOption("warn") # suppress warnings for this cell
options(warn = -1)
library(stringr)
library(magrittr)
library(dplyr)
library(ggplot2)
library(scales)
library(viridis)
Sys.setenv(TAR = "/bin/tar") # for gzfile
options(warn = defaultW)
# -
# ## 2. Read in the total DGE by tissue file
# +
totals <- read.table("../data/Total_DGE_by_tissue.tsv", sep = "\t", header = T)
colnames(totals) <- c("Tissue", "Total")
totals_s <- totals %>% arrange(Total)
totals_s$Tissue <- factor(totals_s$Tissue, levels = totals_s$Tissue)
levels(totals_s$Tissue)
# -
# ## 3. Create a function used by ggplot to draw a reverse log10 scale on the x-axis
reverselog_trans <- function(base = exp(1)) {
trans <- function(x) -log(x, base)
inv <- function(x) base^(-x)
trans_new(paste0("reverselog-", format(base)), trans, inv,
log_breaks(base = base),
domain = c(1e-100, Inf))
}
# +
g<-ggplot(totals_s, aes(y = Tissue, x = Total, size = Total)) +
geom_point(color = "red") +
theme_bw() +
scale_x_continuous(trans=reverselog_trans(), breaks=c(1,10,100,1000,5000,10000)) +#breaks=c(10000, 5000,1000,100,10,1)) +
scale_y_discrete(position = "right") +
theme(axis.text.x = element_text(size=8, angle = 0, hjust = 0.0, vjust = 0.5),
axis.text.y = element_text(size=8),
axis.title.x = element_text(face="plain", colour="black",
size=8),
axis.title.y = element_blank(),
legend.title=element_blank(),
legend.text = element_text(face="plain", colour="black",
size=8)) +
xlab(paste("Number of sex-biased gene expression events")) +
ylab("Tissue") +
guides(size=FALSE)
g
ggsave("../pdf/total_dge_by_tissue.pdf",g, height = 4.5, width = 4)
# -
# ## Appendix - Metadata
#
# For replicability and reproducibility purposes, we also print the following metadata:
#
# ### Appendix.1. Checksums with the sha256 algorithm
# 1. Checksums of **'artefacts'**, files generated during the analysis and stored in the **`data`** directory
# 2. List of environment metadata, dependencies, versions of libraries using `utils::sessionInfo()` and [`devtools::session_info()`](https://devtools.r-lib.org/reference/session_info.html)
figure_id = "totalDGEByTissue"
# ### Appendix.2. Library metadata
# +
dev_session_info <- devtools::session_info()
utils_session_info <- utils::sessionInfo()
message("Saving `devtools::session_info()` objects in ../metadata/devtools_session_info.rds ..")
saveRDS(dev_session_info, file = paste0("../metadata/", figure_id, "_devtools_session_info.rds"))
message("Done!\n")
message("Saving `utils::sessionInfo()` objects in ../metadata/utils_session_info.rds ..")
saveRDS(utils_session_info, file = paste0("../metadata/", figure_id ,"_utils_info.rds"))
message("Done!\n")
dev_session_info$platform
dev_session_info$packages[dev_session_info$packages$attached==TRUE, ]
# -
| 3,892 |
/FlightPrice.ipynb
|
27b795d15d1e4ec8e4522ce3f70167527eefb6c4
|
[] |
no_license
|
codedr0id/Flight-Ticket-Price-Prediction
|
https://github.com/codedr0id/Flight-Ticket-Price-Prediction
| 2 | 0 | null | null | null | null |
Jupyter Notebook
| false | false |
.py
| 790,024 |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.15.2
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
person = {
'first_name': 'VISHAL',
'last_name': 'KUMAR',
'age': 27,
'city': 'KARACHI',
}
print(person['first_name'])
print(person['last_name'])
print(person['age'])
print(person['city'])
# +
person = {
'first_name': 'VISHAL',
'last_name': 'KUMAR',
'age': 27,
'city': 'KARACHI',
'Qualification': 'Bachelors',
}
print(person['first_name'])
print(person['last_name'])
print(person['age'])
print(person['city'])
print(person['Qualification'])
# +
cities = {
'Karachi': {
'country': 'Pakistan',
'population': 500000,
'nearby mountains': 'Kheerthar',
},
'Khatmandu': {
'country': 'Nepal',
'population': 203000,
'nearby mountains': 'Himalya range',
},
'Athens': {
'country': 'Greece',
'population': 1000022,
'nearby mountains': 'K2 Range',
}
}
for city, city_info in cities.items():
country = city_info['country'].title()
population = city_info['population']
mountains = city_info['nearby mountains'].title()
print("\n" + city.title() + " is in " + country + ".")
print(" It has a population of about " + str(population) + ".")
print(" The " + mountains + " mountains are nearby.")
# +
prompt = "How old are you?"
prompt += "\nEnter 'quit' when you are finished. "
while True:
age = input(prompt)
if age == 'quit':
break
age = int(age)
if age < 3:
print(" You get in free!")
elif age < 13:
print(" Your ticket is $10.")
else:
print(" Your ticket is $15.")
# +
def favorite_book(title):
print(title + " is one of my favorite books.")
favorite_book('The Girl in the River')
# +
import random
hidden = random.randrange(1, 30)
print(hidden)
guess = int(input("Please enter your guess: "))
if guess == hidden:
print("Hit!")
elif guess < hidden:
print("Your guess is Smaller")
else:
print("Your guess is Larger")
# -
()
# + colab={"base_uri": "https://localhost:8080/", "height": 206} id="uYvhlUSP7Nzf" outputId="61287f8b-3bd3-45a5-f155-5d57c0c27b96"
# Adding Day, Month and Year columns
df['year'] = pd.DatetimeIndex(df['Date_of_Journey']).year
df['month'] = pd.DatetimeIndex(df['Date_of_Journey']).month
df['Day'] = pd.DatetimeIndex(df['Date_of_Journey']).day
df.head()
# + colab={"base_uri": "https://localhost:8080/", "height": 297} id="z5SOJi1BRLR-" outputId="86a643cd-ec7a-4b19-d61d-26f72b1c2b06"
df['Date_of_Journey']=pd.to_datetime(df['Date_of_Journey'])
df['weekday']=df[['Date_of_Journey']].apply(lambda x:x.dt.day_name())
sns.barplot('weekday','Price',data=df)
# + colab={"base_uri": "https://localhost:8080/", "height": 297} id="JUMaohy17vrN" outputId="79c7cc8f-875e-4ff5-d21a-47bd0ec3e50f"
sns.countplot(df["month"])
# + colab={"base_uri": "https://localhost:8080/", "height": 81} id="lPG1YCNd_k_G" outputId="5731fb3a-ba76-4b03-f7b4-f1b91faeca16"
df[df['Total_Stops'].isnull()]
# + id="DX0lKZ-tIVMv"
# Dropping the Null valued row
df.dropna(axis=0, inplace=True)
# + id="QxPvINvlKE9V"
# + id="lwX6ZMLfKGSf"
def stops_into_number(x):
if x=='1 stop':
return 1
elif x=='2 stops':
return 2
elif x=='3 stops':
return 3
elif x=='4 stops':
return 4
else:
return 0
# + id="fsRvPDKma3XC"
df['Total_Stops']=df['Total_Stops'].apply(lambda x: stops_into_number(x))
# + colab={"base_uri": "https://localhost:8080/"} id="uog9jM2TbEkV" outputId="1bf802dc-01ab-4a08-aeca-e3a7fba27226"
df['Total_Stops']
# + colab={"base_uri": "https://localhost:8080/"} id="KA1A4CaSbI_G" outputId="98a79d0c-992f-4e63-cbef-ef2ce2997dc1"
df.fillna(0, inplace = True)
df['Total_Stops'] = df['Total_Stops'].apply(lambda x : int(x))
df['Total_Stops']
# + colab={"base_uri": "https://localhost:8080/", "height": 206} id="kWfMCTu9hRFT" outputId="49dd9c46-1566-49eb-b654-fea861e953be"
df.head()
# + id="0fQ-to6y1rGr"
# Classification of time in morning, evening, afternoon, mid-night
def conversion(x):
if int(x[:2]) >= 0 and int(x[:2]) < 6:
return 'mid_night'
elif int(x[:2]) >= 6 and int(x[:2]) < 12:
return 'morning'
elif int(x[:2]) >= 12 and int(x[:2]) < 18:
return 'afternoon'
elif int(x[:2]) >= 18 and int(x[:2]) < 24:
return 'evening'
# + id="z5cmiwTfDYWV"
df['Flight_Time']=df['Dep_Time'].apply(lambda x: conversion(x))
# + colab={"base_uri": "https://localhost:8080/", "height": 299} id="S3fODBOWFFGd" outputId="726d2b23-54a9-4e5c-c3bc-5dc00f0540f7"
sns.countplot(df['Flight_Time'])
# + colab={"base_uri": "https://localhost:8080/"} id="w3Axq6o6FdH4" outputId="e0e380fc-bc07-49b0-c283-09d0ec09f7b7"
df['Airline'].unique()
# + colab={"base_uri": "https://localhost:8080/", "height": 810} id="3tkGeWmPM2Vm" outputId="5b52e596-a526-4575-8d36-4825d5145004"
plt.figure(figsize = (10, 10))
sns.countplot(df['Airline'])
plt.xticks(rotation = 90)
# + id="GEPgbBpzM6a0"
df['Airline'].replace(['Trujet', 'Vistara Premium economy'], 'Another', inplace = True)
# + colab={"base_uri": "https://localhost:8080/", "height": 810} id="2wE1ahycNZZF" outputId="8d3d50f0-1803-4006-fdd4-f982b7753027"
plt.figure(figsize = (10, 10))
sns.countplot(df['Airline'])
plt.xticks(rotation = 90)
# + colab={"base_uri": "https://localhost:8080/"} id="CO4GjGIwp8k1" outputId="0e4b5aaa-df45-4755-803b-0577081b33ad"
df['Duration']
# + id="TdPb8brDNcv4"
# Convert duration strings like "2h 50m" to total minutes
def duration_to_minutes(x):
a=x.split(' ')
count=0
for i in range(len(a)):
tp=''
for j in a[i]:
if j.isnumeric():
tp+=j
elif j=='h':
k=0
break
elif j=='m':
k=1
break
if k==0:
count+=int(tp)*60
elif k==1:
count+=int(tp)
return count
# + colab={"base_uri": "https://localhost:8080/"} id="jESm3E-trHwR" outputId="231227c5-47b0-4f88-cef2-eef6d5778f7f"
df['Duration']=df['Duration'].apply(lambda x: duration_to_minutes(x))
df['Duration']
# + id="_-vtREIdrVGh" colab={"base_uri": "https://localhost:8080/", "height": 258} outputId="86b4ad77-e6d6-409e-8a91-4ac5b5477620"
df.head()
# + id="zmmjf6IdrXP0" colab={"base_uri": "https://localhost:8080/", "height": 766} outputId="768c9d88-e5d7-48cc-9f4f-3dc36c84fc4e"
plt.figure(figsize = (10, 10))
sns.scatterplot(df['Additional_Info'],df['Price'])
plt.xticks(rotation = 90)
# + colab={"base_uri": "https://localhost:8080/"} id="pMYcheM45AyH" outputId="6cdfa97c-683f-4a52-9742-056bebe9ff7d"
df['Additional_Info'].unique()
# + colab={"base_uri": "https://localhost:8080/"} id="sHLImPJS7KWw" outputId="74ab2f65-16bd-42f5-cba7-13068eb964ec"
df['Additional_Info']=df['Additional_Info'].str.replace('No info','No Info')
df['Additional_Info'].unique()
# + id="J9vfEvNwK2s5" colab={"base_uri": "https://localhost:8080/", "height": 258} outputId="af11820f-96ef-4430-e2a9-2e0b31e620fc"
df.head()
# + [markdown] id="ioKx17TOFxWC"
# # Dropping Some Columns
# + colab={"base_uri": "https://localhost:8080/", "height": 206} id="mUATc0G4GT38" outputId="604e8792-4a05-4ae9-dde9-7306d01bcd30"
df.drop(['Route','Arrival_Time','Date_of_Journey','Dep_Time'],axis=1,inplace=True)
df.head()
# + [markdown] id="MCCOzP6-wYs4"
# ### ***Correlation Matrix***
# + colab={"base_uri": "https://localhost:8080/", "height": 333} id="RfyPMYOXwP6q" outputId="6006cc41-6ad9-4f9f-9a0a-f954b4c10b46"
corr=df.corr()
sns.heatmap(corr,annot=True,cmap='coolwarm')
# + [markdown] id="75G-VUkTHcjG"
# ### **Categorical Encoding**
# + colab={"base_uri": "https://localhost:8080/", "height": 290} id="xON9g9BcHVtc" outputId="91076bd0-bfca-4f50-86c7-8594ab897ebc"
# Label-encoding code kept below for reference; we use one-hot encoding (get_dummies) instead
# from sklearn.preprocessing import LabelEncoder
# cat_col = ['Airline','Source','Destination','Duration','Additional_Info','Total_Stops','Day','month','year','Flight_Time']
# le = LabelEncoder()
# for i in cat_col:
# df[i] = le.fit_transform(df[i])
# df.head()
df = pd.get_dummies(df, columns = ['Airline', 'Source', 'Destination', 'Additional_Info', 'Flight_Time','month','weekday'])
pd.set_option('display.max_columns', 50)
df.head()
# + id="IGNOoVHyInn5"
# + colab={"base_uri": "https://localhost:8080/"} id="aramgqVVI__y" outputId="f6c06d17-8e57-4dab-b945-5e6f1852b8eb"
df.shape
# + colab={"base_uri": "https://localhost:8080/"} id="11g9F8VSJdew" outputId="cc477f89-cb71-413d-d65a-79d32928e471"
# Dropping Duplicates
df = df.drop_duplicates()
df.shape
# + colab={"base_uri": "https://localhost:8080/", "height": 297} id="DuPKGFP5JeUS" outputId="c2b835df-a928-4007-cac9-8c1f6b4f09a1"
# Box-plot
sns.boxplot(df['Price'])
# + id="XU81SYeuJyK-"
df.to_csv('final_data.csv', index = None)
# + [markdown] id="xzNZ_WeowyNd"
# ### **Data for Model**
# + colab={"base_uri": "https://localhost:8080/"} id="Uz3r4DicJ4MC" outputId="a7bf1337-7bcc-42f0-e813-957cbcaada16"
y = df['Price']
X = df.drop('Price', axis = 1)
X.info()
# + [markdown] id="sYFA8dpIdsmw"
# Data Scaling
# + colab={"base_uri": "https://localhost:8080/", "height": 290} id="2IP38Ym6w-9q" outputId="3c8092ab-9506-47cf-a884-60066132dbbd"
from sklearn.preprocessing import StandardScaler
s = StandardScaler()
X = pd.DataFrame(s.fit_transform(df),columns = df.columns)
X = X.drop('Price',axis=1)
X.head()
# + colab={"base_uri": "https://localhost:8080/"} id="4nPjokOi0-Rq" outputId="f8d101a8-26b5-4ace-8a1c-2fce23297b85"
X.info()
# + [markdown] id="Gupm37ikg5a7"
# ### **Model Testing**
# + id="dO47BjLc09Sa"
# Splitting data into train and test data
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.metrics import mean_squared_error as mse
from sklearn.metrics import r2_score
from math import sqrt
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.2)
# + id="Puwc2HdjesY_"
from sklearn.ensemble import RandomForestRegressor
model = RandomForestRegressor()
model.fit(X_train, y_train)
y_train_pred = model.predict(X_train)
y_test_pred = model.predict(X_test)
# + colab={"base_uri": "https://localhost:8080/"} id="pfowvazIe_cQ" outputId="71d03c07-a90c-4b4f-f8a9-adf7beae6757"
print("Train Results for Random Forest Regressor Model:")
print(50 * '-')
print("Root mean squared error: ", sqrt(mse(y_train.values, y_train_pred)))
print("R-squared: ", r2_score(y_train.values, y_train_pred))
# + id="xQ1vEBiFg3-O"
from sklearn.model_selection import cross_val_score
from sklearn.metrics import mean_squared_error
def train(model,X,y):
model.fit(X_train, y_train)
y_train_pred = model.predict(X_train)
y_test_pred = model.predict(X_test)
print("Train Results for Model:")
print(50 * '-')
print("Root mean squared error: ", sqrt(mse(y_train.values, y_train_pred)))
print("R-squared: ", r2_score(y_train.values, y_train_pred))
print(50*'#')
print("Test Results for Decision Tree Regressor Model:")
print(50 * '-')
print("Root mean squared error: ", sqrt(mse(y_test, y_test_pred)))
print("R-squared: ", r2_score(y_test, y_test_pred))
# + colab={"base_uri": "https://localhost:8080/"} id="IRqG2VsjdvX8" outputId="09868132-b652-4c62-c089-a269b2493fb5"
X.shape
# + colab={"base_uri": "https://localhost:8080/"} id="reKaXf2RhCrS" outputId="6915e15e-0941-4a33-bccf-000b2f459d14"
print(type(X))
print(type(y))
# + colab={"base_uri": "https://localhost:8080/"} id="8gGaxAkghD9x" outputId="6d0b2056-7059-4a1a-e74d-1b6bd4cefb11"
y.shape
# + colab={"base_uri": "https://localhost:8080/"} id="p-GLhf-OhXic" outputId="504c02a1-e78d-4f9f-c90b-217ea4caa071"
y.head()
# + colab={"base_uri": "https://localhost:8080/", "height": 1000} id="NGDuMHpKhY8M" outputId="87ddb4ec-abbc-40d6-8f19-d067be495830"
from sklearn.linear_model import LinearRegression, Ridge, Lasso
model = LinearRegression()
train(model, X, y)
plt.figure(figsize = (10, 10))
coef = pd.Series(model.coef_, X.columns).sort_values()
coef.plot(kind='bar', title="Model Coefficients")
# + colab={"base_uri": "https://localhost:8080/", "height": 1000} id="IhV8l1OLuE5G" outputId="d87732a0-7f7a-49c3-b9ff-ad2cb45118bc"
model = Lasso()
train(model, X, y)
coef = pd.Series(model.coef_, X.columns).sort_values()
plt.figure(figsize = (10, 10))
coef.plot(kind='bar', title="Model Coefficients")
# + colab={"base_uri": "https://localhost:8080/", "height": 1000} id="L5-NdbS_1cWE" outputId="76cd189b-1127-440e-c633-3c4f060cfc95"
model = Ridge(normalize=True)
train(model, X, y)
coef = pd.Series(model.coef_, X.columns).sort_values()
plt.figure(figsize = (10, 10))
coef.plot(kind='bar', title="Model Coefficients")
# + colab={"base_uri": "https://localhost:8080/", "height": 1000} id="EPi0s3Ac0erQ" outputId="e24153da-a7ef-48bd-c30d-3e49e83068ab"
from sklearn.tree import DecisionTreeRegressor
model = DecisionTreeRegressor()
train(model, X, y)
coef = pd.Series(model.feature_importances_, X.columns).sort_values(ascending=False)
plt.figure(figsize = (10, 10))
coef.plot(kind='bar', title="Feature Importance")
# + colab={"base_uri": "https://localhost:8080/", "height": 1000} id="T3RM9ARC1hvX" outputId="14d5b3f9-d277-4f4c-f9aa-526cffcbd9bf"
from sklearn.ensemble import RandomForestRegressor
model = RandomForestRegressor()
train(model, X, y)
coef = pd.Series(model.feature_importances_, X.columns).sort_values(ascending=False)
plt.figure(figsize = (10, 10))
coef.plot(kind='bar', title="Feature Importance")
# + colab={"base_uri": "https://localhost:8080/", "height": 1000} id="0bUqww1m1mZs" outputId="20b06c35-ad75-4bd1-8a3a-9c266b0450a0"
from sklearn.ensemble import ExtraTreesRegressor
model = ExtraTreesRegressor()
train(model, X, y)
coef = pd.Series(model.feature_importances_, X.columns).sort_values(ascending=False)
plt.figure(figsize = (10, 10))
coef.plot(kind='bar', title="Feature Importance")
# + id="nJwrJJNq1vZa"
# + [markdown] id="N8m_Z48guUfd"
# ### **Hyperparameter Tuning**
# + colab={"base_uri": "https://localhost:8080/"} id="5V68DBBludUj" outputId="16a6f9bd-e6c2-4938-e08a-156eeb397a1f"
from sklearn.model_selection import RandomizedSearchCV
tuned_params = {'max_depth': [int(x) for x in np.linspace(5, 30, num = 6)],
'n_estimators': [int(x) for x in np.linspace(start = 100, stop = 1200, num = 12)],
'max_features': ['auto', 'sqrt'],
'min_samples_split': [2, 5, 10, 15, 100],
'min_samples_leaf': [1, 2, 5, 10]
}
model = RandomizedSearchCV(RandomForestRegressor(), tuned_params, n_iter=10, scoring = 'neg_mean_absolute_error', cv=5, n_jobs=1,verbose=2,random_state=42)
train(model, X, y)
# + colab={"base_uri": "https://localhost:8080/", "height": 497} id="wl5u5Kqau_z0" outputId="2dd87689-182e-48c8-8135-71c2753f374d"
pred=model.predict(X_test)
plt.figure(figsize = (8,8))
sns.distplot(y_test-pred)
plt.show()
# + colab={"base_uri": "https://localhost:8080/", "height": 497} id="M-YpxPkV0mGG" outputId="e4149a92-b682-4198-8b1d-36edb9da7379"
plt.figure(figsize = (8,8))
plt.scatter(y_test, pred, alpha = 0.5)
plt.xlabel("y_test")
plt.ylabel("y_pred")
plt.show()
# + colab={"base_uri": "https://localhost:8080/"} id="FcGcfFvZ0x_v" outputId="1bd00db7-0c4e-40cb-d0e9-de0e0275f88f"
from sklearn import metrics
print('MAE:', metrics.mean_absolute_error(y_test, pred))
print('MSE:', metrics.mean_squared_error(y_test, pred))
print('RMSE:', np.sqrt(metrics.mean_squared_error(y_test, pred)))
# + [markdown] id="FlV-DDl21XKa"
# ### **Saving the Model**
# + id="V-H_NLCt1aLC"
import pickle
# open a file where you want to store the data
file = open('flight_rate_rf.pkl', 'wb')
# dump information to that file
pickle.dump(model, file)
# + id="23ssvM7S1oSK"
temp = open('flight_rate_rf.pkl','rb')
fin = pickle.load(temp)
# + id="F4XYs5Wn2K2f"
pred_fin = fin.predict(X_test)
# + colab={"base_uri": "https://localhost:8080/"} id="X9eeRd162VrI" outputId="266cac91-bf5c-4f13-96a2-741779185786"
metrics.r2_score(y_test, pred_fin)
| 16,057 |
/Notebooks/py/maheshchincholikar/logistic-regression-on-titanic-dataset/logistic-regression-on-titanic-dataset.ipynb
|
6b28756425a6ca0333af2358134d412df8e658a0
|
[] |
no_license
|
nischalshrestha/automatic_wat_discovery
|
https://github.com/nischalshrestha/automatic_wat_discovery
| 2 | 1 | null | null | null | null |
Jupyter Notebook
| false | false |
.py
| 10,816 |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.15.2
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ## 1. Data Exploration
# +
# %matplotlib inline
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import matplotlib.patches as patches
# -
# To stop potential randomness
seed = 42
rng = np.random.RandomState(seed)
data = pd.read_csv('../datasets/blood_cell_detection/blood_cell_detection.csv')
data.head()
data.shape
data.filename.unique().shape
data.cell_type.unique()
data.cell_type.value_counts()
# +
# take random index
idx = rng.choice(range(data.shape[0]))
# get corresponding image
image = plt.imread('../datasets/blood_cell_detection/images/' + data.iloc[idx].filename.split('.')[0] + '.jpg')
# draw emtpy figure
fig = plt.figure()
# define axis
ax = fig.add_axes([0, 0, 1, 1])
# plot image
plt.imshow(image)
# for each row
for _, row in data[data.filename == data.iloc[idx].filename].iterrows():
# get actual coordinates
xmin = row.xmin
xmax = row.xmax
ymin = row.ymin
ymax = row.ymax
# find width and height
width = xmax - xmin
height = ymax - ymin
# set different bounding box colors
if row.cell_type == 'RBC':
edgecolor = 'r'
elif row.cell_type == 'WBC':
edgecolor = 'b'
elif row.cell_type == 'Platelets':
edgecolor = 'g'
# create rectangular patch
rect = patches.Rectangle((xmin, ymin), width, height, edgecolor=edgecolor, facecolor='none')
# add patch
ax.add_patch(rect)
# print image shape
print('Image is of shape', image.shape)
# show figure
plt.show()
# -
# ## 2. Data Loading and Preprocessing
# keep only wbc's
data = data.loc[data.cell_type == 'WBC'].copy()
# +
# drop images having more than one wbc
data = data.drop_duplicates(subset=['filename', 'cell_type'], keep=False)
# +
idx = rng.choice(range(data.shape[0]))
image = plt.imread('../datasets/blood_cell_detection/images/' + data.iloc[idx].filename.split('.')[0] + '.jpg')
fig = plt.figure()
ax = fig.add_axes([0, 0, 1, 1])
plt.imshow(image)
for _, row in data[data.filename == data.iloc[idx].filename].iterrows():
xmin = row.xmin
xmax = row.xmax
ymin = row.ymin
ymax = row.ymax
width = xmax - xmin
height = ymax - ymin
if row.cell_type == 'RBC':
edgecolor = 'r'
elif row.cell_type == 'WBC':
edgecolor = 'b'
elif row.cell_type == 'Platelets':
edgecolor = 'g'
rect = patches.Rectangle((xmin, ymin), width, height, edgecolor=edgecolor, facecolor='none')
ax.add_patch(rect)
print('Image is of shape', image.shape)
plt.show()
# -
row = data.iloc[idx]
row
# +
patch_1_xmin, patch_1_ymin, patch_1_xmax, patch_1_ymax = 0, 0, 320, 240
patch_2_xmin, patch_2_ymin, patch_2_xmax, patch_2_ymax = 320, 0, 640, 240
patch_3_xmin, patch_3_ymin, patch_3_xmax, patch_3_ymax = 0, 240, 320, 480
patch_4_xmin, patch_4_ymin, patch_4_xmax, patch_4_ymax = 320, 240, 640, 480
patch_5_xmin, patch_5_ymin, patch_5_xmax, patch_5_ymax = 160, 120, 480, 360
patch_1 = image[patch_1_ymin:patch_1_ymax, patch_1_xmin:patch_1_xmax, :]
patch_2 = image[patch_2_ymin:patch_2_ymax, patch_2_xmin:patch_2_xmax, :]
patch_3 = image[patch_3_ymin:patch_3_ymax, patch_3_xmin:patch_3_xmax, :]
patch_4 = image[patch_4_ymin:patch_4_ymax, patch_4_xmin:patch_4_xmax, :]
patch_5 = image[patch_5_ymin:patch_5_ymax, patch_5_xmin:patch_5_xmax, :]
# -
patch_1.shape, patch_2.shape, patch_3.shape, patch_4.shape, patch_5.shape
plt.imshow(image)
plt.imshow(patch_1)
plt.imshow(patch_2)
plt.imshow(patch_3)
plt.imshow(patch_4)
plt.imshow(patch_5)
# +
# for patch_1
Irect_xmin, Irect_ymin = max(row.xmin, patch_1_xmin), max(row.ymin, patch_1_ymin)
Irect_xmax, Irect_ymax = min(row.xmax, patch_1_xmax), min(row.ymax, patch_1_ymax)
if Irect_xmax < Irect_xmin or Irect_ymax < Irect_ymin:
target_1 = 0
else:
Iarea = np.abs((Irect_xmax - Irect_xmin) * (Irect_ymax - Irect_ymin))
Parea = (patch_1_xmax - patch_1_xmin)*(patch_1_ymax - patch_1_ymin)
target_1 = Iarea / Parea
target_1 = int(target_1 > 0.1)
# -
target_1
# +
# for patch_2
Irect_xmin, Irect_ymin = max(row.xmin, patch_2_xmin), max(row.ymin, patch_2_ymin)
Irect_xmax, Irect_ymax = min(row.xmax, patch_2_xmax), min(row.ymax, patch_2_ymax)
if Irect_xmax < Irect_xmin or Irect_ymax < Irect_ymin:
target_2 = 0
else:
Iarea = np.abs((Irect_xmax - Irect_xmin) * (Irect_ymax - Irect_ymin))
Parea = (patch_2_xmax - patch_2_xmin)*(patch_2_ymax - patch_2_ymin)
target_2 = Iarea / Parea
target_2 = int(target_2 > 0.1)
# +
# for patch_3
Irect_xmin, Irect_ymin = max(row.xmin, patch_3_xmin), max(row.ymin, patch_3_ymin)
Irect_xmax, Irect_ymax = min(row.xmax, patch_3_xmax), min(row.ymax, patch_3_ymax)
if Irect_xmax < Irect_xmin or Irect_ymax < Irect_ymin:
target_3 = 0
else:
Iarea = np.abs((Irect_xmax - Irect_xmin) * (Irect_ymax - Irect_ymin))
Parea = (patch_3_xmax - patch_3_xmin)*(patch_3_ymax - patch_3_ymin)
target_3 = Iarea / Parea
target_3 = int(target_3 > 0.1)
# +
# for patch_4
Irect_xmin, Irect_ymin = max(row.xmin, patch_4_xmin), max(row.ymin, patch_4_ymin)
Irect_xmax, Irect_ymax = min(row.xmax, patch_4_xmax), min(row.ymax, patch_4_ymax)
if Irect_xmax < Irect_xmin or Irect_ymax < Irect_ymin:
target_4 = 0
else:
Iarea = np.abs((Irect_xmax - Irect_xmin) * (Irect_ymax - Irect_ymin))
Parea = (patch_4_xmax - patch_4_xmin)*(patch_4_ymax - patch_4_ymin)
target_4 = Iarea / Parea
target_4 = int(target_4 > 0.1)
# +
# for patch_5
Irect_xmin, Irect_ymin = max(row.xmin, patch_5_xmin), max(row.ymin, patch_5_ymin)
Irect_xmax, Irect_ymax = min(row.xmax, patch_5_xmax), min(row.ymax, patch_5_ymax)
if Irect_xmax < Irect_xmin or Irect_ymax < Irect_ymin:
target_5 = 0
else:
Iarea = np.abs((Irect_xmax - Irect_xmin) * (Irect_ymax - Irect_ymin))
Parea = (patch_5_xmax - patch_5_xmin)*(patch_5_ymax - patch_5_ymin)
target_5 = Iarea / Parea
target_5 = int(target_5 > 0.1)
# -
target_1, target_2, target_3, target_4, target_5
patch_2.shape
patch_2
from skimage.transform import resize
patch_2 = resize(patch_2, (224, 224, 3), preserve_range=True)
patch_2
patch_2.shape
# +
# create empty lists
X = []
y = []
# set patch co-ordinates
patch_1_xmin, patch_1_xmax, patch_1_ymin, patch_1_ymax = 0, 320, 0, 240
patch_2_xmin, patch_2_xmax, patch_2_ymin, patch_2_ymax = 320, 640, 0, 240
patch_3_xmin, patch_3_xmax, patch_3_ymin, patch_3_ymax = 0, 320, 240, 480
patch_4_xmin, patch_4_xmax, patch_4_ymin, patch_4_ymax = 320, 640, 240, 480
patch_5_xmin, patch_5_xmax, patch_5_ymin, patch_5_ymax = 160, 480, 120, 360
for idx, row in data.iterrows():
# read image
image = plt.imread('../datasets/blood_cell_detection/images/' + row.filename)
# extract patches
patch_1 = image[patch_1_ymin:patch_1_ymax, patch_1_xmin:patch_1_xmax, :]
patch_2 = image[patch_2_ymin:patch_2_ymax, patch_2_xmin:patch_2_xmax, :]
patch_3 = image[patch_3_ymin:patch_3_ymax, patch_3_xmin:patch_3_xmax, :]
patch_4 = image[patch_4_ymin:patch_4_ymax, patch_4_xmin:patch_4_xmax, :]
patch_5 = image[patch_5_ymin:patch_5_ymax, patch_5_xmin:patch_5_xmax, :]
# set default values
target_1 = target_2 = target_3 = target_4 = target_5 = Iarea = 0
# figure out if the patch contains the object
## for patch_1
Irect_xmin, Irect_ymin = max(row.xmin, patch_1_xmin), max(row.ymin, patch_1_ymin)
Irect_xmax, Irect_ymax = min(row.xmax, patch_1_xmax), min(row.ymax, patch_1_ymax)
if Irect_xmax < Irect_xmin or Irect_ymax < Irect_ymin:
target_1 = Iarea = 0
else:
Iarea = np.abs((Irect_xmax - Irect_xmin) * (Irect_ymax - Irect_ymin))
Parea = (patch_1_xmax - patch_1_xmin)*(patch_1_ymax - patch_1_ymin)
target_1 = Iarea / Parea
target_1 = int(target_1 > 0.1)
## for patch_2
Irect_xmin, Irect_ymin = max(row.xmin, patch_2_xmin), max(row.ymin, patch_2_ymin)
Irect_xmax, Irect_ymax = min(row.xmax, patch_2_xmax), min(row.ymax, patch_2_ymax)
if Irect_xmax < Irect_xmin or Irect_ymax < Irect_ymin:
target_2 = Iarea = 0
else:
Iarea = np.abs((Irect_xmax - Irect_xmin) * (Irect_ymax - Irect_ymin))
Parea = (patch_2_xmax - patch_2_xmin)*(patch_2_ymax - patch_2_ymin)
target_2 = Iarea / Parea
target_2 = int(target_2 > 0.1)
## for patch_3
Irect_xmin, Irect_ymin = max(row.xmin, patch_3_xmin), max(row.ymin, patch_3_ymin)
Irect_xmax, Irect_ymax = min(row.xmax, patch_3_xmax), min(row.ymax, patch_3_ymax)
if Irect_xmax < Irect_xmin or Irect_ymax < Irect_ymin:
target_3 = Iarea = 0
else:
Iarea = np.abs((Irect_xmax - Irect_xmin) * (Irect_ymax - Irect_ymin))
Parea = (patch_3_xmax - patch_3_xmin)*(patch_3_ymax - patch_3_ymin)
target_3 = Iarea / Parea
target_3 = int(target_3 > 0.1)
## for patch_4
Irect_xmin, Irect_ymin = max(row.xmin, patch_4_xmin), max(row.ymin, patch_4_ymin)
Irect_xmax, Irect_ymax = min(row.xmax, patch_4_xmax), min(row.ymax, patch_4_ymax)
if Irect_xmax < Irect_xmin or Irect_ymax < Irect_ymin:
target_4 = Iarea = 0
else:
Iarea = np.abs((Irect_xmax - Irect_xmin) * (Irect_ymax - Irect_ymin))
Parea = (patch_4_xmax - patch_4_xmin)*(patch_4_ymax - patch_4_ymin)
target_4 = Iarea / Parea
target_4 = int(target_4 > 0.1)
## for patch_5
Irect_xmin, Irect_ymin = max(row.xmin, patch_5_xmin), max(row.ymin, patch_5_ymin)
Irect_xmax, Irect_ymax = min(row.xmax, patch_5_xmax), min(row.ymax, patch_5_ymax)
if Irect_xmax < Irect_xmin or Irect_ymax < Irect_ymin:
target_5 = Iarea = 0
else:
Iarea = np.abs((Irect_xmax - Irect_xmin) * (Irect_ymax - Irect_ymin))
Parea = (patch_5_xmax - patch_5_xmin)*(patch_5_ymax - patch_5_ymin)
target_5 = Iarea / Parea
target_5 = int(target_5 > 0.1)
# resize the patches
patch_1 = resize(patch_1, (224, 224, 3), preserve_range=True)
patch_2 = resize(patch_2, (224, 224, 3), preserve_range=True)
patch_3 = resize(patch_3, (224, 224, 3), preserve_range=True)
patch_4 = resize(patch_4, (224, 224, 3), preserve_range=True)
patch_5 = resize(patch_5, (224, 224, 3), preserve_range=True)
# create final input data
X.extend([patch_1, patch_2, patch_3, patch_4, patch_5])
# create target data
y.extend([target_1, target_2, target_3, target_4, target_5])
# convert these lists to single numpy array
X = np.array(X)
y = np.array(y)
# -
from keras.applications.vgg16 import preprocess_input
X_preprocessed = preprocess_input(X, mode='tf')
from sklearn.model_selection import train_test_split
X_train, X_valid, Y_train, Y_valid=train_test_split(X_preprocessed, y, test_size=0.3, random_state=42)
# ## 3. Model Building
# +
from keras.applications.vgg16 import VGG16
from keras.models import Sequential
from keras.layers import Dense, Dropout, InputLayer
# +
base_model = VGG16(weights='imagenet', include_top=False, input_shape=(224, 224, 3))
features_train = base_model.predict(X_train)
features_valid = base_model.predict(X_valid)
# +
max_val = features_train.max()
features_train /= max_val
features_valid /= max_val
# -
features_train = features_train.reshape(features_train.shape[0],7*7*512)
features_valid = features_valid.reshape(features_valid.shape[0],7*7*512)
# +
model=Sequential()
model.add(InputLayer((7*7*512, )))
model.add(Dense(1024, activation='relu'))
model.add(Dropout(0.3))
model.add(Dense(units=1, activation='sigmoid'))
model.compile(loss='binary_crossentropy', optimizer="adam", metrics=['accuracy'])
# -
model.fit(features_train, Y_train, epochs=15, batch_size=512,validation_data=(features_valid,Y_valid))
# +
# get predictions
predictions = model.predict_classes(features_valid).ravel()
prediction_probabilities = model.predict(features_valid).ravel()
# extract validation images
_, valid_x, _, _ = train_test_split(X,y,test_size=0.3, random_state=42)
# get a random index
index = rng.choice(range(len(valid_x)))
# get the corresponding image
img = valid_x[index]
# get the corresponding probability
prob = (prediction_probabilities * 100).astype(int)[index]
# print this probability
print(prob , '% sure that it is WBC')
# show image
plt.imshow(img)
# +
# extract index of patch
for i in range(X.shape[0]):
if np.array_equal(X[i, :], img):
break
# get the patch number
patch_num = (i % 5) + 1
# read the corresponding image
image = plt.imread('../datasets/blood_cell_detection/images/' + data.iloc[int(i / 5)].filename)
# plot an empty figure and define axis
fig = plt.figure()
ax = fig.add_axes([0, 0, 1, 1])
# plot image
ax.imshow(image)
# get minimum and maximum co-ordinates
xmin = eval('patch_' + str(patch_num) + '_xmin')
xmax = eval('patch_' + str(patch_num) + '_xmax')
ymin = eval('patch_' + str(patch_num) + '_ymin')
ymax = eval('patch_' + str(patch_num) + '_ymax')
# get width and height
width = xmax - xmin
height = ymax - ymin
# define a rectangular patch
rect = patches.Rectangle((xmin, ymin), width, height, edgecolor='b', facecolor='none')
# annotate the patch
ax.annotate(xy=(xmin, ymin), s='prob: ' + str(prob) + "%")
# add the rectangular patch
ax.add_patch(rect)
# show figure
plt.show()
# -
| 13,578 |
/vision model/DL_model.ipynb
|
4ebfaa9455f36e4b30eb57a11853e0720d15d228
|
[] |
no_license
|
AntaraB1005/Autonomous-Waste-Segregation-Robotics
|
https://github.com/AntaraB1005/Autonomous-Waste-Segregation-Robotics
| 0 | 0 | null | null | null | null |
Jupyter Notebook
| false | false |
.py
| 2,242,909 |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.15.2
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + colab={"base_uri": "https://localhost:8080/"} id="UwS2ggW6RJ7V" outputId="dc14a53e-537b-4646-a8f3-ffe369af1785"
from google.colab import drive
drive.mount('/content/drive')
# + id="SmzQmZdKDfSF"
from tensorflow.keras import models
from tensorflow.keras import layers
from tensorflow.keras import optimizers
import os
import glob
import shutil
import sys
import numpy as np
from skimage.io import imread
import matplotlib.pyplot as plt
from IPython.display import Image
# %matplotlib inline
# + id="mp4TOw5rJs0M"
batch_size = 32
width = 128
height = 128
epochs = 40
NUM_TRAIN = 2000
NUM_TEST = 1000
dropout_rate = 0.2
input_shape = (height, width, 1)
# + colab={"base_uri": "https://localhost:8080/", "height": 574} id="wpJ7Lc6GaHPk" outputId="3cd6637b-c339-45fd-d074-89ce5bee56e8"
import cv2
from matplotlib import pyplot as plt
# !sudo ls "/content/drive/My Drive/training data/0"
# %cd /content/drive/My\ Drive/training\ data
img=plt.imread('0/download (11).jpg')
plt.imshow(img)
plt.show()
# + colab={"base_uri": "https://localhost:8080/"} id="LW7z-nEbD6Ho" outputId="ed62f7f3-88bc-46cf-cc61-61dd27e651d2"
datadir='/content/drive/My Drive/training data'
path='/content/drive/My Drive/training data'
CATEGORIES=['0','1','2','3']
imgsize=331
train=[]
def training_data():
for category in CATEGORIES:
path=os.path.join(datadir,category)
class_num=CATEGORIES.index(category)
for img in os.listdir(path):
img_arr=cv2.imread(os.path.join(path,img))
new_arr=cv2.resize(img_arr,(imgsize,imgsize))
train.append([new_arr,class_num])
training_data()
print(len(train))
# + colab={"base_uri": "https://localhost:8080/"} id="QSDxrUu4GrTr" outputId="bcdf4a21-a17b-40cb-aaef-e8f832be2423"
test=[]
path='/content/drive/My Drive/testing_data'
def testing_data():
for img in os.listdir(path):
img_arr=cv2.imread(os.path.join(path,img))
new_arr=cv2.resize(img_arr,(imgsize,imgsize))
test.append(new_arr)
testing_data()
print(len(test))
# + colab={"base_uri": "https://localhost:8080/"} id="ffzk-radG7M6" outputId="f3417b9c-1f85-496f-b7fd-d4b27e777dc2"
import random
random.shuffle(train)
x=[]
y=[]
for features,label in train:
x.append(features)
y.append(label)
x=np.array(x)
y=np.array(y)
print(x.shape)
print(len(np.unique(y)))
# + id="dfJrsvpYZ9UN"
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense,Conv2D,Dropout,MaxPooling2D,Activation,Flatten,BatchNormalization
x=np.array(x)
y=np.array(y)
x=x.astype('float32')/255
# + id="JfGmOFoUaBbH"
from sklearn.model_selection import train_test_split
x_train, x_test, y_train, y_test = train_test_split(x, y, test_size=0.25, shuffle=True)
x_train,x_valid, y_train, y_valid= train_test_split(x_train, y_train, test_size=0.25, shuffle=True)
# + [markdown] id="-IRZoPflv7fw"
# # New Section
# + colab={"base_uri": "https://localhost:8080/"} id="zHa3B2bkFxaq" outputId="83bcadd3-3ac9-4ab5-a913-b97f285b8762"
batch_size=64
print(x_train.shape)
print(x_valid.shape)
# + id="D9MM2WA49zyj"
from tensorflow.keras.preprocessing.image import ImageDataGenerator
train_datagen = ImageDataGenerator(
rotation_range=180,
width_shift_range=0.1,
height_shift_range=0.1,
shear_range=0.2,
zoom_range=0.1,
horizontal_flip=True,
vertical_flip=True,
fill_mode='nearest')
# Note that the validation data should not be augmented!
test_datagen = ImageDataGenerator()
x_train=np.reshape(x_train,(238,331,331,3))
train_generator = train_datagen.flow(x_train,y_train)
x_valid=np.reshape(x_valid,(80,331,331,3))
validation_generator = test_datagen.flow(x_valid,y_valid)
# + [markdown] id="Y1nznF2EJUnq"
# Own model
# + colab={"base_uri": "https://localhost:8080/"} id="HNRAsIEvIb7B" outputId="76fb832a-e073-4b17-e372-390814663ec7"
from tensorflow.keras.applications import VGG16
num_classes = 4
model = Sequential()
model.add(VGG16(include_top=False, pooling='avg', weights='imagenet'))
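# The pretrained VGG16 convolutional base (ImageNet weights, no top) acts as a
# fixed feature extractor; only the dense head added below is trained, since the
# base is frozen a few lines further down via model.layers[0].trainable = False.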
model.add(Dense(409, activation='relu'))
#model.add(BatchNormalization())
model.add(Dense(409, activation='relu'))
model.add(Dense(num_classes, activation='softmax'))
model.layers[0].trainable = False
# + colab={"base_uri": "https://localhost:8080/"} id="4iAX7AegDHUS" outputId="e68bf3c6-a10e-46fa-c3a0-1c5791dfc319"
model.summary()
# + colab={"base_uri": "https://localhost:8080/"} id="_N-F9Z-uDJAi" outputId="16bd0395-5bcb-45bf-b1dd-0f4544d35152"
print('This is the number of trainable layers '
'before freezing the conv base:', len(model.trainable_weights))
# + id="-RI5T5g6fvFF"
import tensorflow
import datetime
# %load_ext tensorboard
# + id="bW4RMTnEGFUi"
# rm -rf ./logs/
# + colab={"base_uri": "https://localhost:8080/"} id="FMjt8tCcDOoC" outputId="a7684186-51e6-49f7-ecce-4678e3f5e5de"
opt = tensorflow.keras.optimizers.Adam(learning_rate=0.001)
model.compile(loss='sparse_categorical_crossentropy',
optimizer=opt,
metrics=['acc'])
from keras.callbacks import ModelCheckpoint
checkpointer = ModelCheckpoint(filepath='transfermodel.weights.best.hdf5', verbose=1,
save_best_only=True)
log_dir = "logs/fit/" + datetime.datetime.now().strftime("%Y%m%d-%H%M%S")
tensorboard = tensorflow.keras.callbacks.TensorBoard(log_dir=log_dir, histogram_freq=1)
history = model.fit_generator(
generator=train_generator,
validation_data=validation_generator, validation_steps=1,
epochs=15,
verbose=1,
callbacks=[checkpointer,tensorboard],
use_multiprocessing=True)
# + id="7XkBqcCOdnDv"
# %tensorboard --logdir logs/fit
# + id="hCOVlRi13Anv"
score = model.evaluate(x_test, y_test,verbose=0)
print('\n', 'Test accuracy:{0:.2f}'.format(score[1]))
# + [markdown] id="UMYdtbTP66yV"
# # # !cp '/gdrive/My Drive/my_file' 'my_file'!cp '/gdrive/My Drive/my_file' 'my_file'!cp '/gdrive/My Drive/my_file' 'my_file'!cp '/gdrive/My Drive/my_file' 'my_file'!cp '/gdrive/My Drive/my_file' 'my_file'!cp '/gdrive/My Drive/my_file' 'my_file'!cp '/gdrive/My Drive/my_file' 'my_file'!cp '/gdrive/My Drive/my_file' 'my_file'!!!!!!!!!!!!!!!!!!!!!!!dc124cs1d15c1d591d!cassaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaadgrgrggrv 33ewsdaasqdwdadasasdsac1# New Section
# + id="HvVt5TwoEDz6"
os.makedirs("./models", exist_ok=True)
model.save('./models/model01.h5')
# + id="SLDHANW_HPVv"
from tensorflow.keras.preprocessing import image
def predict_image(arr):
# Read the image and resize it
#img = image.load_img(img_path, target_size=(height, width))
# Convert it to a Numpy array with target shape.
#x = image.img_to_array(img)
# Reshape
#x = x.reshape((1,) + x.shape)
arr=np.reshape(arr,(1,331,331,3))
arr=arr.astype('float32')/255
result = model.predict(arr)
j=np.argmax(result)
return j
# + colab={"background_save": true} id="tP4a1i7pI1au" outputId="de926273-7b82-4b5f-8519-1769af16f9fc"
from PIL import Image
temp=0
for i in range(31):
temp=predict_image(test[i])
if temp==0:
text="polystyrene"
elif temp==1:
text="LDPE or PET"
elif temp==2:
text="HDPE or PVC"
elif temp==3:
text="LDPE or HDPE or PET"
print(" ",text)
img = Image.fromarray(test[i], 'RGB')
plt.imshow(img)
plt.show()
| 7,544 |
/For_CS3100_Fall2020/11_CFG/RE2_NFA_PT.ipynb
|
c889819ba28135138d439ee4cff735b28dbaf85f
|
[] |
no_license
|
altajack11/Jove
|
https://github.com/altajack11/Jove
| 0 | 0 | null | null | null | null |
Jupyter Notebook
| false | false |
.py
| 18,929 |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.15.2
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ___
#
# <a href='http://www.pieriandata.com'> <img src='../Pierian_Data_Logo.png' /></a>
# ___
# # Seaborn Exercises
#
# Time to practice your new seaborn skills! Try to recreate the plots below (don't worry about color schemes, just the plot itself).
# ## The Data
#
# We will be working with a famous titanic data set for these exercises. Later on in the Machine Learning section of the course, we will revisit this data, and use it to predict survival rates of passengers. For now, we'll just focus on the visualization of the data with seaborn:
import seaborn as sns
import matplotlib.pyplot as plt
# %matplotlib inline
sns.set_style('whitegrid')
titanic = sns.load_dataset('titanic')
titanic.head()
# # Exercises
#
# ** Recreate the plots below using the titanic dataframe. There are very few hints since most of the plots can be done with just one or two lines of code and a hint would basically give away the solution. Keep careful attention to the x and y labels for hints.**
#
# ** *Note! In order to not lose the plot image, make sure you don't code in the cell that is directly above the plot, there is an extra cell above that one which won't overwrite that plot!* **
sns.jointplot(x='fare',y='age',data=titanic)
sns.distplot(titanic.fare,kde=False,bins=30,color='red')
sns.boxplot(x='class',y='age',data=titanic,palette='rainbow')
sns.swarmplot(x='class',y='age',data=titanic,palette='Set2')
sns.countplot(x='sex',data=titanic)
sns.heatmap(titanic.corr(),cmap='coolwarm')
plt.title('titanic.corr()')
g = sns.FacetGrid(data=titanic,col='sex')
g = g.map(plt.hist, 'age')
# # Great Job!
#
# ### That is it for now! We'll see a lot more of seaborn practice problems in the machine learning section!
# + run_control={"frozen": false, "read_only": false}
tokens = ('EPS','STR','LPAREN','RPAREN','PLUS','STAR')
# Tokens in our RE are these
t_EPS = r'\'\'|\"\"' # Not allowing @ for empty string anymore! # t_EPS = r'\@'
# The following allows one lower-case, one upper-case or one digit to be used in our REs
t_STR = r'[a-zA-Z0-9]'
t_LPAREN = r'\('
t_RPAREN = r'\)'
t_PLUS = r'\+'
t_STAR = r'\*'
# Ignored characters
t_ignore = " \t"
def t_newline(t):
r'\n+'
t.lexer.lineno += t.value.count("\n")
def t_error(t):
print("Illegal character '%s'" % t.value[0])
t.lexer.skip(1)
def p_expression_plus(t):
'expression : expression PLUS catexp'
#
nfa = mk_plus_nfa(t[1]['nfa'], t[3]['nfa']) # Union of the two NFAs is returned
tree = attrDyadicInfix("+", t[1], t[3])
tree.update({'nfa':nfa})
t[0] = tree
def p_expression_plus1(t):
'expression : catexp'
#
t[0] = t[1]
def p_expression_cat(t):
'catexp : catexp ordyexp'
#
nfa = mk_cat_nfa(t[1]['nfa'], t[2]['nfa'])
#--insert new field 'nfa'
tree = attrDyadicInfix(".", t[1], t[2])
tree.update({'nfa':nfa})
t[0] = tree
def p_expression_cat1(t):
'catexp : ordyexp'
#
t[0] = t[1]
# We employ field 'ast' of the dict to record the abstract syntax tree.
# Field 'dig' holds a digraph. It too is a dict.
# Its fields are nl for the node list and el for the edge list
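# For example, for the single-letter RE "a" the attribute dict built below looks
# roughly like this (the generated state/node name suffixes vary from run to run):
#   {'nfa': <NFA accepting "a">,
#    'ast': ('str', 'a'),
#    'dig': {'nl': ['a_St1'], 'el': []}}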
def p_expression_ordy_star(t):
'ordyexp : ordyexp STAR'
#
nfa = mk_star_nfa(t[1]['nfa'])
ast = ('*', t[1]['ast'])
nlin = t[1]['dig']['nl']
elin = t[1]['dig']['el']
rootin = nlin[0]
root = NxtStateStr("R*_")
right = NxtStateStr("*_")
t[0] = {'nfa' : nfa,
'ast' : ast,
'dig' : {'nl' : [root] + nlin + [right], # this order important for proper layout!
'el' : elin + [ (root, rootin),
(root, right) ]
}}
def p_expression_ordy_paren(t):
'ordyexp : LPAREN expression RPAREN'
#
nfa = t[2]['nfa']
ast = t[2]['ast']
nlin = t[2]['dig']['nl']
elin = t[2]['dig']['el']
rootin = nlin[0]
root = NxtStateStr("(R)_")
left = NxtStateStr("(_")
right= NxtStateStr(")_")
t[0] = {'nfa' : nfa,
'ast' : ast,
'dig' : {'nl' : [root, left] + nlin + [right], #order important f. proper layout!
'el' : elin + [ (root, left),
(root, rootin),
(root, right) ]
}}
def p_expression_ordy_eps(t):
'ordyexp : EPS'
#
strn = '@'
ast = ('@', strn)
t[0] = { 'nfa' : mk_eps_nfa(),
'ast' : ast,
'dig' : {'nl' : [ strn + NxtStateStr("_") ],
'el' : []
}}
def p_expression_ordy_str(t):
'ordyexp : STR'
#
str = t[1]
nfa_STR = mk_symbol_nfa(t[1])
ast = ('str', str)
t[0] = {'nfa' : nfa_STR,
'ast' : ast,
'dig' : {'nl' : [ str + NxtStateStr("_") ],
'el' : []
}}
def p_error(t):
print("Syntax error at '%s'" % t.value)
#--
def parseRE(s):
"""In: a string s containing a regular expression.
Out: An attribute quadruple (nfa,ast,nodelist,edgelist)
"""
mylexer = lex()
myparser = yacc()
#-- pass the right lexer into the parser
p = myparser.parse(s, lexer = mylexer)
return (p['nfa'], p['ast'], p['dig']['nl'], p['dig']['el'])
#--
def mk_plus_nfa(N1, N2):
"""Given two NFAs, return their union.
"""
delta_accum = dict({})
delta_accum.update(N1["Delta"])
delta_accum.update(N2["Delta"]) # Simply accumulate the transitions
# The alphabet is inferred bottom-up; thus we must union the Sigmas
# of the NFAs!
return mk_nfa(Q = N1["Q"] | N2["Q"],
Sigma = N1["Sigma"] | N2["Sigma"],
Delta = delta_accum,
Q0 = N1["Q0"] | N2["Q0"],
F = N1["F"] | N2["F"])
def mk_cat_nfa(N1, N2):
'''Given two NFAs, return their concatenation.
'''
delta_accum = dict({})
delta_accum.update(N1["Delta"])
delta_accum.update(N2["Delta"])
# Now, introduce moves from every one of N1's final states
# to the set of N2's initial states.
for f in N1["F"]:
# However, N1's final states may already have epsilon moves to
# other N1-states!
# Expand the target of such jumps to include N2's Q0 also!
if (f, "") in N1["Delta"]:
delta_accum.update({ (f,""):(N2["Q0"] | N1["Delta"][(f, "")])
})
else:
delta_accum.update({ (f, ""): N2["Q0"] })
# In syntax-directed translation, it is impossible
# that N2 and N1 have common states. Check anyhow
# in case there are bugs elsewhere that cause it.
assert((N2["F"] & N1["F"]) == set({}))
return mk_nfa(Q = N1["Q"] | N2["Q"],
Sigma = N1["Sigma"] | N2["Sigma"],
Delta = delta_accum,
Q0 = N1["Q0"],
F = N2["F"])
def mk_star_nfa(N):
'''Given an NFA, make its star.
'''
# Follow construction from Kozen's book:
# 1) Introduce new (single) start+final state IF
# 2) Let Q0 = set({ IF })
# 2) Move on epsilon from IF to the set N[Q0]
# 3) Make N[F] non-final
# 4) Spin back from every state in N[F] to Q0
#
delta_accum = dict({})
IF = NxtStateStr()
Q0 = set({ IF }) # new set of start + final states
# Jump from IF to N's start state set
delta_accum.update({ (IF,""): N["Q0"] })
delta_accum.update(N["Delta"])
#
for f in N["F"]:
# N's final states may already have epsilon moves to
# other N-states!
# Expand the target of such jumps to include Q0 also.
if (f, "") in N["Delta"]:
delta_accum.update({ (f, ""): (Q0 | N["Delta"][(f, "")]) })
else:
delta_accum.update({ (f, ""): Q0 })
#
return mk_nfa(Q = N["Q"] | Q0,
Sigma = N["Sigma"],
Delta = delta_accum,
Q0 = Q0,
F = Q0)
def mk_eps_nfa():
"""An nfa with exactly one start+final state, which is the NFA for Epsilon.
"""
Q0 = set({ NxtStateStr() })
F = Q0
return mk_nfa(Q = Q0,
Sigma = set({}),
Delta = dict({}),
Q0 = Q0,
F = Q0)
def mk_symbol_nfa(a):
"""The NFA for a single re letter.
"""
# Make a fresh initial state
q0 = NxtStateStr()
Q0 = set({ q0 })
# Make a fresh final state
f = NxtStateStr()
F = set({ f })
return mk_nfa(Q = Q0 | F,
Sigma = set({a}),
Delta = { (q0,a): F },
Q0 = Q0,
F = F)
# +
def attrDyadicInfix(op, attr1, attr3): # <== this is what prints the parse-tree
ast = (op, (attr1['ast'], attr3['ast'])) # <== for an infix operator
nlin1 = attr1['dig']['nl']
nlin3 = attr3['dig']['nl']
nlin = nlin1 + nlin3
elin1 = attr1['dig']['el']
elin3 = attr3['dig']['el']
elin = elin1 + elin3
rootin1 = nlin1[0]
rootin3 = nlin3[0]
root = NxtStateStr("R1"+op+"R2"+"_") # NxtStateStr("$_")
left = rootin1
middle = NxtStateStr(op+"_")
right = rootin3
return {'ast' : ast,
'dig' : {'nl' : [ root, left, middle, right ] + nlin,
'el' : elin + [ (root, left),
(root, middle),
(root, right) ]
}}
def drawPT(nfa_ast_nl_el, comment="PT"):
"""Given an (nfa, ast, nl, el) quadruple where nl is the node and el the edge-list,
draw the Parse Tree by returning a dot object. Also return the NFA dot object.
"""
(nfa, ast, nl, el) = nfa_ast_nl_el
print("Drawing AST for ", ast)
dotObj_pt = Digraph(comment)
dotObj_pt.graph_attr['rankdir'] = 'TB'
for n in nl:
prNam = n.split('_')[0]
dotObj_pt.node(n, prNam, shape="oval", peripheries="1")
for e in el:
dotObj_pt.edge(e[0], e[1])
return (dotObj_nfa(nfa), dotObj_pt)
# + [markdown] run_control={"frozen": false, "read_only": false}
# # Now answer these questions
# -
parseRE("''")
(n,t) = drawPT(parseRE("''"))
n
t
(n1,t1) = drawPT(parseRE("(a*b*+cc)*"))
(n2,t2) = drawPT(parseRE("(a*b)*+cc*"))
n1
t1
n2
t2
#
# # YOUR QUESTIONS
# # Q1: Run this notebook as follows
#
# ## 1) Restart the kernel, and remove the parsetab.py file and __pycache__/
#
# ## 2) Run all the cells (ignore warnings such as this)
#
# #### WARNING: ../../../../jove/TransitionSelectors.py:22: Possible grammar rule 'fn_range' defined without p_ prefix
#
# ## 3) Look at the productions (the ```p_``` functions) and write down a context-free grammar for parsing regular expressions
#
# ### Please write YOUR ANSWER in this style (inventing suitable abbreviations to express the High-level Rule)
#
# * expression : expression PLUS catexpression
# - High-level Rule: R -> R + C
#
# * Do this for all the rules
#
# * When you have things like LPAREN, look up how the token was encoded, and write `(` instead in the High-level Rule
#
#
# # Q2: Execute these commands:
#
# * (n1,t1) = drawPT(parseRE("(a*b*+cc)*"))
#
# * (n2,t2) = drawPT(parseRE("(a*b)*+cc*"))
#
# ## By comparing n1 and t1, justify that the correct NFA formation rules have been followed
#
# ## Repeat for n2 and t2
#
#
# # Q3: Explain the workings of all the mk_X_nfa functions by arguing that they are making the right output NFA from the input NFA.
#
# ## In your explanation, explain how Q, Sigma, Delta, q0, and F are formed
| 12,122 |
/workspace/V-REP P3DX/V-REP P3DX Python.ipynb
|
7973fae42c93b1fea944cadeff7ecf7a8d19e9d8
|
[] |
no_license
|
cirocavani/MO651-Robotics
|
https://github.com/cirocavani/MO651-Robotics
| 0 | 0 | null | null | null | null |
Jupyter Notebook
| false | false |
.py
| 133,565 |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.15.2
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + pycharm={"name": "#%%\n"}
import pandas as pd
import os
# + pycharm={"name": "#%%\n"}
class company_prerocessing():
# def __init__()
    # Open a file
def open_df(folder_path):
df = pd.read_csv(folder_path)
return df
    # Balance sheet
def bs_preprocessing(df_bs, cmp_list):
        # If the 3rd column name is numeric, keep only columns 0, 2, 3, 4
if df_bs.columns[2].isdigit() == True:
            # Handle index overflow (4 years of data)
try:
if df_bs.columns[5].isdigit() == True:
df_bs_1 = df_bs.iloc[:, [0, 2, 3, 4, 5]]
except:
df_bs_1 = df_bs.iloc[:, [0, 2, 3, 4]]
        # If the 6th column name is numeric, keep only columns 1, 6, 7, 8
elif df_bs.columns[6].isdigit() == True:
            # Handle index overflow
try:
if df_bs.columns[9].isdigit() == True:
df_bs_1 = df_bs.iloc[:, [1, 6, 7, 8, 9]]
except:
df_bs_1 = df_bs.iloc[:, [1, 6, 7, 8]]
        # If the 7th column name is numeric, keep only columns 1, 7, 8, 9
elif df_bs.columns[7].isdigit() == True:
            # Handle index overflow
try:
if df_bs.columns[10].isdigit() == True:
df_bs_1 = df_bs.iloc[:, [1, 7, 8, 9, 10]]
except:
df_bs_1 = df_bs.iloc[:, [1, 7, 8, 9]]
        # If the 8th column name is numeric, keep only columns 1, 8, 9, 10
elif df_bs.columns[8].isdigit() == True:
            # Handle index overflow
try:
if df_bs.columns[11].isdigit() == True:
df_bs_1 = df_bs.iloc[:, [1, 8, 9, 10, 11]]
except:
df_bs_1 = df_bs.iloc[:, [1, 8, 9, 10]]
        # If none of the conditions above hold, take columns 0:4
else:
df_bs_1 = df_bs.iloc[:, 0:4]
        # Assign column names
if len(df_bs_1.columns) == 5:
df_bs_1.columns = ['', df_bs_1.columns[1], df_bs_1.columns[2], df_bs_1.columns[3], df_bs_1.columns[4]]
else:
df_bs_1.columns = ['', df_bs_1.columns[1], df_bs_1.columns[2], df_bs_1.columns[3]]
        # Transpose the dataframe
df_bs_1 = df_bs_1.T
        # Use row 0 as the column names
df_bs_1 = df_bs_1.rename(columns=df_bs_1.iloc[0])
        # Drop row 0
df_bs_1 = df_bs_1.drop(df_bs_1.index[0])
        # Drop the 'label_ko' column
df_bs_1.drop('label_ko', inplace=True, axis =1)
        # Reset the index and rename 'index' to 'date'
df_bs_1 = df_bs_1.reset_index().rename(columns={'index': 'date'})
        # Add a 'company' column holding the company name
df_bs_1['company'] = cmp_list
print('bs_preprocessing complete')
return df_bs_1
    # Statement of comprehensive income
def cis_preprocessing(df_cis, cmp_list):
        # When the 1st column name does not contain 'KRW'
if 'KRW' not in df.columns[1]:
        # If 3 years of data exist the try branch runs, otherwise except
try:
if 'KRW' not in df.columns[4]:
df_cis_1 = df_cis.iloc[:, 0:5]
except:
df_cis_1 = df_cis.iloc[:, 0:4]
        # When the 7th column name does not contain 'KRW'
elif 'KRW' not in df.columns[6]:
        # If 3 years of data exist the try branch runs, otherwise except
try:
if 'KRW' not in df.columns[8]:
df_cis_1 = df_cis.iloc[:, [1, 6, 7, 8]]
except:
df_cis_1= df_cis.iloc[:, [1, 6, 7]]
        # When the 8th column name does not contain 'KRW'
elif 'KRW' not in df.columns[7]:
        # If 3 years of data exist the try branch runs, otherwise except
try:
if 'KRW' not in df.columns[9]:
df_cis_1 = df_cis.iloc[:, [1, 7, 8, 9]]
except:
df_cis_1= df_cis.iloc[:, [1, 7, 8]]
        # Assign column names
if len(df_cis_1.columns) == 4:
df_cis_1.columns = ['', df_cis_1.columns[1], df_cis_1.columns[2], df_cis_1.columns[3]]
else:
df_cis_1.columns = ['', df_cis_1.columns[1], df_cis_1.columns[2]]
        # Transpose the dataframe
df_cis_1 = df_cis_1.T
        # Use row 0 as the column names
df_cis_1 = df_cis_1.rename(columns=df_cis_1.iloc[0])
        # Drop row 0
df_cis_1 = df_cis_1.drop(df_cis_1.index[0])
        # Drop the 'label_ko' column
df_cis_1.drop('label_ko', inplace=True, axis =1)
        # Reset the index and rename 'index' to 'date'
df_cis_1 = df_cis_1.reset_index().rename(columns={'index': 'date'})
        # Add a 'company' column holding the company name
df_cis_1['company'] = cmp_list
        # Keep only the characters from position 9 onward in 'date'
df_cis_1['date'] = df_cis_1['date'].str[9:]
        # When administrative and selling expenses are reported separately
try:
            if '관리비' in df_cis_1.columns and '판매비' in df_cis_1.columns:
df_cis_1['관리비'] = df_cis_1['관리비'].astype('float')
df_cis_1['판매비'] = df_cis_1['판매비'].astype('float')
df_cis_1['판매비와관리비'] = df_cis_1['관리비'] + df_cis_1['판매비']
df_cis_1.drop(['관리비', '판매비'], inplace=True, axis=1)
except:
pass
print('cis_preprocessing complete')
return df_cis_1
    # Save a dataframe to CSV
def save_df(folder_path, df, cmp_list):
df.to_csv(folder_path + cmp_list + '.csv', index=False)
# + pycharm={"name": "#%%\n"}
# if '__main__' == __name__:
FOLDER_PATH = 'data/'
# Source data paths
PATH1 = '재무상태표/'
PATH2 = '포괄손익계산서/'
# Preprocessed output paths
PATH3 = 'test/재무재표 전처리/'
PATH4 = 'test/포괄손익계산서 전처리/'
# List the input files
file_list1 = os.listdir(FOLDER_PATH +'재무상태표/')
file_list2 = os.listdir(FOLDER_PATH +'포괄손익계산서/')
# Variable definitions
var_1_dic = {'자본총계': 4, '자산총계': 4, '매입채무': 13, '매출채권': 13, '부채총계': 4, '유동부채': 4, '유동자산': 4, '재고자산': 4}
var_2_dic = {'매출원가':4, '매출총이익': 5, '법인세비용차감': 15, '당기순이익': 10, '판매비와관리비':7}
# Create empty dataframes
bs_a = pd.DataFrame(columns=['date', 'company'])
cis_a = pd.DataFrame(columns=['date', 'company'])
for cmp_list1, cmp_list2 in zip(file_list1, file_list2):
print(cmp_list1)
    # Balance sheet
df = company_prerocessing.open_df(os.path.join(FOLDER_PATH + PATH1 + cmp_list1))
df_bs_1 = company_prerocessing.bs_preprocessing(df, cmp_list1[:-13])
company_prerocessing.save_df(os.path.join(FOLDER_PATH + PATH3), df_bs_1, cmp_list1[:-13])
    # Variable preprocessing
list1 = ['date', 'company']
for var1, var1_ in var_1_dic.items():
for bs in df_bs_1.columns:
if var1 in bs:
if var1_ >= len(bs):
list1.append(bs)
print(df_bs_1[list1])
    # Concatenate balance sheet dataframes
bs_a = pd.concat([bs_a, df_bs_1[list1]])
    # Statement of comprehensive income
df = company_prerocessing.open_df(os.path.join(FOLDER_PATH + PATH2 + cmp_list2))
df_cis_1 = company_prerocessing.cis_preprocessing(df, cmp_list2[:-15])
company_prerocessing.save_df(os.path.join(FOLDER_PATH + PATH4), df_cis_1, cmp_list2[:-15])
    # Variable preprocessing
list2 = ['date', 'company']
for var2, var2_ in var_2_dic.items():
for cis in df_cis_1.columns:
if var2 in cis:
if var2_ >= len(cis):
list2.append(cis)
print(df_cis_1[list2])
    # Concatenate comprehensive-income dataframes
cis_a = pd.concat([cis_a, df_cis_1[list2]])
# Reset indices and save
bs_a.reset_index(inplace=True, drop=True)
cis_a.reset_index(inplace=True, drop=True)
company_prerocessing.save_df(os.path.join(FOLDER_PATH + PATH3), bs_a, '재무상태표 통합')
company_prerocessing.save_df(os.path.join(FOLDER_PATH + PATH4), cis_a, '포괄손익계산서 통합')
# -
# Create prior-period total-asset and total-equity columns
bs_a['전기자산총계'] = 0
bs_a['전기자본총계'] = 0
# Fill in prior-period values (the last year is dropped anyway, so this is safe)
for i in range(len(bs_a) - 1):
bs_a['전기자산총계'][i] = bs_a['자산총계'][i+1]
bs_a['전기자본총계'][i] = bs_a['자본총계'][i+1]
bs_a
result = pd.merge(bs_a, cis_a, on=['date','company'])
result.drop('date', axis=1, inplace=True)
import string
import random as rnd  # used below for rnd.randint
import numpy as np
result.columns
string.ascii_uppercase
asdf = []
# + jupyter={"outputs_hidden": true}
# Draw company names A-Z twice, then a random value for each numeric column
for k in range(2):
for j in string.ascii_uppercase:
asdf.append(j)
for i in range(1, len(result.columns)):
result[result.columns[i]] = pd.to_numeric(result[result.columns[i]])
asdf.append(rnd.randint(int(result[result.columns[i]].min()), int(result[result.columns[i]].max())))
# -
len(data)
# Convert to a NumPy array
data = np.array(asdf)
data.shape
# Reshape data
data = data.reshape(-1,16)
# Turn data into a dataframe
s = pd.DataFrame(data, columns=result.columns)
# Concatenate with result
fi = pd.concat([result, s])
fi.to_csv('data/부실기업.csv', index=False)
fi
| 8,588 |
/.ipynb_checkpoints/Aug31_deConvolution_noise-checkpoint.ipynb
|
b1f4aa8622b1d9c4a869830c16ff56bee45a4536
|
[] |
no_license
|
toej93/thesis_work_daily
|
https://github.com/toej93/thesis_work_daily
| 0 | 0 | null | null | null | null |
Jupyter Notebook
| false | false |
.py
| 20,441 |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.15.2
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import numpy as np
import h5py
import matplotlib.pyplot as plt
from Functions import *
# %matplotlib inline
plt.rcParams['figure.figsize'] = (5.0, 4.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
# %load_ext autoreload
# %autoreload 2
# +
train_x_orig, train_y, test_x_orig, test_y, classes = load_data()
train_x_flatten = train_x_orig.reshape(train_x_orig.shape[0], -1).T # The "-1" makes reshape flatten the remaining dimensions
test_x_flatten = test_x_orig.reshape(test_x_orig.shape[0], -1).T
train_x = train_x_flatten/255. # normalize
test_x = test_x_flatten/255.
n_x = 12288 # num_px * num_px * 3
n_h = 7
n_y = 1
# -
# # Single-hidden-layer neural network
def L_layer_model(layer_dims,X,Y,print_cost = True,learning_rate = 0.0075,iteration_number = 3000):
np.random.seed(1)
parameters = initialize_parameters_deep(layer_dims)
caches = []
costs = []
grads = {}
for i in range(iteration_number):
AL,caches = L_model_forward(X,parameters)
cost = compute_cost(AL,Y)
grads = L_model_backward(AL,Y,caches)
if i % 100 == 0 and print_cost == True:
print("costs after " + str(i) + " iterations:" + str(cost))
costs.append(cost)
parameters = update_parameters(parameters, grads, learning_rate)
    ## Plot the cost vs. iteration_number curve:
plt.plot(np.squeeze(costs))
plt.ylabel('cost')
plt.xlabel('iterations (per tens)')
plt.title("Learning rate =" + str(learning_rate))
plt.show()
return parameters
layers_dims = [12288, 7, 1]
parameters = L_layer_model(layer_dims=layers_dims,X = train_x,Y=train_y,iteration_number = 2500)
# # Deep neural network
pred_test = predict(test_x, test_y, parameters)
layers_dims = [12288, 20, 7, 5, 1]
# **The function itself is unchanged: just modify the layer-dimension array and call the same function with different arguments.**
parameters = L_layer_model(layer_dims=layers_dims,X = train_x,Y=train_y,iteration_number = 2500)
pred_test = predict(test_x, test_y, parameters)
print_mislabeled_images(classes, test_x, test_y, pred_test) # show a few mislabeled images
ependent directional response.
For given angles and polarization direction, use the model of the
directional and polarization gains of the antenna to generate a
function for the interpolated response of the antenna with respect to
frequency. Used with the `frequency_response` method to calculate
effective heights.
Parameters
----------
theta : float
Polar angle (radians) from which a signal is arriving.
phi : float
Azimuthal angle (radians) from which a signal is arriving.
polarization : array_like
Normalized polarization vector in the antenna coordinate system.
Returns
-------
function
A function which returns complex-valued voltage gains for given
frequencies, using the values of incoming angle and polarization.
See Also
--------
ARAAntenna.frequency_response : Calculate the (complex) frequency
response of the antenna.
"""
# e_theta = [np.cos(theta) * np.cos(phi),
# np.cos(theta) * np.sin(phi),
# -np.sin(theta)]
# e_phi = [-np.sin(phi), np.cos(phi), 0]
# theta_factor = np.dot(polarization, e_theta)
# phi_factor = np.dot(polarization, e_phi)
theta_factor = 1
phi_factor = 1
theta_gains = complex_bilinear_interp(
x=np.degrees(theta), y=np.degrees(phi),
xp=response_zens,
yp=response_azis,
fp=theta_response,
method='cartesian'
)
phi_gains = complex_bilinear_interp(
x=np.degrees(theta), y=np.degrees(phi),
xp=response_zens,
yp=response_azis,
fp=phi_response,
method='cartesian'
)
freq_interpolator = lambda frequencies: complex_interp(
x=frequencies, xp=response_freqs,
fp=theta_factor*theta_gains + phi_factor*phi_gains,
method='euler', outer=0
)
return freq_interpolator
def frequency_response(frequencies):
"""
Calculate the (complex) frequency response of the antenna.
Rather than handling the entire frequency response of the antenna, this
method is being used to convert the frequency-dependent gains from the
`directional_response` method into effective heights.
Parameters
----------
frequencies : array_like
1D array of frequencies (Hz) at which to calculate gains.
Returns
-------
array_like
Complex gains in voltage for the given `frequencies`.
See Also
--------
ARAAntenna.directional_response : Generate the (complex) frequency
dependent directional response.
"""
# From AraSim GaintoHeight function, with gain calculation moved to
# the directional_response method.
# gain=4*pi*A_eff/lambda^2 and h_eff=2*sqrt(A_eff*Z_rx/Z_air)
# Then 0.5 to calculate power with heff (cancels 2 above)
heff = np.zeros(len(frequencies))
# The index of refraction in this calculation should be the index of
# the ice used in the production of the antenna model.
n = 1.78
heff[frequencies!=0] = np.sqrt((3e8/frequencies[frequencies!=0]/n)**2
* n*50/377 /(4*np.pi))
return heff
# -
# ## Debug FFT
# ### FFT function
def doFFT(time, volts):
fft = scipy.fft.rfft(np.array(volts))
dT = abs(time[1]-time[0])
freq = 1000*scipy.fft.rfftfreq(n=len(time), d=dT)
return fft, freq, dT
# ### Inverse FFT
def doInvFFT(spectrum):
fft_i_v= scipy.fft.irfft(spectrum)
return fft_i_v
# ## Get filter response
def interpolate_filter(frequencies):
"""
Generate interpolated filter values for given frequencies.
Calculate the interpolated values of the antenna system's filter gain
data for some frequencies.
Parameters
----------
frequencies : array_like
1D array of frequencies (Hz) at which to calculate gains.
Returns
-------
array_like
Complex filter gain in voltage for the given `frequencies`.
"""
ARAfilter = ara.antenna.ALL_FILTERS_DATA
filt_response = ARAfilter[0]
filt_freqs = ARAfilter[1]
return complex_interp(
x=frequencies, xp=filt_freqs, fp=filt_response,
method='euler', outer=0
)
# ## Define de-dispersion function
def deDisperse_filter(time, voltage):
fft_v, fft_f, dT = doFFT(time,voltage)
response = np.array(interpolate_filter(fft_f*1E6))
response = np.divide(response,abs(response))
deDis_wf = np.divide(fft_v,response)
deDis_wf = np.nan_to_num(deDis_wf)
deDis_wf = doInvFFT(deDis_wf)
return time, deDis_wf
def deDisperse_antenna(time, voltage, theta, phi):
fft_v, fft_f, dT = doFFT(time, voltage)
dir_res = directional_response(theta,phi)(fft_f*1E6)
heff = dir_res * frequency_response(fft_f*1E6)
response = dir_res*heff
response = np.divide(response,abs(response))
deDis_wf = np.divide(fft_v,response)
deDis_wf = np.nan_to_num(deDis_wf)
deDis_wf = doInvFFT(deDis_wf)
return time, deDis_wf
# ## Define function
# ## Simultaneous de-dispersion
def deDisperse(time, voltage, theta, phi):
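    # Band-limit the trace, FFT it, build the combined (here, summed) filter and
    # antenna response, normalize that response to unit magnitude so only its
    # phase remains, divide it out of the spectrum, and inverse-FFT the result.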
sampRate = len(time)/(max(time)-min(time))
b,a = signal.bessel(4, [0.15,0.4], 'bandpass', analog=False, fs=sampRate)
voltage = signal.lfilter(b, a, voltage)
fft_v, fft_f, dT = doFFT(time,voltage)
response_filter = np.array(interpolate_filter(fft_f*1E6))
dir_res = directional_response(theta,phi)(fft_f*1E6)
heff = dir_res * frequency_response(fft_f*1E6)
response_antenna = dir_res*heff
response = response_filter + response_antenna
response = np.divide(response,abs(response))
deDis_wf = np.divide(fft_v,response)
deDis_wf = np.nan_to_num(deDis_wf)
deDis_wf = doInvFFT(deDis_wf)
return time, deDis_wf
# # Debug using PyREx event
# ### Create impulse
det = ara.HexagonalGrid(station_type=ara.RegularStation,
stations=1, lowest_antenna=-100)
det.build_antennas(power_threshold=-6.15)
p = pyrex.Particle(particle_id=pyrex.Particle.Type.electron_neutrino,
vertex=[1002.65674195, -421.95118348, -586.0953201],
direction=[-0.90615395, -0.41800062, -0.06450191],
energy=1e9)
p.interaction.kind = p.interaction.Type.charged_current
p.interaction.em_frac = 1
p.interaction.had_frac = 0
gen = pyrex.ListGenerator(pyrex.Event(p))
kern = pyrex.EventKernel(antennas=det, generator=gen)
det.clear(reset_noise=True)
kern.event()
plt.plot(output.times,output.values)
len(output.times)
# +
phi = np.deg2rad(0)
theta = np.deg2rad(0)
wform = pd.read_pickle("./wform_forDebug_PyREx.pkl")
sig = pyrex.Signal(wform["time"]*1E-9,wform.voltage*1E-3,'voltage') # times in seconds
ant = ara.VpolAntenna(name="Dummy Vpol", position=(0, 0, 0), power_threshold=0)
sig = ant.apply_response(sig, direction=np.array([1,0,0]), polarization=np.array([0,0,-1]), force_real=True)
output = ant.front_end(sig)
plt.figure(figsize=(10,5))
plt.xlabel("Time [ns]", fontsize=12)
plt.ylabel("Amplitude [mV]", fontsize=12)
b,a = signal.bessel(4, [0.15,0.4], 'bandpass', analog=False, fs = len(wform["time"])/(max(wform["time"])-min(wform["time"])))
filtered_noise = signal.lfilter(b, a, wform.voltage*5E2)
deD_t, deD_v = deDisperse(wform["time"], wform.voltage, theta, phi)
time = output.times*1E9 #in ns
voltage = output.values*1E3 #in mV
deD_t, deD_v = deDisperse(time,voltage, theta, phi)
plt.plot(wform["time"],wform.voltage*1E2, "-.",lw=3,color="C3", label="Original impulse [arb. units]")
plt.plot(time, voltage,"--",lw=3,color="C1", label = "Signal after electronics response")
plt.plot(deD_t,deD_v,lw=3,color="C0",label ="De-dispersed signal (antenna+filter)")
plt.plot(output.times*1E9,filtered_noise, "-.",lw=3,color="C2", label="Bandpassed (Bessel, 200-500 MHz) impulse signal [arb.]")
plt.xlim(-20,40)
plt.ylim(-250,250)
plt.legend()
plt.grid(True)
plt.title("AraSim neutrino event with noise", fontsize=15)
plt.tight_layout()
# plt.savefig("./plots/deDispersion/DDP_PyREx_0.pdf")
# -
# !tar czf myfiles.tar.gz ./plots/deDispersion/*.pdf
from scipy import signal
imp = signal.unit_impulse(256, 125)
b, a = signal.butter(4, 0.2)
response = 1E3*signal.lfilter(b, a, imp)
t = np.linspace(-50, 50, 256)
# plt.plot(np.arange(-50, 50), imp)
plt.plot(t, response)
plt.xlabel('Time [ns]')
plt.ylabel('Amplitude [$\mu$V/m]')
plt.grid(True)
fs_imp = 256/(100*1E-9)
sig = pyrex.Signal(t*1E-9,response*1E-6,'voltage') # times in seconds
ant = ara.VpolAntenna(name="Dummy Vpol", position=(0, 0, 0), power_threshold=0)
sig = ant.apply_response(sig, direction=np.array([1,0,0]), polarization=np.array([0,0,-1]), force_real=True)
output = ant.front_end(sig)
b,a = signal.bessel(4, [0.15,0.4], 'bandpass', analog=False, fs=2.56)
filtered = signal.lfilter(b, a, response)
# +
plt.figure(figsize=(10,5))
plt.xlabel("Time [ns]", fontsize=12)
plt.ylabel("Amplitude [mV]", fontsize=12)
time = output.times*1E9 #in ns
voltage = output.values*1E3 #in mV
deD_t, deD_v = deDisperse(time,voltage, theta, phi)
plt.plot(deD_t,deD_v,lw=3,label = "De-dispersed signal (antenna+filter)")
plt.plot(time, voltage,"--",lw=3, label = "Signal after electronics response")
plt.plot(t, response/1E1, "-.",lw=3,color="C3", label="Original impulse [arb. units]")
plt.plot(output.times*1E9,filtered, "--",lw=3, label="Bandpassed (Bessel, 200-500 MHz) impulse signal")
plt.xlim(-20,20)
plt.legend()
plt.grid(True)
plt.title("AraSim neutrino event with noise", fontsize=15)
plt.tight_layout()
plt.show()
# -
| 12,165 |
/FraudDetectionNotebook.ipynb
|
acfc55d665bd03bfff0324705b936dc890b509b7
|
[] |
no_license
|
adinalini/P2.5-Financial-Data-Fraud
|
https://github.com/adinalini/P2.5-Financial-Data-Fraud
| 3 | 0 | null | null | null | null |
Jupyter Notebook
| false | false |
.py
| 34,945 |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.15.2
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import spacy
from spacy import displacy
from spacy.tokens import Span
from spacy.matcher import PhraseMatcher
from my_nlp_lib import spacy_utilities as su
from my_nlp_lib import text_utilities as tu
su.print_versions()
# ## First let us get the first chapter of Victor Hugo "Les Misérables"
first_section_chapters = tu.get_chapters_dic_of_first_section_of_french_book()
first_section_chapter_1 = first_section_chapters['01) Chapitre I']
my_offset = 5
print(f"## OK : let's print {my_offset} first and last lines of the first chapter of the section :")
tu.print_head_and_tail(first_section_chapter_1, offset=my_offset)
# ## let's now use spacy to analyse the first chapter
# load the normal french model
nlp = spacy.load('fr_core_news_sm')
nlp.pipeline
chapter_1 = " ".join(first_section_chapter_1)
chapter_1 = chapter_1.replace('\n\n', '\n')
# print(chapter_1)
doc = nlp(chapter_1)
# ### let's list the named entities found by spacy
for e in doc.ents:
print(f'{e.text:{25}} ({e.start:{5}},{e.end:{5}}) {e.label_:{10}} {spacy.explain(e.label_):{40}}')
# #### we can see some wrong named entities
# * *Vrai* is obviously not a location
# * *Fut* is not a person
# * *Myriel* is a person and not a location
# * *Baptistine* is also a person and not a location
# * *M.* is not a person itself, but it can be used as a cue that the next token is likely a person (see the sketch below)
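#
# #### A small sketch: using *M.* as a cue for a person name
#
# The `Matcher` pattern below is only an illustration of that idea (it is not
# required for the rest of the notebook); whether it matches depends on how the
# French tokenizer splits "M.".

# +
from spacy.matcher import Matcher

title_matcher = Matcher(nlp.vocab)
# hypothetical pattern: the literal token "M." followed by a capitalized token
title_matcher.add("M_PERSON", [[{"TEXT": "M."}, {"IS_TITLE": True}]])

demo_doc = nlp("M. Myriel était évêque de Digne.")
for _, start, end in title_matcher(demo_doc):
    print(demo_doc[start:end])
# -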
# +
def print_entities_in_sentence(spacy_doc):
for i,sentence in enumerate(spacy_doc.sents):
phrase = nlp(sentence.text)
if phrase.ents:
print(f'##### [{i:0>{3}}] ENTITY FOUND IN THIS SENTENCE :')
displacy.render(phrase, style='ent', jupyter=True)
else:
print(f'##### [{i:0>{3}}] NO ENTITY FOUND IN THIS SENTENCE :{phrase.text}')
print_entities_in_sentence(doc)
# -
# ### Adding a Named Entity to a Span
# let's generate the code and store entities to remove
entities_to_remove = []
for e in doc.ents:
if e.label_ == 'LOC' and e.text == 'Myriel':
entities_to_remove.append(e)
print(f'new_entities.append(Span(doc,{e.start:{3}},{e.end:{3}}, label=PER))')
#let's remove the 'bad' Myriel POS entities
doc.ents = [e for e in doc.ents if e not in entities_to_remove]
# +
# Get the hash value of the PER entity label
PER = doc.vocab.strings[u'PER']
new_entities = []
# Create a Span for the new entity and add it to the list
new_entities.append(Span(doc,308,309, label=PER))
new_entities.append(Span(doc,455,456, label=PER))
new_entities.append(Span(doc,581,582, label=PER))
new_entities.append(Span(doc,624,625, label=PER))
new_entities.append(Span(doc,664,665, label=PER))
new_entities.append(Span(doc,678,679, label=PER))
# let'signore some of them on purpose
# new_entities.append(Span(doc,685,686, label=PER))
# new_entities.append(Span(doc,841,842, label=PER))
# new_entities.append(Span(doc,1150,1151, label=PER))
# Add the entity to the existing Doc object
doc.ents = list(doc.ents) + new_entities
# -
for e in doc.ents:
print(f'{e.text:{25}} {e.label_:{10}} {spacy.explain(e.label_):{40}}')
# there is a couple of Myriel that where not found by the end of the text
displacy.render(doc, style='ent', jupyter=True)
#but the change is not durable... as soon as you parse again the text BADABOUM
doc = nlp(chapter_1)
for e in doc.ents:
if e.text == 'Myriel':
print(f'{e.text:{25}} {e.label_:{10}} {spacy.explain(e.label_):{40}}')
# ### let's do it in a more durable way !
ruler = nlp.add_pipe("entity_ruler",before='ner')
patterns = [{"label": "PER", "pattern": "M. Myriel"},
{"label": "PER", "pattern": [{"LOWER": "mademoiselle"}, {"LOWER": "Baptistine"}]}]
ruler.add_patterns(patterns)
doc = nlp(chapter_1)
for e in doc.ents:
print(f'{e.text:{25}} {e.label_:{10}} {spacy.explain(e.label_):{40}}')
print_entities_in_sentence(doc)
# ### Much better, but what if we used the large model as is?
# load the big fat french model
# https://spacy.io/models/fr#fr_core_news_lg
nlp_large = spacy.load('fr_core_news_lg')
doc_large = nlp_large(chapter_1)
for e in doc_large.ents:
print(f'{e.text:{25}} {e.label_:{10}} {spacy.explain(e.label_):{40}}')
print_entities_in_sentence(doc_large)
| 4,494 |
/云计算与容器/docker基本命令.ipynb
|
2e32a91436141ca560ac5bf8b45617c07fbc86a0
|
[] |
no_license
|
yangchnet/linote
|
https://github.com/yangchnet/linote
| 0 | 0 | null | null | null | null |
Jupyter Notebook
| false | false |
.py
| 1,439 |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.15.2
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ## Basic Docker commands
# * Start the docker service
#       sudo systemctl start docker
# * List local images
#       sudo docker images
# * List running containers
#       sudo docker ps
# * List all containers
#       sudo docker ps -a
# * Stop a running container
#       sudo docker stop container_name
# * Start a container
#       sudo docker start container_name
# * Remove an image
#       sudo docker rmi image_name
# * Attach to a running container
#       sudo docker attach container_name
# * Export a container
#       sudo docker export container_id > name.tar
# * Import a container snapshot
#       cat name.tar | sudo docker import - test/ubuntu:v1.0
# * Import from a URL
#       sudo docker import http://example.com/exampleimage.tgz example/imagerepo
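# * Example: a typical session chaining the commands above (the image name `ubuntu` and the container name `my_container` are just placeholders)
#       sudo docker pull ubuntu
#       sudo docker run -it --name my_container ubuntu /bin/bash
#       sudo docker stop my_container
#       sudo docker start my_container
#       sudo docker attach my_container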
#
ted last 5 rows
# + id="2Hmi1rXXXWX2" outputId="826bf963-a289-4227-959b-649e5851b224" colab={"base_uri": "https://localhost:8080/"}
#Understand shape and size of the dataset
print(logi.shape)
print(logi.size)
# + id="lDLwgqhsX3yc" outputId="4231c58c-bbf1-47bc-d0d0-acbe76cefffc" colab={"base_uri": "https://localhost:8080/", "height": 297}
#Summary of the numerical columns
logi.describe()
# + id="YZuJrRq2YGAm" outputId="0da19c64-ec1a-40b0-d5e9-a0cf74aaae54" colab={"base_uri": "https://localhost:8080/"}
#More info about the dataset
logi.info()
# + id="t3q1AtiGYPc0" outputId="d63cbd12-a522-469f-8df3-d520a3ba444e" colab={"base_uri": "https://localhost:8080/"}
#Number of missing values in each column
logi.isnull().sum()
# + id="ta7O4UT5YiwQ"
df=logi.dropna() #Drop null values
# + id="GOsU661yYzmb" outputId="913d25d3-2830-4cdf-f439-570fc96dc8f2" colab={"base_uri": "https://localhost:8080/"}
df.shape #Shape of dataframe after null values removed
# + [markdown] id="UZs46pntZg8P"
# DESTINATION STARTING AND STOPPING
# + id="x45CR1WOZDTV" outputId="468191fa-e46c-4664-cbd2-dca5d8d15641" colab={"base_uri": "https://localhost:8080/"}
print(df['START*'].unique())#Name of unique start points
print(len(df['START*'].unique()))#Count of unique start points
# + id="-lt6BH5QerMF" outputId="aed35549-06b5-4e71-9c25-d6dc4b68c819" colab={"base_uri": "https://localhost:8080/"}
print(df['STOP*'].unique())#Name of unique stop points
print(len(df['STOP*'].unique()))#Count of unique stop points
# + id="RL2koev7e-M-" outputId="93d65c1d-55c3-41ad-97f9-9a49e17fba98" colab={"base_uri": "https://localhost:8080/"}
df['START*'].value_counts().head(5)
# + id="146PnWemfjQs" outputId="f067df74-de8a-4fa9-b1d8-43192c3a5163" colab={"base_uri": "https://localhost:8080/"}
df['STOP*'].value_counts().head(5)
# + id="1OC4hXvCfnhy" outputId="edfd3d1e-8b46-4ed6-8342-6fd2f5333a63" colab={"base_uri": "https://localhost:8080/"}
# farthest start stop pair
# Drop unknown location
df2=df[df['START*'] != 'Unknown Location']
df2=df2[df2['STOP*'] != 'Unknown Location']
df2.groupby(['START*' , 'STOP*'])['MILES*'].sum().sort_values(ascending=False).head(10)
# + id="QfFSp7ckKQeI" outputId="a6aa98d7-a5c1-4725-8b67-24649c6ba770" colab={"base_uri": "https://localhost:8080/"}
df2.groupby(['START*' , 'STOP*'])['MILES*'].size().sort_values(ascending=False).head(10) #Most popular start stop pair
# + id="6jF8bjNhLS5j" outputId="ffd55060-ae0d-4586-e044-70e337902a82" colab={"base_uri": "https://localhost:8080/"}
df.loc[:, 'START_DATE*'] = pd.to_datetime(df['START_DATE*'], format='%m/%d/%Y %H:%M')
df.loc[:, 'END_DATE*'] = pd.to_datetime(df['END_DATE*'], format='%m/%d/%Y %H:%M')
# + id="nBpVUwQ9cKHZ" outputId="28d1bfe3-8a8a-4e67-db1b-d8ea65a250aa" colab={"base_uri": "https://localhost:8080/"}
df.info()
# + id="0amfA0m7cbeV" outputId="2b050038-b1b2-4256-9d2d-8fe72a74324a" colab={"base_uri": "https://localhost:8080/"}
#Calculate duration of rides
df['DIFF']=df['END_DATE*']-df['START_DATE*']
# + id="qdB4oQ3qc7xj" outputId="e96ed137-eee3-4db0-bafd-5126d794ddee" colab={"base_uri": "https://localhost:8080/"}
#Convert duration to a number of minutes
df.loc[:,'DIFF'] = df['DIFF'].apply(lambda x: pd.Timedelta.to_pytimedelta(x).days*24*60 + pd.Timedelta.to_pytimedelta(x).seconds/60)
# + id="T7s7efqAdxDz" outputId="fec57c34-e0a6-4943-c0bc-06974996d97c" colab={"base_uri": "https://localhost:8080/"}
df['DIFF'].head()
# + id="Vtzao2Ixd7Rv" outputId="86174962-03b1-4675-cf42-7fe7ae4b3da4" colab={"base_uri": "https://localhost:8080/"}
df['DIFF'].describe()
# + id="ENUS6u9NeG72" outputId="be6c3053-786b-4afa-9053-3fd16a5e12f6" colab={"base_uri": "https://localhost:8080/"}
#Capture hour day, month
df['month'] = pd.to_datetime(df['START_DATE*']).dt.month
df['day'] = pd.to_datetime(df['START_DATE*']).dt.day
df['hour'] = pd.to_datetime(df['START_DATE*']).dt.hour
df['year'] = pd.to_datetime(df['START_DATE*']).dt.year
# + id="MnhpnbV-zjce" outputId="62f2c06d-7603-4683-ea64-4934191c7458" colab={"base_uri": "https://localhost:8080/", "height": 214}
#Capture day and week and rename to week day
df['day_of_week'] = pd.to_datetime(df['START_DATE*']).dt.day_of_week
days={0:'Mon',1:'Tue',2:'Wed',3:'Thurs',4:'Fri',5:'Sat',6:'Sun'}
df['day_of_week'] = df['day_of_week'].apply(lambda x:days[x])
# + id="_6CgycdVaJRK" outputId="a7dbf4f0-43eb-4f77-c2d2-e4d314553934" colab={"base_uri": "https://localhost:8080/", "height": 350}
#Rename the numbers in the month column to calendar month abbreviations
import calendar
df['month']=df['month'].apply(lambda x:calendar.month_abbr[x])
df.head()
# + id="kqiHN2UNfioc" outputId="f5a31ede-1426-4cbd-e0f7-054b59cfc56c" colab={"base_uri": "https://localhost:8080/"}
#getting avg distance covered per month
df.groupby('month').mean()['MILES*'].sort_values(ascending=False)
# + id="uf-V4JUcgA60" outputId="c9655796-e0a5-4bb6-d3cb-b7a72f52c316" colab={"base_uri": "https://localhost:8080/"}
#Calculate trip speed for each trip
df['Duration_hours']=df['DIFF']/60
df['Speed_KM']=df['MILES*']/df['Duration_hours']
# + [markdown] id="kgksWZmnhcNd"
# CATEGORY AND PURPOSE
# + id="bty5lFQIhbY0" outputId="252d721d-bb45-4da3-d4f8-2562e50776b9" colab={"base_uri": "https://localhost:8080/"}
df['CATEGORY*'].value_counts()
# + id="-gLxqcpyhp6i" outputId="94185496-1554-46a9-9d08-a5d772b91b7b" colab={"base_uri": "https://localhost:8080/"}
#purpose
df['PURPOSE*'].value_counts()
# + id="yOdZ8eKah0zN" outputId="2c4e388c-44f0-40fa-9ad1-8f73a1c1a081" colab={"base_uri": "https://localhost:8080/"}
#Average distance travelled for each activity
df.groupby('PURPOSE*').mean()['MILES*'].sort_values(ascending=False)
# + id="DNeES5tPi-j1" outputId="e88f190f-fdc2-4583-d3ad-a1539cc575f7" colab={"base_uri": "https://localhost:8080/"}
#Miles earned per Purpose
df.groupby('PURPOSE*').sum()['MILES*'].sort_values(ascending=False)
# + id="1EEvmpnijNIh" outputId="7d9c33e5-f54b-4211-e5a8-8afa9c455bcb" colab={"base_uri": "https://localhost:8080/"}
#Miles earned per Catagory
df.groupby('CATEGORY*').sum()['MILES*'].sort_values(ascending=False)
| 6,901 |
/5.0 Stock Price Prediction/11_0_Stock_Price_Prediction_by_Stack_LSTM.ipynb
|
fb3ea35583e64d56beb0b6f566d7d8e979b1c76f
|
[] |
no_license
|
SmitaPaul7000/All-NLP
|
https://github.com/SmitaPaul7000/All-NLP
| 1 | 0 | null | null | null | null |
Jupyter Notebook
| false | false |
.py
| 203,082 |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.15.2
# kernelspec:
# display_name: Python [default]
# language: python
# name: python3
# ---
def roll_futures(df1, df2):
'''Given a dataframe of front month contract and second month contract, return
one dataframe of front month contract observations, where rows are replaced with values
from the second month for dates that fall between my desired "rolling period" and the expiration
of the 'front month' contract'''
def find_turning_point(df):
'''Find tops and bottoms of cycles, defined as local maxima/minima'''
def record_trend(start, stop, direction):
    '''Returns a dictionary with date, price tuples for start and end dates of trends'''
    # Assumptions: `up` is a direction flag defined elsewhere in the notebook, and
    # the start/end prices are taken from the 'Low' column of the start/stop rows.
    if direction == up:
        trend = {
            'Start': (start, start['Low']),
            'Stop': (stop, stop['Low']),
            'Start Price': start['Low'],
            'End Price': stop['Low'],
        }
        return trend
# +
from scipy.signal import argrelextrema  # local extrema detection used below
# for local maxima
maxs = argrelextrema(decomp.trend.values, np.greater, order=5)
max_dates = [wheat['2016'].iloc[i].index for i in maxs]
maxes = [date for date in wheat['2016'].index if date in max_dates[0]]
# for local minima
mins = argrelextrema(decomp.trend.values, np.less, order=5)
min_dates = [wheat['2016'].iloc[i].index for i in mins]
mines = [date for date in wheat['2016'].index if date in min_dates[0]]
# -
def identify_trend(df, period=30):
'''Given time series price data, identify upward and downward trending markets'''
# Calculate change in moving average
    df['{}dma_slope'.format(period)] = df['{}dma'.format(period)].pct_change()
    # Trend state flags and accumulators (assumed initial values)
    uptrends, downtrends = 0, 0
    uptrend_starts, uptrend_stops = [], []
    downtrend_starts, downtrend_stops = [], []
    is_up = is_down = is_flat = is_unclassified = False
for i, row in df.iterrows():
if is_up == False:
if row['{}dma_slope'.format(period)] > 0: # check for the start of an uptrend
uptrends += 1
is_up = True
is_down = False
is_flat = False
is_unclassified = False
uptrend_starts.append(i)
elif is_up == True:
if row['{}dma_slope'.format(period)] <= 0: # check for the end of an uptrend
is_up = False
uptrend_stops.append(i)
if is_down == False:
if row['{}dma_slope'.format(period)] < 0: # check for the start of a downtrend
downtrends += 1
is_up = False
is_down = True
downtrend_starts.append(i)
elif is_down == True:
if row['{}dma_slope'.format(period)] >= 0: # check for the end of a downtrend
is_down = False
downtrend_stops.append(i)
def trend_tracker(df, period=30):
'''Given time series price data, identify upward and downward trending markets'''
# Calculate change in moving average
df['{}dma_slope'.format(period)] = df['{}dma'.format(period)].pct_change()
# Initialize counters for uptrends and downtrends...
uptrends = 0
downtrends = 0
ranges = 0
# ...lists to contain start and stop dates...
uptrend_starts = []
uptrend_stops = []
downtrend_starts = []
downtrend_stops = []
flat_starts = []
flat_stops = []
# ...and booleans to indicate if the current market is in an uptrend or downtrend
is_up = False
is_down = False
is_flat = False
for i, row in df.iterrows():
if is_flat == False:
if (row['{}dma_slope'.format(period)] < 0.5) & (row['{}dma_slope'.format(period)] > -0.5):
ranges += 1
is_flat = True
is_up = False
is_down = False
flat_starts.append(i)
elif is_flat == True:
if (row['{}dma_slope'.format(period)] < 0.5) | (row['{}dma_slope'.format(period)] > -0.5):
flat_stops.append(i)
if is_up == False:
if row['{}dma_slope'.format(period)] > 0.5: # check for the start of an uptrend
uptrends += 1
is_up = True
is_down = False
is_flat = False
is_unclassified = False
uptrend_starts.append(i)
elif is_up == True:
if row['{}dma_slope'.format(period)] <= 0: # check for the end of an uptrend
is_up = False
uptrend_stops.append(i)
if is_down == False:
if row['{}dma_slope'.format(period)] < -0.5: # check for the start of a downtrend
downtrends += 1
is_up = False
is_down = True
is_flat = False
downtrend_starts.append(i)
elif is_down == True:
if row['{}dma_slope'.format(period)] >= 0: # check for the end of a downtrend
is_down = False
downtrend_stops.append(i)
return uptrends, downtrends, ranges, uptrend_starts, uptrend_stops, downtrend_starts, downtrend_stops, flat_starts, flat_stops
# # Code from DY
# +
import numpy as np, pandas as pd
import matplotlib.pyplot as plt
import math
from keras.models import Sequential
from keras.layers import Dense, Dropout
from keras.layers import LSTM
from sklearn.preprocessing import MinMaxScaler
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split
# %matplotlib inline
# -
def create_window(data, window_size = 1):
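    # Sliding-window helper: appends `window_size` forward-shifted copies of the
    # series as extra columns, so the earlier columns can serve as inputs and the
    # last column as the prediction target; rows made NaN by shifting are dropped.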
data_s = data.copy()
for i in range(window_size):
data = pd.concat([data, data_s.shift(-(i + 1))],
axis = 1)
data.dropna(axis=0, inplace=True)
return(data)
# +
window_size = 1
casual_window = create_window(casual, window_size = window_size)
X = casual_window.iloc[:, 0:1]
y = casual_window.iloc[:, 1]
X = np.reshape(X.values, (X.shape[0], 1, X.shape[1]))
# Will use internal validation inside of Keras -- this is the equivalent
# X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = .3)
# +
# create and fit the LSTM network
model = Sequential()
# model.add(LSTM(4, input_shape=(1, window_size-1)))
model.add(LSTM(input_shape=(X.shape[1], X.shape[2]), units=25, activation='relu'))
model.add(Dropout(0.2))
model.add(Dense(1))
model.compile(loss='mean_squared_logarithmic_error', optimizer='adam')
history = model.fit(x=X, y=y.values, epochs=800, batch_size=70, verbose=0, validation_split=.3)
# -
metrics = pd.DataFrame(history.history, columns=['val_loss', 'loss'])
# +
fig, ax = plt.subplots(nrows=1, ncols=2, figsize=(15, 5))
fig.suptitle("LSTM: Timestep $-1$, Dropout: .2, units: 25, epochs: 1300, validation: .3, loss: RMSLogE")
y_hat = model.predict(X)
results = pd.DataFrame(list(zip(y_hat, X)), columns=["y_hat", "var"])
results['y_hat'] = results['y_hat'].map(lambda y_hat: y_hat[0])
results['var'] = results['var'] = results['var'].map(lambda var: var[0][0])
results.plot(ax=ax[1])
metrics.plot(ax=ax[0])
# -
model = Sequential()
model.add(Dense(8, input_dim=4, activation='relu'))
model.add(Dense(3, activation='softmax'))
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
| 7,574 |
/MachineLearning/How_to_predict_cancer_survival_with_BigQueryML.ipynb
|
eaecabbf1e8b3048868395ea6a2a8ed4a653a79c
|
[
"Apache-2.0"
] |
permissive
|
isb-cgc/Community-Notebooks
|
https://github.com/isb-cgc/Community-Notebooks
| 21 | 17 |
Apache-2.0
| 2023-07-18T17:09:28 | 2023-06-26T13:16:52 |
Jupyter Notebook
|
Jupyter Notebook
| false | false |
.py
| 102,825 |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.15.2
# kernelspec:
# display_name: Python 3.9.5 64-bit
# name: python3
# ---
# importing modules
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from pandas_profiling import ProfileReport
# creating DataFrame
df = pd.read_csv('titanic.csv')
df.head()
df.shape
df.info()
# Cleaning
df.isnull().sum()
df['Cabin'].unique()
# Dealing with NaN values
df['Cabin'].fillna('Not Available', inplace=True)
df['Age'].fillna(df['Age'].mean(), inplace=True)
df['Embarked'].fillna('not available', inplace=True)
df.info()
# Generates a file called report.html which have a great analysis report
profile = ProfileReport(df, title="testreport")
profile.to_file(output_file='report.html')
# +
# For Horizontal Bar Projection
features_dict = {'Female Survived': pd.crosstab(df['Sex'], df['Survived'])[1][0],
'Female Died': pd.crosstab(df['Sex'], df['Survived'])[0][0],
'Male Survived': pd.crosstab(df['Sex'], df['Survived'])[1][1],
'Male Died': pd.crosstab(df['Sex'], df['Survived'])[0][1],
'Age < MeanAge Survived': pd.crosstab(df['Age'] < df.Age.mean(), df['Survived'])[1][1],
'Age < MeanAge Died': pd.crosstab(df['Age'] < df.Age.mean(), df['Survived'])[0][1],
'Age > MeanAge Survived': pd.crosstab(df['Age'] > df.Age.mean(), df['Survived'])[1][1],
'Age > MeanAge Died': pd.crosstab(df['Age'] > df.Age.mean(), df['Survived'])[0][1],
'Have Sibling/Spouse Survived': pd.crosstab(df['SibSp'] > 0, df['Survived'])[1][1],
'Have Sibling/Spouse Died': pd.crosstab(df['SibSp'] > 0, df['Survived'])[0][1],
'Have Parents/Children Survived': pd.crosstab(df['Parch'] > 0, df['Survived'])[1][1],
'Have Parents/Children Died': pd.crosstab(df['Parch'] > 0, df['Survived'])[0][1]
}
features_dict_x = list(features_dict.keys())
features_dict_y = list(features_dict.values())
fig, ax = plt.subplots(figsize =(16, 9))
ax.barh(features_dict_x, features_dict_y)
ax.xaxis.set_ticks_position('none')
ax.yaxis.set_ticks_position('none')
ax.xaxis.set_tick_params(pad = 5)
ax.yaxis.set_tick_params(pad = 10)
ax.grid(b = True, color ='grey', linestyle ='-.', linewidth = 0.5, alpha = 0.2)
for i in ax.patches:
plt.text(i.get_width()+0.2, i.get_y()+0.5,
str(round((i.get_width()), 2)),
fontsize = 10, fontweight ='bold',
color ='grey')
ax.set_title('Features vs Survived',
loc ='left', )
plt.show()
following parameters based on your notebook, execution environment, or project. BigQuery ML must create and store classification models, so be sure that you have write access to the locations stored in the "bq_dataset" and "bq_project" variables.
# + id="s1lPYIPhdd29"
# set the google project that will be billed for this notebook's computations
google_project = 'google-project' ## CHANGE ME
# bq project for storing ML model
bq_project = 'bq-project' ## CHANGE ME
# bq dataset for storing ML model
bq_dataset = 'scratch' ## CHANGE ME
# name of temporary table for data
bq_tmp_table = 'tmp_kirc_data'
# name of ML model
bq_ml_model = 'tcga_kirc_ml_reg_model'
# in this example, we'll be using the Ovarian cancer TCGA dataset
cancer_type = 'TCGA-KIRC'
# 14 genes used for prediction model, taken from Li et al.
# Note: one gene (ZIC2) was omitted from the original set of 15 genes due to
# missing values in some TCGA samples.
genes = [
'CCDC137', 'KL', 'FBXO3', 'CDC7', 'IL20RB', 'CDCA3', 'ANAPC5',
'OTOF', 'POFUT2', 'ATP13A1', 'MC1R', 'BRD9', 'ARFGAP1', 'COL7A1'
]
# clinical data table
clinical_table = 'isb-cgc-bq.TCGA_versioned.clinical_gdc_r29'
# RNA seq data table
rnaseq_table = 'isb-cgc-bq.TCGA_versioned.RNAseq_hg38_gdc_r28'
# + [markdown] id="vK3Gmfv6dgL_"
# ## BigQuery Client
#
# Create the BigQuery client.
# + id="vanz3H-vdism"
# Create a client to access the data within BigQuery
client = bigquery.Client(google_project)
# + [markdown] id="SFJaQOPRdksH"
# ## Create a Table with a Subset of the Gene Expression Data
#
# Pull RNA-seq gene expression data from the TCGA RNA-seq BigQuery table, join it with clinical labels, and pivot the table so that it can be used with BigQuery ML. In this example, we will label each sample with the patient's survival time. This prepares the data for linear regression.
#
# Prediction modeling with RNA-seq data typically requires a feature selection step to reduce the dimensionality of the data before training a regression model. However, to simplify this example, we will use a pre-identified set of 14 genes from the study by Li et al. (https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6113007/). Note that one gene was omitted due to missing values.
#
# Creation of a BQ table with only the data of interest reduces the size of the data passed to BQ ML and can significantly reduce the cost of running BQ ML queries. This query also randomly splits the dataset into "training" and "testing" sets with roughly equal sample sizes using the "FARM_FINGERPRINT" hash function in BigQuery. "FARM_FINGERPRINT" generates an integer from the input string. More information can be found [here](https://cloud.google.com/bigquery/docs/reference/standard-sql/hash_functions).
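#
# As a minimal illustration of that hash-based split on its own (the table and
# column names are the ones used throughout this notebook; the LIMIT is
# arbitrary), a query like the sketch below could be run:

# +
split_demo_query = ("""
SELECT
  case_id,
  CASE
    WHEN MOD(ABS(FARM_FINGERPRINT(case_id)), 10) < 5 THEN 'training'
    ELSE 'testing'
  END AS data_partition
FROM `{clinical_table}`
WHERE proj__project_id = '{cancer_type}'
LIMIT 5
""").format(clinical_table=clinical_table, cancer_type=cancer_type)

print(split_demo_query)
# this small query is optional; uncomment to run it
# client.query(split_demo_query).result().to_dataframe()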
# + id="dDaCJKAtIh7P"
# construct and display the query
tmp_table_query = ("""
BEGIN
CREATE OR REPLACE TABLE `{bq_project}.{bq_dataset}.{bq_tmp_table}` AS
SELECT * FROM (
SELECT
labels.submitter_id AS sample,
labels.data_partition AS data_partition,
MAX(labels.survival) AS survival,
ge.gene_name AS gene_name,
-- Multiple samples may exist per case, take the max value
MAX(LOG(ge.HTSeq__FPKM_UQ+1)) AS gene_expression
FROM `{rnaseq_table}` AS ge
INNER JOIN (
SELECT
*
FROM (
SELECT
submitter_id,
demo__days_to_death AS survival,
CASE
WHEN MOD(ABS(FARM_FINGERPRINT(case_id)), 10) < 5 THEN 'training'
WHEN MOD(ABS(FARM_FINGERPRINT(case_id)), 10) >= 5 THEN 'testing'
END AS data_partition
FROM `{clinical_table}`
WHERE
proj__project_id = '{cancer_type}'
AND demo__days_to_death IS NOT NULL
ORDER BY demo__days_to_death DESC
)
) labels
ON labels.submitter_id = ge.case_barcode
WHERE gene_name IN ({genes})
GROUP BY sample, data_partition, gene_name
)
PIVOT (
MAX(gene_expression) FOR gene_name IN ({genes})
);
END;
""").format(
bq_project=bq_project,
bq_dataset=bq_dataset,
bq_tmp_table=bq_tmp_table,
rnaseq_table=rnaseq_table,
clinical_table=clinical_table,
cancer_type=cancer_type,
genes=''.join(["'","','".join(genes),"'"])
)
print(tmp_table_query)
# + id="y6Ecs0nUcnK0"
# this query processes 20 GB and costs approximately $0.10
tmp_table_result = client.query(tmp_table_query)
# + [markdown] id="R1kzKTheG5t6"
# Let's take a look at this subset table. The data has been pivoted such that each of the 14 genes is available as a column that can be "SELECTED" in a query. In addition, the "survival" and "data_partition" columns simplify data handling for model training and evaluation.
# + id="bVK2IOr9KZcc"
# construct and display the query
tmp_table_query = ("""
SELECT
* --usually not recommended to use *, but in this case, we want to see all 14 genes
FROM `{bq_project}.{bq_dataset}.{bq_tmp_table}`
""").format(
bq_project=bq_project,
bq_dataset=bq_dataset,
bq_tmp_table=bq_tmp_table
)
print(tmp_table_query)
# + id="ClmfENZShXOx" colab={"base_uri": "https://localhost:8080/", "height": 1000} outputId="82af2c58-cf92-42d1-c9aa-5ed27e1b1abb"
# this query processes 24 KB of data and costs approximately <$.01
tmp_table_data = client.query(tmp_table_query).result().to_dataframe()
print(tmp_table_data.info())
tmp_table_data
# + [markdown] id="CZq3qQfxHDIp"
# # Train the Machine Learning Model
#
# Now we can train a linear model using BigQuery ML with the data stored in the subset table. This model will be stored in the location specified by the "bq_ml_model" variable, and can be reused to predict samples in the future.
#
# We pass two options to the BQ ML model: model_type and input_label_cols. The model_type option specifies the machine learning model type; in this case, we use "LINEAR_REG" to train a linear regression model. Other model options are documented [here](https://cloud.google.com/bigquery-ml/docs/reference/standard-sql/bigqueryml-syntax-create). The input_label_cols option tells BigQuery that the "survival" column holds each sample's prediction target.
#
# **Warning**: BigQuery ML models can be very time-consuming and expensive to train. Please check your data size before running BigQuery ML commands. Information about BigQuery ML costs can be found [here](https://cloud.google.com/bigquery-ml/pricing).
# + id="oXrTgwJYLP3n"
# construct and display the query
ml_model_query = ("""
CREATE OR REPLACE MODEL `{bq_project}.{bq_dataset}.{bq_ml_model}`
OPTIONS
(
model_type='LINEAR_REG',
input_label_cols=['survival']
) AS
SELECT * EXCEPT(sample, data_partition) -- when training, we only need the label and feature columns
FROM `{bq_project}.{bq_dataset}.{bq_tmp_table}`
WHERE data_partition = 'training' -- using training data only
""").format(
bq_project=bq_project,
bq_dataset=bq_dataset,
bq_ml_model=bq_ml_model,
bq_tmp_table=bq_tmp_table
)
print(ml_model_query)
# + id="3Fviu3X11o6X"
# create ML model using BigQuery
# this query processes 21.6 KB of data and costs approximately <$0.01
ml_model_result = client.query(ml_model_query).result()
print(ml_model_result)
# now get the model metadata
ml_model = client.get_model('{}.{}.{}'.format(bq_project, bq_dataset, bq_ml_model))
print(ml_model)
# + [markdown] id="-2MjxYK6HiSU"
# # Evaluate the Machine Learning Model
# Once the model has been trained and stored, we can evaluate the model's performance using the "testing" dataset from our subset table. Evaluating a BQ ML model is generally less expensive than training.
#
# Use the following query to evaluate the BQ ML model. Note that we're using the "data_partition = 'testing'" clause to ensure that we're only evaluating the model with test samples from the subset table.
#
# BigQuery's ML.EVALUATE function returns several performance metrics: mean absolute error, mean squared error, mean squared log error, median absolute error, r-squared score, and explained variance.
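# + [markdown]
# For intuition, these are the same quantities scikit-learn exposes; the short sketch below (added here as an illustration, using toy numbers rather than the TCGA data) shows how each metric is computed.
# +
import numpy as np
from sklearn.metrics import (mean_absolute_error, mean_squared_error,
                             mean_squared_log_error, median_absolute_error,
                             r2_score, explained_variance_score)

y_true = np.array([100.0, 400.0, 900.0, 1600.0])   # toy survival times (days)
y_pred = np.array([150.0, 350.0, 800.0, 1700.0])   # toy predictions

print('mean absolute error:   ', mean_absolute_error(y_true, y_pred))
print('mean squared error:    ', mean_squared_error(y_true, y_pred))
print('mean squared log error:', mean_squared_log_error(y_true, y_pred))
print('median absolute error: ', median_absolute_error(y_true, y_pred))
print('r2 score:              ', r2_score(y_true, y_pred))
print('explained variance:    ', explained_variance_score(y_true, y_pred))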
# + id="w4jZUokiMAyj"
# construct and display the query
ml_eval_query = ("""
SELECT * FROM ML.EVALUATE (MODEL `{bq_project}.{bq_dataset}.{bq_ml_model}`,
(
SELECT * EXCEPT(sample, data_partition)
FROM `{bq_project}.{bq_dataset}.{bq_tmp_table}`
WHERE data_partition = 'testing'
)
)
""").format(
bq_project=bq_project,
bq_dataset=bq_dataset,
bq_ml_model=bq_ml_model,
bq_tmp_table=bq_tmp_table
)
print(ml_eval_query)
# + id="27QZR6K42vlE"
# this query processes 21.7 KB and costs approximately <$0.01
ml_eval = client.query(ml_eval_query).result().to_dataframe()
# + colab={"base_uri": "https://localhost:8080/", "height": 145} id="X9VaEaoz24Ip" outputId="e87bf3e9-0087-433b-e3e2-a8e697400e6f"
# Display the table of evaluation results
ml_eval
# + [markdown] id="NN71_WC2H1s8"
# The evaluation metrics indicate that the model does not fit the data very well. We can verify this by visualizing the model's predictions. To do so, we extract the predicted survival times for each training sample.
# + id="IKPjQdzxNKGF"
# construct and display the query
ml_train_predict_query = ("""
SELECT
predicted_survival,
survival
FROM ML.PREDICT (MODEL `{bq_project}.{bq_dataset}.{bq_ml_model}`,
(
SELECT * EXCEPT(sample, data_partition)
FROM `{bq_project}.{bq_dataset}.{bq_tmp_table}`
WHERE data_partition = 'training' -- Use the training dataset
)
)
""").format(
bq_project=bq_project,
bq_dataset=bq_dataset,
bq_ml_model=bq_ml_model,
bq_tmp_table=bq_tmp_table
)
print(ml_train_predict_query)
# + id="BOIDzP59A7ju"
# this query processes 21.7 KB and costs approximately <$0.01
ml_train_predict = client.query(ml_train_predict_query).result().to_dataframe()
# + colab={"base_uri": "https://localhost:8080/", "height": 424} id="S59c7fuLBEPx" outputId="9f1852fb-3e20-4be6-f4fb-24e9a470f11c"
# display the table comparing predicted survival times to actual survival times
ml_train_predict
# + [markdown] id="KU5YazXyH_Jt"
# # Predict Outcome for One or More Samples
# ML.EVALUATE evaluates a model's performance, but does not produce actual predictions for each sample. In order to do that, we need to use the ML.PREDICT function. Its syntax is similar to that of ML.EVALUATE, and the query returns the "predicted_survival" and "survival" columns.
#
# Note that the input dataset can include one or more samples, and must include the same set of features as the training dataset.
# + id="m9bJbryhNgBO"
# construct and display the query
ml_test_predict_query = ("""
SELECT
predicted_survival,
survival
FROM ML.PREDICT (MODEL `{bq_project}.{bq_dataset}.{bq_ml_model}`,
(
SELECT * EXCEPT(sample, data_partition)
FROM `{bq_project}.{bq_dataset}.{bq_tmp_table}`
WHERE data_partition = 'testing' -- Use the testing dataset
)
)
""").format(
bq_project=bq_project,
bq_dataset=bq_dataset,
bq_ml_model=bq_ml_model,
bq_tmp_table=bq_tmp_table
)
print(ml_test_predict_query)
# + id="Lwao4yWh3E4j"
# this query processes 21.7 KB and costs approximately <$0.01
ml_test_predict = client.query(ml_test_predict_query).result().to_dataframe()
# + colab={"base_uri": "https://localhost:8080/", "height": 424} id="yaXORFBx3V1m" outputId="a8908330-0b3e-4b42-eb6c-6cc83fdba1fb"
# Display the table of prediction results
ml_test_predict
# + [markdown] id="7Szy1KxyIOpc"
# ## Visualize the Linear Regression Model
#
# Now we can visualize the linear regression model overlayed on the training and testing sets.
# + colab={"base_uri": "https://localhost:8080/", "height": 458} id="od2xmz_yAz-h" outputId="a05b6b85-5b57-47ec-ce1b-eb99417ccd2a"
# combine training and testing results into single data frame
ml_train_predict['group'] = 'training'
ml_test_predict['group'] = 'testing'
combined = pd.concat([ml_train_predict, ml_test_predict])
# use seaborn to generate a scatter plot of linear regression results
sns.set_theme(style='ticks', palette='pastel')
ax = sns.lmplot(data=combined, hue='group', x='survival', y='predicted_survival', height=6, aspect=1)
ax.set(xlim=(0,4000), ylim=(0,2000),xlabel='Survival',ylabel='Predicted Survival')
# + [markdown] id="g1sg3dNMISWV"
# # Conclusion
#
# Although there is a small positive correlation between predicted survival time and actual survival time, the relationship is weak (this is also reflected in the very small R-squared metric for the testing set). The variance of the predicted survival is large, but decreases as actual survival time increases. Note that even for the training samples (orange points) the model does not predict survival well, suggesting that it is underfitting.
#
# This example used a pre-selected set of 14 genes from a previous study. However, performing additional feature selection or increasing the number of features (e.g., to 30 or more) may improve model performance.
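# + [markdown]
# A minimal sketch of what such a feature-selection step could look like outside of BigQuery is shown below (an added illustration on synthetic data; the shapes and the choice of `SelectKBest` with `f_regression` are assumptions, not taken from the study above).
# +
import numpy as np
from sklearn.feature_selection import SelectKBest, f_regression

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 500))            # 200 samples x 500 candidate genes
y = 2.0 * X[:, 0] + rng.normal(size=200)   # survival-like target driven by one gene

selector = SelectKBest(score_func=f_regression, k=30).fit(X, y)
selected_columns = np.flatnonzero(selector.get_support())
print('kept feature indices:', selected_columns[:10], '...')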
| 15,375 |
/pymaceuticals_starter.ipynb
|
6f5d7cde44948720a9c0b944e52cafa9c17d725b
|
[] |
no_license
|
galkaren/matplotlib
|
https://github.com/galkaren/matplotlib
| 0 | 0 | null | null | null | null |
Jupyter Notebook
| false | false |
.py
| 165,603 |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.15.2
# kernelspec:
# display_name: Python [default]
# language: python
# name: python3
# ---
# # ETL Pipeline Preparation
# Follow the instructions below to help you create your ETL pipeline.
# ### 1. Import libraries and load datasets.
# - Import Python libraries
# - Load `messages.csv` into a dataframe and inspect the first few lines.
# - Load `categories.csv` into a dataframe and inspect the first few lines.
# import libraries
import pandas as pd
import numpy as np
# load messages dataset
messages = pd.read_csv("messages.csv")
messages.head()
# load categories dataset
categories = pd.read_csv("categories.csv")
categories.head()
# ### 2. Merge datasets.
# - Merge the messages and categories datasets using the common id
# - Assign this combined dataset to `df`, which will be cleaned in the following steps
# merge datasets
df = messages.merge(categories, on='id')
df.head()
# ### 3. Split `categories` into separate category columns.
# - Split the values in the `categories` column on the `;` character so that each value becomes a separate column. You'll find [this method](https://pandas.pydata.org/pandas-docs/version/0.23/generated/pandas.Series.str.split.html) very helpful! Make sure to set `expand=True`.
# - Use the first row of categories dataframe to create column names for the categories data.
# - Rename columns of `categories` with new column names.
# create a dataframe of the 36 individual category columns
categories = df['categories'].str.split(";", expand=True)
categories.head()
# +
# select the first row of the categories dataframe
row = categories.loc[0,:]
# use this row to extract a list of new column names for categories.
# one way is to apply a lambda function that takes everything
# up to the second to last character of each string with slicing
category_colnames = row.apply(lambda x:x.split('-')[0]).values.tolist()
print(category_colnames)
# -
# rename the columns of `categories`
categories.columns = category_colnames
categories.head()
# ### 4. Convert category values to just numbers 0 or 1.
# - Iterate through the category columns in df to keep only the last character of each string (the 1 or 0). For example, `related-0` becomes `0`, `related-1` becomes `1`. Convert the string to a numeric value.
# - You can perform [normal string actions on Pandas Series](https://pandas.pydata.org/pandas-docs/stable/text.html#indexing-with-str), like indexing, by including `.str` after the Series. You may need to first convert the Series to be of type string, which you can do with `astype(str)`.
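# A compact vectorized alternative, sketched here as an added illustration (it does not modify `categories`, so the loop in the next cell still sees the original strings): `.str` indexing takes the last character of every value and `astype(int)` converts it to a number.
categories.apply(lambda col: col.str[-1].astype(int)).head()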
for column in categories:
# set each value to be the last character of the string
categories[column] = categories[column].apply(lambda x:x.split('-')[1])
# convert column from string to numeric
categories[column] = categories[column].astype(int)
categories.head()
# Check for columns that are not one-hot encoded
columns=(categories.max()>1)[categories.max()>1].index
# For values not in (0, 1), report the counts and map any other value to 1
for col in columns:
print(categories[col].value_counts())
categories.loc[categories[col]>1,col] = 1
print(categories[col].value_counts())
# ### 5. Replace `categories` column in `df` with new category columns.
# - Drop the categories column from the df dataframe since it is no longer needed.
# - Concatenate df and categories data frames.
# +
# drop the original categories column from `df`
df.drop('categories',axis=1, inplace=True)
df.head()
# -
# concatenate the original dataframe with the new `categories` dataframe
df = pd.concat([df,categories], axis=1)
df.head()
# ### 6. Remove duplicates.
# - Check how many duplicates are in this dataset.
# - Drop the duplicates.
# - Confirm duplicates were removed.
# check number of duplicates
df.duplicated().sum()
# drop duplicates
df.drop_duplicates(inplace=True)
# check number of duplicates
df.duplicated().sum()
# ### 7. Save the clean dataset into an sqlite database.
# You can do this with pandas [`to_sql` method](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.to_sql.html) combined with the SQLAlchemy library. Remember to import SQLAlchemy's `create_engine` in the first cell of this notebook to use it below.
from sqlalchemy import create_engine
engine = create_engine('sqlite:///all_messages.db')
df.to_sql('all_messages', engine, index=False)
# ### 8. Use this notebook to complete `etl_pipeline.py`
# Use the template file attached in the Resources folder to write a script that runs the steps above to create a database based on new datasets specified by the user. Alternatively, you can complete `etl_pipeline.py` in the classroom on the `Project Workspace IDE` coming later.
import pandas as pd
import numpy as np
from sqlalchemy import create_engine
def etl_pipeline(db_path,csv1="messages.csv", csv2="categories.csv",tablename='disastertab'):
# load messages dataset
messages = pd.read_csv(csv1)
# load categories dataset
categories = pd.read_csv(csv2)
# merge datasets
df = messages.merge(categories, on='id')
# create a dataframe of the 36 individual category columns
categories = df['categories'].str.split(";", expand=True)
# select the first row of the categories dataframe
row = categories.loc[0,:]
# use this row to extract a list of new column names for categories.
category_colnames = row.apply(lambda x:x.split('-')[0]).values.tolist()
# rename the columns of `categories`
categories.columns = category_colnames
for column in categories:
# set each value to be the last character of the string
categories[column] = categories[column].apply(lambda x:x.split('-')[1])
# convert column from string to numeric
categories[column] = categories[column].astype(int)
# Check number not in (0,1) and update other value to 1
categories.loc[categories[column]>1,column] = 1
# drop the original categories column from `df`
df.drop('categories',axis=1, inplace=True)
# concatenate the original dataframe with the new `categories` dataframe
df = pd.concat([df,categories], axis=1)
df.head()
# drop duplicates
df.drop_duplicates(inplace=True)
engine = create_engine('sqlite:///'+db_path)
df.to_sql(tablename, engine, index=False)
etl_pipeline('workspace/data/DisasterResponse.db')
engine = create_engine('sqlite:///workspace/data/DisasterResponse.db')
tt=pd.read_sql('SELECT * from disastertab', engine)
tt.head(10)
# +
etl_pipeline(db_path='workspace/data/DisasterResponse.db',csv1="./workspace/data/disaster_messages.csv", csv2="./workspace/data/disaster_categories.csv",)
# -
engine = create_engine('sqlite:///workspace/data/DisasterResponse.db')
tt=pd.read_sql('SELECT * from disastertab', engine)
tt.head(10)
tt.genre.value_counts()
tt.max()
# NOTE: the start of this cell was truncated; the imports and the `class DCI:` header below are reconstructed assumptions.
from multiprocessing import Pool

import numpy as np
import scipy.stats
import seaborn as sns
import matplotlib.pyplot as plt
from sklearn.ensemble import GradientBoostingRegressor


class DCI:
    def get_factor_gbt(self, factor):
        # fit a gradient-boosted tree regressor predicting one factor from the latents
        model = GradientBoostingRegressor(verbose=0)
        model.fit(self.latents, self.labels[:, factor])
        importance_matrix_factor = np.abs(model.feature_importances_)
        return model, importance_matrix_factor

    def get_trained_gbts(self, latents, labels):
        self.latents, self.labels = latents, labels
        num_codes, num_factors = latents.shape[-1], labels.shape[-1]
        importance_matrix = np.zeros(shape=[num_codes, num_factors])
        models = []
        with Pool(10) as p:
            out = p.map(self.get_factor_gbt, list(range(num_factors)))
        for i, gbt_ret in enumerate(out):
            model, importance_matrix_factor = gbt_ret
            models.append(model)
            importance_matrix[:, i] = importance_matrix_factor
        return importance_matrix, models
def _compute_dci(mus_train, ys_train, mus_test, ys_test):
"""Computes score based on both training and testing codes and factors."""
scores = {}
importance_matrix, train_err, test_err = compute_importance_gbt(mus_train, ys_train, mus_test, ys_test)
assert importance_matrix.shape[0] == mus_train.shape[0]
assert importance_matrix.shape[1] == ys_train.shape[0]
scores["informativeness_train"] = train_err
scores["informativeness_test"] = test_err
scores["disentanglement"] = disentanglement(importance_matrix)
scores["completeness"] = completeness(importance_matrix)
return scores
latents,labels,path=data
dci = DCI()
importance_matrix, models = dci.get_trained_gbts(latents, labels)
cmap = sns.cubehelix_palette(start=2, rot=0, dark=0.3, light=.95, reverse=False, as_cmap=True)
ax = sns.heatmap(np.transpose(importance_matrix), cmap=cmap, yticklabels=[
"cube color", "scale", "overall inscribed color", 'inscribed color 1', 'inscribed color 2', 'inscribed color 3', 'inscribed color 4',
'floor color', 'wall color', 'azimuth'])
plt.title("Importance Matrix")
plt.xlabel("latents")
plt.ylabel("labels")
plt.show()
# + jupyter={"outputs_hidden": true, "source_hidden": true}
def disentanglement_per_code(importance_matrix):
"""Compute disentanglement score of each code."""
# importance_matrix is of shape [num_codes, num_factors].
return 1. - scipy.stats.entropy(importance_matrix.T + 1e-11,
base=importance_matrix.shape[1])
def disentanglement(importance_matrix):
"""Compute the disentanglement score of the representation."""
per_code = disentanglement_per_code(importance_matrix)
if importance_matrix.sum() == 0.:
importance_matrix = np.ones_like(importance_matrix)
code_importance = importance_matrix.sum(axis=1) / importance_matrix.sum()
return np.sum(per_code*code_importance)
def completeness_per_factor(importance_matrix):
"""Compute completeness of each factor."""
# importance_matrix is of shape [num_codes, num_factors].
return 1. - scipy.stats.entropy(importance_matrix + 1e-11,
base=importance_matrix.shape[0])
def completeness(importance_matrix):
""""Compute completeness of the representation."""
per_factor = completeness_per_factor(importance_matrix)
if importance_matrix.sum() == 0.:
importance_matrix = np.ones_like(importance_matrix)
factor_importance = importance_matrix.sum(axis=0) / importance_matrix.sum()
return np.sum(per_factor*factor_importance)
child_factors = [3,4,5,6]
parent_factor = [2]
overall_factors = [i for i in range(importance_matrix.shape[-1]) if not i in child_factors]
child_latents = np.max(importance_matrix[:,parent_factor+overall_factors],axis=-1)<=np.max(importance_matrix[:,child_factors],axis=-1)
overall_latents = np.logical_not(child_latents)
print("overall disentanglement: ",disentanglement(importance_matrix[overall_latents][:, overall_factors]))
print("child factor disentanglement: ",disentanglement(importance_matrix[child_latents][:, child_factors]))
#print("overall completeness: ",completeness(importance_matrix[overall_latents][:, overall_factors]))
print("child factor completeness: ",completeness(importance_matrix[child_latents][:, child_factors]))
# why completeness?
# - we want to know if information has been learned,
# - if the child factors aren't learned, then we need to make that clear
# - cons: completeness is affected by empty factors
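# +
# Toy sanity check, added as an illustration (it reuses the disentanglement/completeness
# helpers defined above, so it is not part of the original analysis): a one-to-one
# importance matrix scores ~1 on both metrics, while spreading a single factor across
# two latents keeps disentanglement at ~1 but lowers completeness to ~0.83.
import numpy as np
one_to_one = np.eye(3)
shared_factor = np.array([[0.5, 0.0, 0.0],
                          [0.5, 0.0, 0.0],
                          [0.0, 1.0, 0.0],
                          [0.0, 0.0, 1.0]])
print(disentanglement(one_to_one), completeness(one_to_one))
print(disentanglement(shared_factor), completeness(shared_factor))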
# + jupyter={"outputs_hidden": true, "source_hidden": true}
import numpy as np
import seaborn as sns
import os
import matplotlib.pyplot as plt
from metrics import disentanglement, completeness
# get results:
def get_results():
data = np.load("results_dci.npz", allow_pickle=True)
paths = data["path"]
modelspecs = np.asarray([path.rstrip("/mutual_info_sample/final.npz").split(os.path.sep) for path in data["path"]])
return data, modelspecs
rdata, model_specs = get_results()
modelnum = 1064
importance_matrix = np.transpose(rdata["importance_matrix"][modelnum])
overall_disentanglement = rdata["overall_disentanglement"][modelnum]
overall_completeness = rdata["overall_completeness"][modelnum]
child_disentanglement = rdata["child_disentanglement"][modelnum]
print((1-importance_matrix[:,[2]])*importance_matrix[:,3:7])
child_completeness = np.where(np.isnan(rdata["child_completeness"]), 0, rdata["child_completeness"])[modelnum]
print("overall_disentanglement: ", overall_disentanglement)
#print("overall_completeness: ", overall_completeness)
print("child_disentanglement: ", child_disentanglement)
print("child_completeness: ", child_completeness)
print(model_specs[modelnum])
cmap = sns.cubehelix_palette(start=2, rot=0, dark=0.3, light=.95, reverse=False, as_cmap=True)
ax = sns.heatmap(np.transpose(importance_matrix), cmap=cmap, yticklabels=[
"cube color", "scale", "overall inscribed color", 'inscribed color 1', 'inscribed color 2', 'inscribed color 3', 'inscribed color 4',
'floor color', 'wall color', 'azimuth'])
plt.title("Importance Matrix")
plt.xlabel("latents")
plt.ylabel("labels")
plt.show()
# +
import numpy as np
import os
import seaborn as sns
import matplotlib.pyplot as plt
import sklearn
import pandas as pd
from metrics import disentanglement, completeness
def get_results():
data = np.load("results_dci_with_elbo.npz", allow_pickle=True)
paths = data["path"]
modelspecs = np.asarray([path.rstrip("/run_set/sample.npz").split(os.path.sep)+[path] for path in data["path"]])
return data, modelspecs
def get_scores(importance_matrix):
child_factors = [3,4,5,6]
parent_factor = [2]
overall_factors = [i for i in range(importance_matrix.shape[-1]) if not i in child_factors]
    child_latents = np.max(importance_matrix[:,overall_factors],axis=-1)<=np.max(importance_matrix[:,child_factors],axis=-1)
    overall_latents = np.logical_not(child_latents)
    if sum(overall_latents)>2:
overall_disentanglement = disentanglement(
importance_matrix[overall_latents]#[:, overall_factors]
)
overall_completeness = completeness(
importance_matrix[overall_latents]#[:, overall_factors]
)
else:
overall_disentanglement = 0
overall_completeness = 0
if sum(child_latents)>2:
child_disentanglement = disentanglement(
importance_matrix[child_latents]#[:, child_factors]
)
child_completeness = completeness(
importance_matrix[child_latents]#[:, child_factors]
)
else:
child_disentanglement = 0
child_completeness = 0
return overall_disentanglement, overall_completeness, child_disentanglement, child_completeness
def format_data(modelspec, model_types, dataset_types):
# get the model
model = [i for i in model_types if i in modelspec]
if not len(model): return None
assert len(model) == 1, f"{len(model)}, {modelspec}"
model = model[0]
# get dataset
dataset = [i for i in dataset_types if i in modelspec]
if not len(dataset): return None
assert len(dataset) == 1, f"{len(dataset)}, {modelspec}"
dataset = dataset[0]
# beta values
beta = [i for i in modelspec if "beta_" in i][0]
# gamma values
gamma = [i for i in modelspec if "gamma_" in i]
gamma = np.nan if not len(gamma) else gamma[0]
# random seeds
random_seed = [i for i in modelspec if "random_seed_" in i]
path = modelspec[-1]
# add to data
return {"Model":model, "beta":beta, "gamma":gamma, "Dataset":dataset, "random seed": random_seed, "path":path}
def get_data(data, modelspecs, dataset_types, model_types, use_default_scores=False):
dataframe = pd.DataFrame()
if use_default_scores:
overall_disentanglement_scores = data["overall_disentanglement"]
overall_completeness_scores = data["overall_completeness"]
child_disentanglement_scores = data["child_disentanglement"]
elbo_losses = data["elbo_loss"]
child_completeness_scores = np.where(np.isnan(data["child_completeness"]), 0, data["child_completeness"])
for modelnum in range(len(modelspecs)):
modelspec, importance_matrix = modelspecs[modelnum], data["importance_matrix"][modelnum]
# get scores by recalculation
if not use_default_scores:
overall_disentanglement, overall_completeness, child_disentanglement, child_completeness = get_scores(np.transpose(importance_matrix))
else: # use default scores
overall_disentanglement = overall_disentanglement_scores[modelnum]
overall_completeness = overall_completeness_scores[modelnum]
child_disentanglement = child_disentanglement_scores[modelnum]
child_completeness = child_completeness_scores[modelnum]
elbo_loss = elbo_losses[modelnum]
data_record = format_data(modelspec=modelspec, model_types=model_types, dataset_types=dataset_types)
if data_record is None: continue
data_record.update({
"overall_disentanglement":overall_disentanglement,
"overall_completeness":overall_completeness,
"child_disentanglement":child_disentanglement,
"child_completeness":child_completeness,
"combined_child":child_disentanglement*child_completeness,
"elbo_loss":elbo_loss,
"elbo":-elbo_loss
})
dataframe = dataframe.append(data_record, ignore_index=True)
return dataframe
model_types = ["betavae","betatcvae","vlae","lvae"]+["betavae_larger2","betatcvae_larger2","vlae_larger2","lvae_larger2"]
#model_types = ["betavae","betavae_larger2","betatcvae","betatcvae_larger2","vlae","vlae_larger2","lvae","lvae_larger2"]
dataset_types = ["boxhead_07", "boxheadsimple", "boxheadsimple2"]
results, modelspecs = get_results()
data = get_data(data=results, modelspecs=modelspecs, dataset_types=dataset_types, model_types=model_types, use_default_scores=True)
# +
cur_data = data
cur_data = cur_data.loc[
False
|pd.isna(cur_data["gamma"])
|(cur_data["gamma"]=="gamma_1")
#|(cur_data["gamma"]=="gamma_5")
]
# get single layer beta models and ladder models
# cur_data = cur_data.loc[
# False
# |(data["beta"]=="beta_1_annealed")&(np.logical_not(cur_data["Model"].str.contains("beta")))
# |(cur_data["beta"]=="beta_5_annealed")&(np.logical_not(cur_data["Model"].str.contains("beta")))
# |(cur_data["beta"]=="beta_10_annealed")&(np.logical_not(cur_data["Model"].str.contains("beta")))
# |(cur_data["beta"]=="beta_20_annealed")&(np.logical_not(cur_data["Model"].str.contains("beta")))
# |(data["beta"]=="beta_1")&(cur_data["Model"].str.contains("beta"))
# |(cur_data["beta"]=="beta_5")&(cur_data["Model"].str.contains("beta"))
# |(cur_data["beta"]=="beta_10")&(cur_data["Model"].str.contains("beta"))
# |(cur_data["beta"]=="beta_20")&(cur_data["Model"].str.contains("beta"))
# ]
cur_data = cur_data.loc[
False
|(cur_data["beta"]=="beta_1_annealed")
|(cur_data["beta"]=="beta_5_annealed")
|(cur_data["beta"]=="beta_10_annealed")
|(cur_data["beta"]=="beta_20_annealed")
]
cur_data = cur_data.loc[
False
|(cur_data["Dataset"]=="boxhead_07")
|(cur_data["Dataset"]=="boxheadsimple")
|(cur_data["Dataset"]=="boxheadsimple2")
]
score_types = [
"overall_disentanglement",
#"overall_completeness",
#"child_completeness",
"child_disentanglement",
#"combined_child",
"elbo"
]
# get top n values given a column for each model.
# new_data = []
# for model in model_types:
# new_data.append(cur_data.loc[cur_data["Model"]==model].sort_values(by=['overall_disentanglement'])[-1:])
# new_data = pd.concat(new_data)
# print(new_data)
new_dataset_types = ["boxhead 1", "boxhead 2", "boxhead 3"]
reference_labels = {
"boxhead 1":"(1)",
"boxhead 2":"(2)",
"boxhead 3":"(3)",
"betavae":"(a)",
"betatcvae":"(b)",
"vlae":"(c)",
"lvae":"(d)",
"betavae_larger2":"(e)",
"betatcvae_larger2":"(f)",
"vlae_larger2":"(g)",
"lvae_larger2":"(h)",
}
#data if beta_vals is None else data.loc[data["beta"]==beta_vals]
plt.rcParams["figure.figsize"]=15,4.5
sns.set(font_scale = 2.5, style="white")
for score in score_types:
plt.clf()
plt.title(score)
plot_data = cur_data.replace({"boxhead_07":"boxhead 1", "boxheadsimple":"boxhead 2","boxheadsimple2":"boxhead 3"})
sns.boxplot(x='Model', y=score, hue="Dataset", order=[reference_labels[i] for i in model_types], hue_order=[reference_labels[i] for i in new_dataset_types],
data=plot_data.replace(reference_labels),
#data=cur_data,
#bw=.2,
#cut=0,
#inner="stick"
)
# sns.boxplot(x='Dataset', y=score, hue="Model", order=new_dataset_types, hue_order=model_types,
# data=cur_data.replace({"boxhead_07":"boxhead 1", "boxheadsimple":"boxhead 2","boxheadsimple2":"boxhead 3"}),
# #data=cur_data,
# #bw=.2,
# #cut=0,
# #inner="stick"
# )
plt.legend(bbox_to_anchor=(1.05, 1), loc=2, borderaxespad=0.)
if score == "child_disentanglement":
plt.ylim(0,0.6)
if score == "overall_disentanglement":
plt.ylim(0.35,1)
if score == "elbo":
plt.ylim(-5250,-5000)
plt.ylabel("Score")
#plt.savefig(os.path.join("dev_paper", "images", score.replace(" ","_")+"_above_beta_1"+".png"))
plt.show()
# # Compute the correlation matrix
# corr = cur_data.loc[:,["child_disentanglement", "overall_disentanglement", "elbo"]].corr()
# mask = np.triu(np.ones_like(corr, dtype=bool))
# f, ax = plt.subplots(figsize=(5, 5))
# cmap = sns.diverging_palette(230, 20, as_cmap=True)
# sns.heatmap(corr,
# mask=mask,
# cmap=cmap, center=0,
# square=True, linewidths=.5, cbar_kws={"shrink": .5},
# vmax=.5, vmin=-0.5,)
# plt.show()
# +
from IPython.display import Image
from hsr.visualize import lvae_traversal, vlae_traversal
from hsr.save import ModelSaver
import metrics as mt
# correlation plots between various models for disentanglement scores
new_data = []
print(cur_data.columns)
print(cur_data.Dataset.unique())
def retrieve_model(path):
modelsaver = ModelSaver(path)
model = modelsaver.load()
assert not model is None, f"No model found in {path}"
return model
all_paths = mt.get_model_paths()
for model_type in model_types[2:]:
print(model_type)
model_data = cur_data.loc[cur_data["Model"]==model_type][score_types+["path"]]
model_data.sort_values(by=["child_disentanglement"])
path = model_data.iloc[-1]["path"]
base_path = os.path.join("experiments",path.split("/hsr/results/")[1].rstrip("run_set/sample.npz"))# hard coded!
image_path = os.path.join(base_path,"images","200000.gif")
model_path = os.path.join(path.rstrip("run_set/sample.npz").replace("/results/","/models/"), "model")
#model = retrieve_model(model_path)
#dataset = mt.get_dataset(base_path)
# image = dataset.preprocess(dataset.train(64).__iter__().__next__()[0][0:1])
print(base_path)
# num_steps = 5
# if not "lvae" in model_type:
# traversal = vlae_traversal(model,image,num_steps=num_steps)
# else:
# traversal = lvae_traversal(model,image,num_steps=num_steps)
# plt.imshow(traversal)
# plt.axis('off')
# plt.savefig(os.path.join("/home/ychen/",model_type+"_traversal.png"), bbox_inches='tight', dpi=700)
# #with open(path,'rb') as f:
# # display(Image(data=f.read(), format='png'))
# -
| 23,614 |
/notebooks/tensorflow/Softmax.ipynb
|
90571203858f63cf29f009c17e7adbb3c3a3295e
|
[] |
no_license
|
mllog/machine-learning-journey
|
https://github.com/mllog/machine-learning-journey
| 2 | 0 | null | null | null | null |
Jupyter Notebook
| false | false |
.py
| 24,697 |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.15.2
# kernelspec:
# display_name: Python [py35]
# language: python
# name: Python [py35]
# ---
# # Softmax
# Function to change scores to probabilities: $$S(y_i) = \frac{e^{y_i}}{\sum_j e^{y_j}}$$
import numpy as np
def softmax(x):
return np.exp(x) / np.sum(np.exp(x), axis=0)
scores = [1.0, 2.0, 3.0]
print(softmax(scores))
# Plot softmax curves
import matplotlib.pyplot as plt
# %matplotlib inline
x = np.arange(-2.0, 6.0, 0.1)
scores = np.vstack([x, np.ones_like(x), 0.2 * np.ones_like(x)])
plt.plot(x, softmax(scores).T, linewidth=2)
plt.show()
scores = np.array([[1, 2, 3, 6],
[2, 4, 5, 6],
[3, 8, 7, 6]])
print(softmax(scores))
# ### When you multiply scores by 10, the probabilities get close to either 0.0 or 1.0
scores = np.array([3.0, 1.0, 0.2])
print(softmax(scores * 10))
# ### When you divide scores by 10, the probabilities get close to the uniform distribution
print(softmax(scores / 10))
# ### In other words, if you increase the size of the outputs your classifier becomes very confident about its predictions, but if you reduce the size of the outputs your classifier becomes very unsure!
# 
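# A practical aside (an added sketch, not part of the original notebook): for very large scores `np.exp` overflows. Subtracting the maximum score first gives the same probabilities, because softmax is unchanged by adding a constant to every score.
def stable_softmax(x):
    x = np.asarray(x, dtype=float)
    shifted = x - np.max(x, axis=0)   # constant shift leaves the softmax unchanged
    e = np.exp(shifted)
    return e / np.sum(e, axis=0)

print(stable_softmax([1000.0, 1001.0, 1002.0]))  # the plain version above would overflow here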
# +
import pandas as pd
import itertools
from sklearn.metrics import confusion_matrix
from tqdm import tqdm
import os
from imutils import paths
tqdm.pandas()
# + [markdown] id="tAphnaNNR1CZ"
# # Dataset
# + id="9WTBNzSTQcCy" executionInfo={"status": "ok", "timestamp": 1616941262485, "user_tz": -180, "elapsed": 619, "user": {"displayName": "\u041b\u0435\u043d\u0430", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GiMK7DQoNpvJJ9b_baTOhOl-0KsCYViUs3aLi0N=s64", "userId": "14206305893323667740"}}
dataset = 'dataset4m'
# + colab={"base_uri": "https://localhost:8080/"} id="ybZp_QxFrP3g" executionInfo={"status": "ok", "timestamp": 1616941263488, "user_tz": -180, "elapsed": 752, "user": {"displayName": "\u041b\u0435\u043d\u0430", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GiMK7DQoNpvJJ9b_baTOhOl-0KsCYViUs3aLi0N=s64", "userId": "14206305893323667740"}} outputId="a8dc5673-737c-46f0-b767-dab38e770bd3"
names = os.listdir(dataset)
n_classes = len(names)
n_classes
# + id="CRielnkzsMhJ" executionInfo={"status": "ok", "timestamp": 1616941264497, "user_tz": -180, "elapsed": 411, "user": {"displayName": "\u041b\u0435\u043d\u0430", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GiMK7DQoNpvJJ9b_baTOhOl-0KsCYViUs3aLi0N=s64", "userId": "14206305893323667740"}}
imagePaths = list(paths.list_images(dataset))
idendities = {names[0]:[],names[1]:[], names[2]:[],names[3]:[],names[4]:[]}
for (i, imagePath) in enumerate(imagePaths):
idendities[imagePath.split(os.path.sep)[-2]].append(imagePath)
# + [markdown] id="9FcK4gWpR98i"
# # Positive samples
# + id="RX2y6vUrsgb0" executionInfo={"status": "ok", "timestamp": 1616941266404, "user_tz": -180, "elapsed": 618, "user": {"displayName": "\u041b\u0435\u043d\u0430", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GiMK7DQoNpvJJ9b_baTOhOl-0KsCYViUs3aLi0N=s64", "userId": "14206305893323667740"}}
positives = []
for key, values in idendities.items():
#print(key)
for i in range(0, len(values)-1):
for j in range(i+1, len(values)):
#print(values[i], " and ", values[j])
positive = []
positive.append(values[i])
positive.append(values[j])
positives.append(positive)
# + id="mxbfUFa416yk" executionInfo={"status": "ok", "timestamp": 1616941267101, "user_tz": -180, "elapsed": 766, "user": {"displayName": "\u041b\u0435\u043d\u0430", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GiMK7DQoNpvJJ9b_baTOhOl-0KsCYViUs3aLi0N=s64", "userId": "14206305893323667740"}}
positives = pd.DataFrame(positives, columns = ["file_x", "file_y"])
positives["decision"] = "Yes"
# + [markdown] id="okAo8ctpSDWo"
# ## Negative samples
# + id="--XaE4ty2Buq" executionInfo={"status": "ok", "timestamp": 1616941268010, "user_tz": -180, "elapsed": 453, "user": {"displayName": "\u041b\u0435\u043d\u0430", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GiMK7DQoNpvJJ9b_baTOhOl-0KsCYViUs3aLi0N=s64", "userId": "14206305893323667740"}}
samples_list = list(idendities.values())
# + id="cSt6zL802EMv" executionInfo={"status": "ok", "timestamp": 1616941268801, "user_tz": -180, "elapsed": 727, "user": {"displayName": "\u041b\u0435\u043d\u0430", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GiMK7DQoNpvJJ9b_baTOhOl-0KsCYViUs3aLi0N=s64", "userId": "14206305893323667740"}}
negatives = []
for i in range(0, len(idendities) - 1):
for j in range(i+1, len(idendities)):
#print(samples_list[i], " vs ",samples_list[j])
cross_product = itertools.product(samples_list[i], samples_list[j])
cross_product = list(cross_product)
#print(cross_product)
for cross_sample in cross_product:
#print(cross_sample[0], " vs ", cross_sample[1])
negative = []
negative.append(cross_sample[0])
negative.append(cross_sample[1])
negatives.append(negative)
# + id="ho4CLcCU2G8a" executionInfo={"status": "ok", "timestamp": 1616941270766, "user_tz": -180, "elapsed": 950, "user": {"displayName": "\u041b\u0435\u043d\u0430", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GiMK7DQoNpvJJ9b_baTOhOl-0KsCYViUs3aLi0N=s64", "userId": "14206305893323667740"}}
negatives = pd.DataFrame(negatives, columns = ["file_x", "file_y"])
negatives["decision"] = "No"
# + [markdown] id="4w7tm-2wSHvM"
# ## Merge Positives and Negative Samples
# + id="aGap1pss2ITB" executionInfo={"status": "ok", "timestamp": 1616941271173, "user_tz": -180, "elapsed": 575, "user": {"displayName": "\u041b\u0435\u043d\u0430", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GiMK7DQoNpvJJ9b_baTOhOl-0KsCYViUs3aLi0N=s64", "userId": "14206305893323667740"}}
df = pd.concat([positives, negatives]).reset_index(drop = True)
# + colab={"base_uri": "https://localhost:8080/"} id="6qXqCFtt2KRk" executionInfo={"status": "ok", "timestamp": 1616941272435, "user_tz": -180, "elapsed": 1296, "user": {"displayName": "\u041b\u0435\u043d\u0430", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GiMK7DQoNpvJJ9b_baTOhOl-0KsCYViUs3aLi0N=s64", "userId": "14206305893323667740"}} outputId="c60ea09f-3034-4bb8-8f93-4974314090f1"
df.shape
# + colab={"base_uri": "https://localhost:8080/"} id="Vtd-auBe2LYH" executionInfo={"status": "ok", "timestamp": 1616941272435, "user_tz": -180, "elapsed": 677, "user": {"displayName": "\u041b\u0435\u043d\u0430", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GiMK7DQoNpvJJ9b_baTOhOl-0KsCYViUs3aLi0N=s64", "userId": "14206305893323667740"}} outputId="144dfbd8-978d-425d-97a8-0e360996f837"
df.decision.value_counts()
# + id="irYDm8klQ94U" executionInfo={"status": "ok", "timestamp": 1616941272819, "user_tz": -180, "elapsed": 601, "user": {"displayName": "\u041b\u0435\u043d\u0430", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GiMK7DQoNpvJJ9b_baTOhOl-0KsCYViUs3aLi0N=s64", "userId": "14206305893323667740"}}
# + [markdown] id="kJMZsgEZSMn-"
# ## DeepFace
# + colab={"base_uri": "https://localhost:8080/"} id="8BQZLrn12RAN" executionInfo={"status": "ok", "timestamp": 1616941277251, "user_tz": -180, "elapsed": 3885, "user": {"displayName": "\u041b\u0435\u043d\u0430", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GiMK7DQoNpvJJ9b_baTOhOl-0KsCYViUs3aLi0N=s64", "userId": "14206305893323667740"}} outputId="82ae8396-112a-4aa5-bd4e-a7ce493d193e"
pip install deepface
# + id="dFTGpUUDQz97" executionInfo={"status": "ok", "timestamp": 1616941277252, "user_tz": -180, "elapsed": 2747, "user": {"displayName": "\u041b\u0435\u043d\u0430", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GiMK7DQoNpvJJ9b_baTOhOl-0KsCYViUs3aLi0N=s64", "userId": "14206305893323667740"}}
from deepface import DeepFace
# + id="g1jiJepg2Tfs" executionInfo={"status": "ok", "timestamp": 1616941277252, "user_tz": -180, "elapsed": 2061, "user": {"displayName": "\u041b\u0435\u043d\u0430", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GiMK7DQoNpvJJ9b_baTOhOl-0KsCYViUs3aLi0N=s64", "userId": "14206305893323667740"}}
instances = df[["file_x", "file_y"]].values.tolist()
# + id="mqW3V_Mg2clN" executionInfo={"status": "ok", "timestamp": 1616941278448, "user_tz": -180, "elapsed": 438, "user": {"displayName": "\u041b\u0435\u043d\u0430", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GiMK7DQoNpvJJ9b_baTOhOl-0KsCYViUs3aLi0N=s64", "userId": "14206305893323667740"}}
model_name = "VGG-Face"
distance_metric = "cosine"
# + colab={"base_uri": "https://localhost:8080/"} id="3GFAc8Wx2eQe" executionInfo={"status": "ok", "timestamp": 1616941983285, "user_tz": -180, "elapsed": 704227, "user": {"displayName": "\u041b\u0435\u043d\u0430", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GiMK7DQoNpvJJ9b_baTOhOl-0KsCYViUs3aLi0N=s64", "userId": "14206305893323667740"}} outputId="7d99df93-15ce-4454-f3d5-9cc8ab26d741"
resp_obj = DeepFace.verify(instances, model_name = model_name, distance_metric = distance_metric)
# + id="2vNDKFE72fiK" executionInfo={"status": "ok", "timestamp": 1616942196145, "user_tz": -180, "elapsed": 610, "user": {"displayName": "\u041b\u0435\u043d\u0430", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GiMK7DQoNpvJJ9b_baTOhOl-0KsCYViUs3aLi0N=s64", "userId": "14206305893323667740"}}
distances = []
for i in range(0, len(instances)):
distance = round(resp_obj["pair_%s" % (i+1)]["distance"], 4)
distances.append(distance)
# + id="twqh2ZkFSUEl" executionInfo={"status": "ok", "timestamp": 1616942198088, "user_tz": -180, "elapsed": 641, "user": {"displayName": "\u041b\u0435\u043d\u0430", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GiMK7DQoNpvJJ9b_baTOhOl-0KsCYViUs3aLi0N=s64", "userId": "14206305893323667740"}}
df["distance"] = distances
# + [markdown] id="ZYI0sR7dSRT7"
#
# ## Analyzing Distances
# + id="4XNw9ct6SQ6R" executionInfo={"status": "ok", "timestamp": 1616942199221, "user_tz": -180, "elapsed": 405, "user": {"displayName": "\u041b\u0435\u043d\u0430", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GiMK7DQoNpvJJ9b_baTOhOl-0KsCYViUs3aLi0N=s64", "userId": "14206305893323667740"}}
tp_mean = round(df[df.decision == "Yes"].mean().values[0], 4)
tp_std = round(df[df.decision == "Yes"].std().values[0], 4)
fp_mean = round(df[df.decision == "No"].mean().values[0], 4)
fp_std = round(df[df.decision == "No"].std().values[0], 4)
# + colab={"base_uri": "https://localhost:8080/"} id="dFM0bw8BSagY" executionInfo={"status": "ok", "timestamp": 1616942200110, "user_tz": -180, "elapsed": 1127, "user": {"displayName": "\u041b\u0435\u043d\u0430", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GiMK7DQoNpvJJ9b_baTOhOl-0KsCYViUs3aLi0N=s64", "userId": "14206305893323667740"}} outputId="8893b436-a74b-40e0-ccb8-94859a4c351d"
print("Std of true positives: ", tp_std)
print("Mean of false positives: ", fp_mean)
print("Std of false positives: ", fp_std)
# + [markdown] id="GH3MM8tcScL_"
# ## Distribution
# + colab={"base_uri": "https://localhost:8080/", "height": 283} id="KoIKsfBgScAw" executionInfo={"status": "ok", "timestamp": 1616942200975, "user_tz": -180, "elapsed": 1622, "user": {"displayName": "\u041b\u0435\u043d\u0430", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GiMK7DQoNpvJJ9b_baTOhOl-0KsCYViUs3aLi0N=s64", "userId": "14206305893323667740"}} outputId="af3cd388-435f-47a1-fd11-5fb8a23c0ea6"
df[df.decision == "Yes"].distance.plot.kde()
df[df.decision == "No"].distance.plot.kde()
# + [markdown] id="n2q_L7_HSipp"
# ## Best Split Point
# + colab={"base_uri": "https://localhost:8080/"} id="8Hf1RQGKRWHa" executionInfo={"status": "ok", "timestamp": 1616942309901, "user_tz": -180, "elapsed": 4196, "user": {"displayName": "\u041b\u0435\u043d\u0430", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GiMK7DQoNpvJJ9b_baTOhOl-0KsCYViUs3aLi0N=s64", "userId": "14206305893323667740"}} outputId="a7683d90-2386-4da3-9b77-e3f0749403df"
pip install chefboost
# + id="iZDbFxW5SjdL" executionInfo={"status": "ok", "timestamp": 1616942312185, "user_tz": -180, "elapsed": 650, "user": {"displayName": "\u041b\u0435\u043d\u0430", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GiMK7DQoNpvJJ9b_baTOhOl-0KsCYViUs3aLi0N=s64", "userId": "14206305893323667740"}}
from chefboost import Chefboost as chef
# + id="S820gFvRSl45" executionInfo={"status": "ok", "timestamp": 1616942313733, "user_tz": -180, "elapsed": 678, "user": {"displayName": "\u041b\u0435\u043d\u0430", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GiMK7DQoNpvJJ9b_baTOhOl-0KsCYViUs3aLi0N=s64", "userId": "14206305893323667740"}}
config = {'algorithm': 'C4.5'}
# + colab={"base_uri": "https://localhost:8080/"} id="KzNPqtXlSl2M" executionInfo={"status": "ok", "timestamp": 1616942315882, "user_tz": -180, "elapsed": 1024, "user": {"displayName": "\u041b\u0435\u043d\u0430", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GiMK7DQoNpvJJ9b_baTOhOl-0KsCYViUs3aLi0N=s64", "userId": "14206305893323667740"}} outputId="8523b519-2272-4093-db21-0d133cc33fcf"
tmp_df = df[['distance', 'decision']].rename(columns = {"decision": "Decision"}).copy()
model = chef.fit(tmp_df, config)
# + id="NjYOp_qRSrs7" executionInfo={"status": "ok", "timestamp": 1616942321943, "user_tz": -180, "elapsed": 728, "user": {"displayName": "\u041b\u0435\u043d\u0430", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GiMK7DQoNpvJJ9b_baTOhOl-0KsCYViUs3aLi0N=s64", "userId": "14206305893323667740"}}
# + [markdown] id="HkOa_qP5SuT3"
# ## Sigma
# + colab={"base_uri": "https://localhost:8080/"} id="RttrC0FBSrnI" executionInfo={"status": "ok", "timestamp": 1616942322390, "user_tz": -180, "elapsed": 585, "user": {"displayName": "\u041b\u0435\u043d\u0430", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GiMK7DQoNpvJJ9b_baTOhOl-0KsCYViUs3aLi0N=s64", "userId": "14206305893323667740"}} outputId="39fdcc8c-e976-408c-a435-2bd9ce0decca"
sigma = 2
# 2 sigma corresponds to 95.45% confidence, and 3 sigma corresponds to 99.73% confidence
#threshold = round(tp_mean + sigma * tp_std, 4)
threshold = 0.3147 #comes from c4.5 algorithm
print("threshold: ", threshold)
# + colab={"base_uri": "https://localhost:8080/"} id="tWrxcZuASxq_" executionInfo={"status": "ok", "timestamp": 1616942322997, "user_tz": -180, "elapsed": 997, "user": {"displayName": "\u041b\u0435\u043d\u0430", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GiMK7DQoNpvJJ9b_baTOhOl-0KsCYViUs3aLi0N=s64", "userId": "14206305893323667740"}} outputId="3ce795aa-f45d-426e-a2ca-4a748932e3b0"
df[df.decision == 'Yes'].distance.max()
# + colab={"base_uri": "https://localhost:8080/"} id="RPiGGrmQS0Cr" executionInfo={"status": "ok", "timestamp": 1616942322998, "user_tz": -180, "elapsed": 811, "user": {"displayName": "\u041b\u0435\u043d\u0430", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GiMK7DQoNpvJJ9b_baTOhOl-0KsCYViUs3aLi0N=s64", "userId": "14206305893323667740"}} outputId="836d1b46-67b3-4148-99ff-b2ad27730ca3"
df[df.decision == 'No'].distance.min()
# + [markdown] id="VJT2bqXKS0r0"
# ## Evaluation
# + id="jtJ9aq0_S3HK" executionInfo={"status": "ok", "timestamp": 1616942323296, "user_tz": -180, "elapsed": 727, "user": {"displayName": "\u041b\u0435\u043d\u0430", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GiMK7DQoNpvJJ9b_baTOhOl-0KsCYViUs3aLi0N=s64", "userId": "14206305893323667740"}}
df["prediction"] = "No"
# + id="BnQmBVDGS_0G" executionInfo={"status": "ok", "timestamp": 1616942323665, "user_tz": -180, "elapsed": 904, "user": {"displayName": "\u041b\u0435\u043d\u0430", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GiMK7DQoNpvJJ9b_baTOhOl-0KsCYViUs3aLi0N=s64", "userId": "14206305893323667740"}}
idx = df[df.distance <= threshold].index
df.loc[idx, 'prediction'] = 'Yes'
# + colab={"base_uri": "https://localhost:8080/", "height": 194} id="qFL8Jwi8TBYO" executionInfo={"status": "ok", "timestamp": 1616942323666, "user_tz": -180, "elapsed": 747, "user": {"displayName": "\u041b\u0435\u043d\u0430", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GiMK7DQoNpvJJ9b_baTOhOl-0KsCYViUs3aLi0N=s64", "userId": "14206305893323667740"}} outputId="e0ca60fa-5a97-4009-c025-82da80e70f01"
df.sample(5)
# + id="uefSV-niTDMv" executionInfo={"status": "ok", "timestamp": 1616942323666, "user_tz": -180, "elapsed": 556, "user": {"displayName": "\u041b\u0435\u043d\u0430", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GiMK7DQoNpvJJ9b_baTOhOl-0KsCYViUs3aLi0N=s64", "userId": "14206305893323667740"}}
cm = confusion_matrix(df.decision.values, df.prediction.values)
# + colab={"base_uri": "https://localhost:8080/"} id="OalzeTDaTExo" executionInfo={"status": "ok", "timestamp": 1616942323976, "user_tz": -180, "elapsed": 445, "user": {"displayName": "\u041b\u0435\u043d\u0430", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GiMK7DQoNpvJJ9b_baTOhOl-0KsCYViUs3aLi0N=s64", "userId": "14206305893323667740"}} outputId="a50a0ee2-f3da-4c3c-9e1c-59f413f741b9"
cm
# + id="xnQL527_TFMU" executionInfo={"status": "ok", "timestamp": 1616942324778, "user_tz": -180, "elapsed": 861, "user": {"displayName": "\u041b\u0435\u043d\u0430", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GiMK7DQoNpvJJ9b_baTOhOl-0KsCYViUs3aLi0N=s64", "userId": "14206305893323667740"}}
tn, fp, fn, tp = cm.ravel()
# + colab={"base_uri": "https://localhost:8080/"} id="XP_fHpJ6THy5" executionInfo={"status": "ok", "timestamp": 1616942325984, "user_tz": -180, "elapsed": 1018, "user": {"displayName": "\u041b\u0435\u043d\u0430", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GiMK7DQoNpvJJ9b_baTOhOl-0KsCYViUs3aLi0N=s64", "userId": "14206305893323667740"}} outputId="44b92e78-74e3-483a-e180-ee7178d1827d"
tn, fp, fn, tp
# + id="ZnzrhGe7TK5Q" executionInfo={"status": "ok", "timestamp": 1616942326362, "user_tz": -180, "elapsed": 838, "user": {"displayName": "\u041b\u0435\u043d\u0430", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GiMK7DQoNpvJJ9b_baTOhOl-0KsCYViUs3aLi0N=s64", "userId": "14206305893323667740"}}
recall = tp / (tp + fn)
precision = tp / (tp + fp)
accuracy = (tp + tn)/(tn + fp + fn + tp)
f1 = 2 * (precision * recall) / (precision + recall)
# + colab={"base_uri": "https://localhost:8080/"} id="J4X-vTLRTLXy" executionInfo={"status": "ok", "timestamp": 1616942327680, "user_tz": -180, "elapsed": 836, "user": {"displayName": "\u041b\u0435\u043d\u0430", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GiMK7DQoNpvJJ9b_baTOhOl-0KsCYViUs3aLi0N=s64", "userId": "14206305893323667740"}} outputId="f7eff313-a709-49bf-8302-cf4a2327a764"
print("Precision: ", 100*precision,"%")
print("Recall: ", 100*recall,"%")
print("F1 score ",100*f1, "%")
print("Accuracy: ", 100*accuracy,"%")
# + id="PANhyww1TM_4" executionInfo={"status": "ok", "timestamp": 1616942329354, "user_tz": -180, "elapsed": 980, "user": {"displayName": "\u041b\u0435\u043d\u0430", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GiMK7DQoNpvJJ9b_baTOhOl-0KsCYViUs3aLi0N=s64", "userId": "14206305893323667740"}}
df.to_csv("threshold_pivot.csv", index = False)
# + id="LPDc8PYRRAAJ" executionInfo={"status": "ok", "timestamp": 1616942204188, "user_tz": -180, "elapsed": 1105, "user": {"displayName": "\u041b\u0435\u043d\u0430", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GiMK7DQoNpvJJ9b_baTOhOl-0KsCYViUs3aLi0N=s64", "userId": "14206305893323667740"}}
| 19,684 |
/Phys476/lab3/Indian Diabetes.ipynb
|
34ffd06c27f44d29a03650d7f4fa2a7203ba9f12
|
[] |
no_license
|
nkasmanoff/MLstuff
|
https://github.com/nkasmanoff/MLstuff
| 0 | 0 | null | null | null | null |
Jupyter Notebook
| false | false |
.py
| 3,383 |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.15.2
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ### Calculating a security's rate of return in Python
#
# Simple returns
import numpy as np
from pandas_datareader import data as wb
import matplotlib.pyplot as plt
PG = wb.DataReader('PG', data_source = 'yahoo', start = '1995-1-1')
PG.head()
PG.tail()
# ### Simple rate of return
#
# In Python, we will create a new column containing the simple rate of return
PG['simple_return'] = (PG['Adj Close'] / PG['Adj Close'].shift(1)) - 1
print(PG[ 'simple_return'])
# #### Plotting the data
PG['simple_return'].plot(figsize = (8,5))
avg_return_d = PG['simple_return'].mean()
avg_return_d
# The number above is very small and hard to interpret on its own. Thus we will obtain the average ANNUAL rate of return instead (assuming 250 business days per year).
avg_returns_a = PG['simple_return'].mean() * 250
avg_returns_a
print(str(round(avg_returns_a, 5) * 100) + '%')
# ### Log rates of return
import numpy as np
# Vectorization - the ability to organize several kinds of data processing tasks as array expressions. Using NumPy array expressions can replace explicit loops.
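# A tiny added illustration of that point, using made-up prices: the same daily log-return calculation written first as a Python loop and then as a single vectorized NumPy expression.
prices = np.array([100.0, 101.0, 99.5, 102.0])
loop_returns = [np.log(prices[i] / prices[i - 1]) for i in range(1, len(prices))]
vectorized_returns = np.log(prices[1:] / prices[:-1])
print(loop_returns)
print(vectorized_returns)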
PG.head()
PG['log_return'] = np.log(PG['Adj Close'] / PG['Adj Close'].shift(1))
print(PG['log_return'])
PG['log_return'].plot(figsize = (8,5))
log_return_d = PG['log_return'].mean()
log_return_d
log_return_a = PG['log_return'].mean() * 250
log_return_a
print(str(round(log_return_a, 5) * 100) + '%')
# ## Calculating the rate of return of a portfolio
#
# The portfolio's rate of return is equal to the sum of each security's annual rate of return multiplied by its weight in the portfolio.
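# In symbols, with weight $w_i$ and annual rate of return $r_i$ for each security $i$:
#
# $$r_{portfolio} = \sum_i w_i \, r_i$$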
import numpy as np
import pandas as pd
from pandas_datareader import data as wb
import matplotlib.pyplot as plt
tickers = ['PG','MSFT','F','GE']
mydata = pd.DataFrame()
for t in tickers:
mydata[t] = wb.DataReader(t, data_source = 'yahoo', start = '1995-1-1')['Adj Close']
mydata.info()
mydata.head()
mydata.tail()
# Let's create a line chart. It will allow us to understand how the securities performed throughout the timeframe under consideration. To compare them, we need to normalize the data to 100: (P_t / P_0) * 100.
mydata.iloc[0]
(mydata / mydata.iloc[0] * 100).plot(figsize = (15,6));
plt.show()
# We want to compare the behavior of all 4 stocks from the same starting point.
# ###### Calculating the return of a portfolio of securities
returns = (mydata / mydata.shift(1)) - 1
returns.head()
weights = np.array([0.25, 0.25, 0.25, 0.25])
np.dot(returns,weights)
# np.dot helps us calculate the dot product of the two arrays. However, we need to use the average annual returns to come up with a single answer.
annual_returns = returns.mean() * 250
annual_returns
np.dot(annual_returns,weights)
pfolio_1 = str(round(np.dot(annual_returns,weights), 5) * 100) + '%'
print(pfolio_1)
weights_2 = np.array([0.4, 0.4, 0.15, 0.05])
weights_2
np.dot(annual_returns, weights_2)
pfolio_2 = str(round(np.dot(annual_returns, weights_2), 5) * 100) + '%'
print(pfolio_1)
print(pfolio_2)
# We can observe that the second portfolio had greater returns and thus greater performance relative to the first portfolio.
# # Calculating the return of indices
import numpy as np
import pandas as pd
from pandas_datareader import data as wb
import matplotlib.pyplot as plt
# How to calculate the indices' performance in Python. We will use the following indices:
#
# S&P 500: ^GSPC
# NASDAQ: ^IXIC
# German DAX: ^GDAXI
# London FTSE: ^FTSE
# +
tickers = ['^GSPC','^IXIC', '^GDAXI']
ind_data = pd.DataFrame()
for t in tickers:
ind_data[t] = wb.DataReader(t, data_source = 'yahoo', start = "1997-1-1")["Adj Close"]
ind_data.head()
# -
ind_data.tail()
# Now, lets plot this data on a graph
(ind_data / ind_data.iloc[0] * 100).plot(figsize =(15,8));
plt.show()
# Around 2000, we observe a general rise in the value of the indices: this was the ".com" bubble. From 1997 until today, all indices overall perform really well. The German market demonstrates real persistence and stability.
#
#
# Now let's calculate the indices' simple returns.
ind_returns = (ind_data / ind_data.shift(1)) - 1
ind_returns.tail()
annual_ind_returns = ind_returns.mean() * 250
annual_ind_returns
# All three values are positive. The implication is that, on average, the companies listed under the 3 indices had a positive rate of return over the past two decades.
#
#
# Now let's take the adjusted closing price of a company and compare its performance to one of its indices.
# +
tickers = ['PG', '^GSPC', '^DJI' ]
data_2 = pd.DataFrame()
for t in tickers:
data_2[t] = wb.DataReader(t, data_source = 'yahoo', start = '2007-1-1')['Adj Close']
data_2.tail()
# -
(data_2 / data_2.iloc[0]).plot(figsize = (15,8))
plt.show()
| 4,930 |
/pef/domace-naloge/2020/10 arboretum/resitev.ipynb
|
9c5f08f12cae02ca3adb8b3e623cdb5a62fe03c8
|
[] |
no_license
|
cugalord/predavanja
|
https://github.com/cugalord/predavanja
| 0 | 0 | null | null | null | null |
Jupyter Notebook
| false | false |
.py
| 11,928 |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.15.2
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ## Arboretum
#
# In (almost) all of the functions we will work with lists of tree coordinates in this form:
#
# ```python
# [('smreka', 4, 6), ('bukev', 1, 5), ('bukev', 8, 5), ('javor', -1, 4),
# ('bukev', 3, 4), ('javor', 7, 3), ('bukev', -2, 2), ('bukev', 2, 2),
# ('javor', 4, 2), ('smreka', -3, -1), ('bukev', 3, -1), ('javor', -1, -2),
# ('javor', 4, -3)]
# ```
#
# For each grade you must also write all the functions required for the lower grades.
#
#
# ### Grade 6
#
# Write the following functions.
#
# - `razdalja(x1, y1, x2, y2)` receives the coordinates of two points and returns the (Euclidean) distance between them.
# - `najblizji(x, y, drevesa)` receives the coordinates of a point and a list of trees; it returns the species of the nearest tree, e.g. `"javor"`.
# - `najblizji_enak(x, y, vrsta, drevesa)` receives the coordinates of a point, a tree species (e.g. `"bukev"`) and a list of trees. It must return the coordinates of the nearest tree of that species (e.g. the nearest beech, if we are looking for beeches). If several trees are equally distant, it may return the coordinates of any of them.
# - `vse_vrste(drevesa)` returns a set with the names of all tree species that appear in the list, for example `{"oreh", "hrast", "bukev"}`.
# - `koordinate_tipov(vrsta, drevesa)` receives the name of a tree species (e.g. `"smreka"`) and a list of trees, and must return the set of coordinates of the trees of that species (e.g. of the spruces).
# - `najblizji_po_vrstah(x, y, drevesa)` receives the coordinates of an arbitrary point and a list of trees; it must return a dictionary whose keys are all the tree species in the list, and whose values are the coordinates of the tree of that species closest to the given coordinates. If several trees are equally distant, it may again return the coordinates of any of them. The function must be written so that it is fast even on long lists with many different species.
# - `najpogostejsa_vrsta(drevesa)` returns the name of the most common tree species in the given list. If several species are equally common, it may return any of them.
#
# #### Rešitev
#
# Pri razdalji nimamo kaj: samo formula. Funkciji `najblizji` in `najblizji_enak` pa sta tudi nekaj, kar meljemo že od začetka - iskanje največjega elementa po določenem kriteriju.
# +
import math
def razdalja(x0, y0, x1, y1):
return math.sqrt((x0 - x1) ** 2 + (y0 - y1) ** 2)
def najblizji(x, y, drevesa):
naj_drevo = None
for drevo, x0, y0 in drevesa:
razd = razdalja(x, y, x0, y0)
if naj_drevo == None or razd < naj_razd:
naj_razd, naj_drevo = razd, drevo
return naj_drevo
def najblizji_enak(x, y, tip, drevesa):
naj_koord = None
for drevo, x0, y0 in drevesa:
razd = razdalja(x, y, x0, y0)
if tip == drevo and (naj_koord == None or razd < naj_razd):
naj_razd, naj_koord = razd, (x0, y0)
return naj_koord
# -
# The remaining three functions are a simple rehearsal of loops and conditions, and of sets and dictionaries. Above all, they lead nicely into the set and dictionary comprehensions in the task for grade 8.
# +
def vse_vrste(drevesa):
vrste = set()
for drevo, x, y in drevesa:
vrste.add(drevo)
return vrste
def koordinate_tipov(tip, drevesa):
koordinate = set()
for drevo, x, y in drevesa:
if drevo == tip:
koordinate.add((x, y))
return koordinate
def najblizji_po_vrstah(x, y, drevesa):
najblizji = {}
for vrsta in vse_vrste(drevesa):
najblizji[vrsta] = najblizji_enak(x, y, vrsta, drevesa)
return najblizji
# -
# In `koordinate_tipov` a potential stumbling block is adding to the set: we want to add the pair `(x, y)`, so we cannot write `add(x, y)`, which would look as if we were adding two numbers, namely `x` and `y`, and of course does not work; we must write `add((x, y))`.
#
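# A two-line demonstration of that pitfall (added here as a sketch; it is not part of the original solution): `add` takes a single element, so the pair has to be wrapped in its own parentheses.
koordinate = set()
koordinate.add((3, -1))    # OK: adds the tuple (3, -1) as one element
# koordinate.add(3, -1)    # TypeError: add() takes exactly one argument (2 given)
print(koordinate)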
# In the last function we used two of the previous functions: the one that returns all the tree species and the one that finds the nearest specimen of each of them.
#
# Let us translate these into one-line solutions right away, since they are obvious.
# +
def vse_vrste(drevesa):
return {drevo for drevo, x, y in drevesa}
def koordinate_tipov(tip, drevesa):
return {(x, y) for drevo, x, y in drevesa if drevo == tip}
def najblizji_po_vrstah(x, y, drevesa):
return {vrsta: najblizji_enak(x, y, vrsta, drevesa) for vrsta in vse_vrste(drevesa)}
# -
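# The task list for grade 6 also asks for `najpogostejsa_vrsta`, which is not shown above. Here is a minimal sketch of one possible solution (an addition, not the original author's code), using `collections.Counter`:
from collections import Counter
def najpogostejsa_vrsta(drevesa):
    # count how often each species appears and return the most common one
    return Counter(drevo for drevo, x, y in drevesa).most_common(1)[0][0]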
# ### Grade 7
#
# The tree data actually comes from a file. It looks like this.
#
# ```
# smreka: 4, 6
# bukev: 1, 5; 3, 4
# javor: -1, 4; 7, 3; -1, -2; 4, -3
# bukev: 8, 5; 2, 2; -2, 2
# javor: 4, 2
# smreka: -3, -1
# bukev: 3, -1
# ```
#
# Each line clearly starts with the name of a tree species, followed by a colon and then the coordinates of one or more trees of that species. Trees are separated by semicolons, the two coordinates by a comma, and there are no decimals.
#
# Write a function `preberi(ime_datoteke)` that receives the name of a file with such content and returns a list like the one above. The order of the elements may be arbitrary.
#
# #### Solution
#
# The solution is just one long sequence of `split`s. We split each line on `:` to get the tree species and the string of coordinates. We break the latter on `;` to get the coordinates of the individual trees. We split each coordinate pair into x and y by splitting on `,`.
def preberi(ime_datoteke):
drevesa = []
for vrstica in open(ime_datoteke):
vrsta, koordinate = vrstica.split(":")
for koordinati in koordinate.split(";"):
x, y = koordinati.split(",")
drevesa.append((vrsta, float(x), float(y)))
return drevesa
# A (hopefully not too hard) programming exercise here is to see where a loop is needed and where it is not. With a loop we go through the lines of the file. The split on `:` needs no loop, since it always gives us two things: the tree species name and the coordinates. The split on `;` gives a list of coordinates; there may be one of them or a hundred (or anything in between or beyond), so a loop again. Finally, when we split on `,`, we get exactly two coordinates, so we just store them in two variables, without a loop.
# ### Grade 8
#
# The functions `vse_vrste`, `koordinate_tipov` and `najblizji_po_vrstah` should be written in a single line each.
#
# #### Solution
#
# See above.
#
# ### Grade 9
#
# Write the functions
#
# - `dolzina_poti(x, y, vrste, drevesa)`, which receives some starting coordinates and a list of tree species we would like to visit, for example `["bukev", "javor", "bukev", "smreka"]`, and, as always, a list of trees. The function must compute the length of the path from the starting coordinates to (following the example above) the nearest beech, then to the maple nearest to that beech, then to the beech nearest to that maple (which may not be the same as the previous beech!), and then to the spruce nearest to that beech. If at any step there are several equally distant nearest trees, the path may go to any of them.
#
# - `najblizji_par_enakih(drevesa)` returns the distance between the closest pair of trees of the same species.
#
# - `najmanjsi_krog(x, y, drevesa)` receives coordinates and a list of trees. It must return the radius of the smallest circle centred at these coordinates that contains all the different tree species.
#
# #### Solution
#
# For `dolzina_poti` we need a variable into which we accumulate the length. In each step we find the tree nearest to the current coordinates, add the distance to the total, and remember the new coordinates.
def dolzina_poti(x, y, vrstni_red, drevesa):
skupno = 0
for drevo in vrstni_red:
x0, y0 = najblizji_enak(x, y, drevo, drevesa)
skupno += razdalja(x, y, x0, y0)
x, y = x0, y0
return skupno
# For `najblizji_par_enakih` we need a nested loop. If we are dealing with two trees of the same species, we compute the distance between them; if it is greater than 0 (meaning they are not the same tree) but smaller than the smallest distance so far, it becomes the new smallest distance so far.
def najblizji_par_enakih(drevesa):
naj_razd = None
for drevo1, x1, y1 in drevesa:
for drevo2, x2, y2 in drevesa:
if drevo1 != drevo2:
continue
razd = razdalja(x1, y1, x2, y2)
if razd > 0 and (naj_razd == None or razd < naj_razd):
naj_razd = razd
return naj_razd
# For each species we must find its nearest tree. The circle has to cover all of these trees, so its radius must equal the largest of these distances.
#
# Having realised that, we can finish the task in a single line. We call `najblizji_po_vrstah(x, y, drevesa)`; we are only interested in the coordinates, so we call its `values()` method, i.e. `najblizji_po_vrstah(x, y, drevesa).values()`. That gives us the coordinates of the nearest tree of each species. We compute the distances to them: `razdalja(x, y, x1, y1) for x1, y1 in najblizji_po_vrstah(x, y, drevesa).values()`. The function must return the largest of them.
def najmanjsi_krog(x, y, drevesa):
return max(razdalja(x, y, x1, y1) for x1, y1 in najblizji_po_vrstah(x, y, drevesa).values())
# ### Grade 10
#
# There are no special tasks for grade 10. Grade 10 will be awarded to students who solve all the tasks in an exemplary way.
| 9,038 |
/classify_activity.ipynb
|
4c5786bb6addfb64340fdde0f8a3621e200f4435
|
[] |
no_license
|
zafartahirov/Random_Codes
|
https://github.com/zafartahirov/Random_Codes
| 0 | 0 | null | null | null | null |
Jupyter Notebook
| false | false |
.py
| 197,479 |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.15.2
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Skewness and Kurtosis
#
# ## Introduction
# We have previously identified a normal distribution as being symmetrical in shape. But this is often not the case when dealing with real-world data. This lesson will cover the attributes of a distribution that describe how far "out of shape" it is.
#
# ## Objectives
# You will be able to:
#
# * Understand the concept of symmetrical distribution
# * Calculate and describe skewness and kurtosis as measures of non-symmetry
# ## Symmetric Distributions
#
# A distribution is symmetric if the relative frequency or probability is the same at equal distances from the point of symmetry. For a normal distribution the point of symmetry is the center of the distribution, i.e. the mean. We can refer to the point of symmetry as **𝛼**.
#
# The mean and median are equal and occur at 𝛼, the point about which the distribution is symmetric. The mode is also equal to the mean and median if the distribution is both symmetric and unimodal (unimodal meaning that only one value occurs most often, i.e. there is one peak).
#
# For example, have a look at the following histogram, which we saw when discussing measures of central tendency:
#
# 
#
# This distribution meets all of the conditions of being symmetrical.
#
# The most common symmetric distribution is the normal distribution; however, there are a number of other distributions that are symmetric. [Here is a good article](https://www.statisticshowto.datasciencecentral.com/symmetric-distribution-2/) that looks at all sorts of symmetric distributions. We shall focus on normal distributions here and see how they can lose symmetry.
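# As a quick sketch of the symmetry claim above (added for illustration, not part of the original lesson), the mean and median of a symmetric, unimodal sample essentially coincide:
import numpy as np
np.random.seed(1)
sym = np.random.normal(loc=10, scale=2, size=100_000)     # symmetric, unimodal sample
print(round(np.mean(sym), 2), round(np.median(sym), 2))   # both are very close to 10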
# ## Skewness
#
# Skewness is the degree of distortion or deviation from the symmetrical bell curve that is a key characteristic of a normal distribution. Skewness can be seen as a measure of the lack of symmetry in a data distribution.
#
# Skewness helps you identify extreme values in one tail versus the other. A symmetrical distribution has a skewness of 0. There are two types of skewness: positive and negative.
# ### Positive Skewness
#
# Positive Skewness is present in a distribution when the tail on the right side of the distribution is longer (or fatter - as it is normally called). The mean and median will be greater than the mode.
#
# ### Negative Skewness
# Negative Skewness is present in a distribution when the tail of the left side of the distribution is longer or fatter than the tail on the right side. The mean and median will be less than the mode.
#
# This behavior is shown in the images below:
#
# 
#
# Here the fact that the mean is no longer the same as the median and mode has consequences for data analysis. The normality assumption that we associate with data we analyse does not hold true in these situations. Later we shall see how to deal with such distributions.
#
# ### Measuring Skewness
#
# For univariate data Y<sub>1</sub>, Y<sub>2</sub>, ..., Y<sub>N</sub>, the formula for skewness is:
#
# **Skewness = (∑<sub>i=1</sub><sup>N</sup> (Y<sub>i</sub> − Ȳ)<sup>3</sup> / N) / s<sup>3</sup>**
#
# where Ȳ is the mean, s is the standard deviation, and N is the number of data points. Note that in computing the skewness, s is computed with N in the denominator rather than N - 1. The formula above is referred to as the **Fisher-Pearson coefficient of skewness**. There are also other ways to calculate skewness, but this one is the most commonly used.
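# As a hedged illustration (an addition to the lesson, not part of it), we can compute the Fisher-Pearson skewness both by hand and with `scipy.stats.skew` on a synthetic, right-skewed sample:
import numpy as np
from scipy.stats import skew
np.random.seed(0)
y = np.random.exponential(scale=2.0, size=100_000)         # strongly right-skewed data
manual_skew = np.mean((y - y.mean()) ** 3) / y.std() ** 3   # std with N in the denominator
print(round(manual_skew, 2), round(skew(y), 2))             # both close to 2: highly skewed
print(round(np.mean(y), 2), round(np.median(y), 2))         # mean > median, as expected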
#
# ### When is the skewness too much?
#
# The rule of thumb seems to be:
#
# * If the skewness is between -0.5 and 0.5, the data are fairly symmetrical.
# * If the skewness is between -1 and -0.5 (negatively skewed) or between 0.5 and 1 (positively skewed), the data are moderately skewed.
# * If the skewness is less than -1 (negatively skewed) or greater than 1 (positively skewed), the data are highly skewed.
#
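# A small helper encoding the rule of thumb above (added here as a sketch, not part of the original lesson):
def skewness_category(s):
    """Classify a skewness value using the rule of thumb above."""
    if abs(s) < 0.5:
        return "fairly symmetrical"
    elif abs(s) <= 1:
        return "moderately skewed"
    return "highly skewed"
print(skewness_category(0.2), "|", skewness_category(-0.8), "|", skewness_category(2.1))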
# **Example**
#
# Suppose we have house values ranging from $100k to $1,000,000 with the average being $500,000 in a given house prices dataset.
#
#
#
# If the peak of the distribution lies to the left of the average value, the distribution is positively skewed. It would mean that many houses were being sold for less than the average value, i.e. $500k. This could be for many reasons, but we are not going to interpret those reasons here.
#
# If the peak of the distribution lies to the right of the average value, that would mean a negative skew. This would mean that the houses were being sold for more than the average value, as shown below.
# <img src = "homeskew.png" width = 400>
# ## Kurtosis
#
# Kurtosis deals with the lengths of the tails of a distribution. There is a general misconception that kurtosis is a measure of the "peakedness" of a distribution. Kurtosis is not really about peakedness or flatness; it is used to describe the extreme values in one tail versus the other. It is actually a **measure of the outliers** present in the distribution.
# 
#
# Here we see that long tails are generally due to errors in measurement, as the peaks of the data (the correct values) are all centred around the mean of the distribution. Long tails here are a sign that, apart from correct measurements, we also have outliers present in our dataset.
#
# ### Measuring Kurtosis
#
# For univariate data Y<sub>1</sub>, Y<sub>2</sub>, ..., Y<sub>N</sub>, the formula for kurtosis is:
#
# **Kurtosis = (∑<sub>i=1</sub><sup>N</sup> (Y<sub>i</sub> − Ȳ)<sup>4</sup> / N) / s<sup>4</sup>**
#
# If there is high kurtosis, we need to investigate why we have so many outliers.
# The presence of outliers could indicate errors, or some interesting observations that need to be explored further. Remember that for banking transactions an outlier might signify possible fraudulent activity. How we deal with outliers depends mainly on the domain.
# So always Investigate!
#
# Low kurtosis in a data set is an indicator that the data have light tails, i.e. a lack of outliers. If we get suspiciously low kurtosis (too good to be true), we should also investigate and trim the dataset of unwanted results.
#
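# A minimal sketch (an addition to the lesson): note that `scipy.stats.kurtosis` returns *excess* kurtosis by default (about 0 for a normal distribution); pass `fisher=False` to get the definition used above, in which the normal distribution has a kurtosis of 3.
import numpy as np
from scipy.stats import kurtosis
np.random.seed(0)
normal_sample = np.random.normal(size=100_000)
heavy_tails = np.random.standard_t(df=5, size=100_000)    # fatter tails than the normal
print(round(kurtosis(normal_sample, fisher=False), 2))    # close to 3 (mesokurtic)
print(round(kurtosis(heavy_tails, fisher=False), 2))      # well above 3 (leptokurtic)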
# ### How much kurtosis is bad kurtosis?
# 
# #### Mesokurtic
#
# This distribution has kurtosis statistic similar to that of the normal distribution. It means that the extreme values of the distribution are similar to that of a normal distribution characteristic. This definition is used so that the standard normal distribution has a kurtosis of three.
#
# #### Leptokurtic (Kurtosis > 3)
#
# The distribution is longer and its tails are fatter. The peak is higher and sharper than for a mesokurtic distribution, which means that the data are heavy-tailed, with a profusion of outliers.
# Outliers stretch the horizontal axis of the histogram graph, which makes the bulk of the data appear in a narrow (“skinny”) vertical range, thereby giving the “skinniness” of a leptokurtic distribution.
#
# #### Platykurtic: (Kurtosis < 3)
#
# The distribution is shorter and its tails are thinner than those of the normal distribution. The peak is lower and broader than for a mesokurtic distribution, which means that the data are light-tailed and lack outliers.
# The reason for this is that the extreme values are less extreme than those of the normal distribution.
#
#
# ## Summary
#
# In this lesson, we learned about the characteristics of distributions (specifically the normal distribution) that identify the level of non-symmetry and the presence of outliers in the data. Next we shall see how to measure skewness and kurtosis in Python.
# With the output gate result, we can make a decision while remembering old information.
#
# #### Foreword about my Laziness
#
# Ideally we want to combine the data from all the sensors and use one of the alignment algorithms (e.g. [DTW](https://en.wikipedia.org/wiki/Dynamic_time_warping)) to align the time series acquired from all sensors, treating the inputs as one combined input vector. However, I am a little short on time, given that I also have to finish my dissertation write-up by tomorrow, so I will use another set of steps:
#
# 1. Train several LSTMs -- one for every sensor.
# 2. During the evaluation phase, run all of the LSTMs and produce candidate results.
# 3. Use a majority vote to identify the most likely candidate.
# - We might want to give the sensors weights, which might be determined using the cross-validation set. This might be needed because we don't want all the sensors to be equally important; a small sketch of such a weighted vote follows below.
# -
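# A minimal sketch of the weighted majority vote described above (the function name and the example weights are illustrative assumptions, not part of the original code):
import numpy as np
def weighted_majority_vote(sensor_predictions, sensor_weights, n_classes):
    """One predicted class id per sensor; return the class with the largest weighted vote."""
    votes = np.zeros(n_classes)
    for cls, weight in zip(sensor_predictions, sensor_weights):
        votes[cls] += weight          # each sensor casts a vote scaled by its weight
    return int(np.argmax(votes))
# three sensors predict classes 2, 2 and 0; the weights favour the first two sensors
print(weighted_majority_vote([2, 2, 0], [0.5, 0.3, 0.2], n_classes=3))   # -> 2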
# ### Implementation
#
# The code is located in [`lstm.py`](lstm.py). The wrapper methods are shamelessly borrowed from [nivwusquorum](https://gist.github.com/nivwusquorum/).
#
# Below is a snippet of a cell. The code itself is very long; you can open it externally from [`lstm_test1.py`](lstm_test1.py).
# +
# # %load './lstm/lstm.py'
# #!/usr/bin/env python
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
import collections
import tensorflow as tf
from tensorflow.python.ops import variable_scope as vs
from tensorflow.python.ops.math_ops import sigmoid
from tensorflow.python.ops.math_ops import tanh
from tensorflow.python.ops.rnn_cell_impl import _RNNCell as RNNCell
# from tensorflow.contrib.rnn import LSTMStateTuple
from tensorflow.contrib import layers
from tensorflow.contrib.rnn.python.ops.core_rnn_cell_impl import _linear as linear
from tensorflow.python.ops import array_ops
_LSTMStateTuple = collections.namedtuple("LSTMStateTuple", ("c", "h"))
class LSTMStateTuple(_LSTMStateTuple):
"""Tuple used by LSTM Cells for `state_size`, `zero_state`, and output state.
Stores two elements: `(c, h)`, in that order.
"""
__slots__ = ()
@property
def dtype(self):
(c, h) = self
if not c.dtype == h.dtype:
raise TypeError("Inconsistent internal state: %s vs %s" %
(str(c.dtype), str(h.dtype)))
return c.dtype
class LSTMCellOlah(tf.contrib.rnn.RNNCell):
"""
See the following:
- http://arxiv.org/abs/1409.2329
- http://colah.github.io/posts/2015-08-Understanding-LSTMs/
- http://karpathy.github.io/2015/05/21/rnn-effectiveness/
This is a simplified implementation -- it ignores most biases, and it somewhat simplifies the equations shown in the references above.
"""
def __init__(self, num_units, *args, **kwargs):
self._num_units = num_units
@property
def state_size(self):
return LSTMStateTuple(self._num_units, self._num_units)
@property
def output_size(self):
return self._num_units
def __call__(self, inputs, state, scope="LSTM"):
with tf.variable_scope(scope):
c, h = state
concat = linear([inputs, h], 4 * self._num_units, True)
# i = input_gate, j = new_input, f = forget_gate, o = output_gate
i, j, f, o = array_ops.split(value=concat, num_or_size_splits=4, axis=1)
forget_bias = 1.0
new_c = (c * tf.nn.sigmoid(f + forget_bias)
+ tf.nn.sigmoid(i) * tf.nn.tanh(j))
new_h = tf.nn.tanh(new_c) * tf.nn.sigmoid(o)
return new_h, LSTMStateTuple(new_c, new_h)
def zero_state(self, batch_size, dtype=tf.float32, learnable=False, scope="LSTM"):
if learnable:
c = tf.get_variable("c_init", (1, self._num_units),
initializer=tf.random_normal_initializer(dtype=dtype))
h = tf.get_variable("h_init", (1, self._num_units),
initializer=tf.random_normal_initializer(dtype=dtype))
else:
c = tf.zeros((1, self._num_units), dtype=dtype)
h = tf.zeros((1, self._num_units), dtype=dtype)
c = tf.tile(c, [batch_size, 1])
h = tf.tile(h, [batch_size, 1])
c.set_shape([None, self._num_units])
h.set_shape([None, self._num_units])
return (c, h)
# -
# Here are the preliminary results **before cross validation is implemented**
#
# ```
# $> ./lstm_test1.py
# W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE4.1 instructions, but these are available on your machine and could speed up CPU computations.
# W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE4.2 instructions, but these are available on your machine and could speed up CPU computations.
# W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use AVX instructions, but these are available on your machine and could speed up CPU computations.
# W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use AVX2 instructions, but these are available on your machine and could speed up CPU computations.
# W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use FMA instructions, but these are available on your machine and could speed up CPU computations.
# Iter 1280, Minibatch Loss= 0.432134, Training Accuracy= 0.91406
# Iter 2560, Minibatch Loss= 4.235562, Training Accuracy= 0.00000
# Iter 3840, Minibatch Loss= 3.661356, Training Accuracy= 0.00000
# Iter 5120, Minibatch Loss= 2.457803, Training Accuracy= 0.13281
# Iter 6400, Minibatch Loss= 0.385571, Training Accuracy= 1.00000
# Iter 7680, Minibatch Loss= 2.319496, Training Accuracy= 0.42188
# Iter 8960, Minibatch Loss= 0.956249, Training Accuracy= 0.56250
# Iter 10240, Minibatch Loss= 0.188828, Training Accuracy= 0.96875
# Iter 11520, Minibatch Loss= 0.368966, Training Accuracy= 0.96875
# Iter 12800, Minibatch Loss= 0.852489, Training Accuracy= 0.69531
# Iter 14080, Minibatch Loss= 1.551502, Training Accuracy= 0.28906
# Iter 15360, Minibatch Loss= 0.283216, Training Accuracy= 0.99219
# Iter 16640, Minibatch Loss= 0.096124, Training Accuracy= 1.00000
# Iter 17920, Minibatch Loss= 0.876593, Training Accuracy= 0.66406
# Iter 19200, Minibatch Loss= 0.153833, Training Accuracy= 0.96875
# Iter 20480, Minibatch Loss= 0.000833, Training Accuracy= 1.00000
# Iter 21760, Minibatch Loss= 1.047436, Training Accuracy= 0.54688
# Iter 24320, Minibatch Loss= 0.142767, Training Accuracy= 0.96875
# Iter 25600, Minibatch Loss= 0.150670, Training Accuracy= 0.97656
# Iter 26880, Minibatch Loss= 0.521354, Training Accuracy= 0.89062
# Iter 28160, Minibatch Loss= 1.010623, Training Accuracy= 0.59375
# Iter 29440, Minibatch Loss= 0.023121, Training Accuracy= 1.00000
# Iter 30720, Minibatch Loss= 0.965022, Training Accuracy= 0.66406
# Iter 32000, Minibatch Loss= 0.498216, Training Accuracy= 0.85938
# Iter 33280, Minibatch Loss= 0.133977, Training Accuracy= 0.95312
# Iter 34560, Minibatch Loss= 0.079636, Training Accuracy= 0.98438
# Iter 35840, Minibatch Loss= 0.511552, Training Accuracy= 0.82812
# Iter 37120, Minibatch Loss= 0.660538, Training Accuracy= 0.69531
# Iter 38400, Minibatch Loss= 0.019211, Training Accuracy= 1.00000
# Iter 39680, Minibatch Loss= 0.030286, Training Accuracy= 0.99219
# Iter 40960, Minibatch Loss= 0.420285, Training Accuracy= 0.89844
# Iter 42240, Minibatch Loss= 0.124476, Training Accuracy= 0.96875
# Iter 43520, Minibatch Loss= 0.000324, Training Accuracy= 1.00000
# Iter 44800, Minibatch Loss= 0.522065, Training Accuracy= 0.75000
# Iter 47360, Minibatch Loss= 0.050574, Training Accuracy= 0.98438
# Iter 48640, Minibatch Loss= 0.045778, Training Accuracy= 0.99219
# Iter 49920, Minibatch Loss= 0.352735, Training Accuracy= 0.92188
# Iter 51200, Minibatch Loss= 0.556426, Training Accuracy= 0.81250
# Iter 52480, Minibatch Loss= 0.004360, Training Accuracy= 1.00000
# Iter 53760, Minibatch Loss= 0.563033, Training Accuracy= 0.80469
# Iter 55040, Minibatch Loss= 0.202678, Training Accuracy= 0.98438
# Iter 56320, Minibatch Loss= 0.068467, Training Accuracy= 0.98438
# Iter 57600, Minibatch Loss= 0.021069, Training Accuracy= 1.00000
# Iter 58880, Minibatch Loss= 0.218716, Training Accuracy= 0.92969
# Iter 60160, Minibatch Loss= 0.242975, Training Accuracy= 0.94531
# Iter 61440, Minibatch Loss= 0.008358, Training Accuracy= 1.00000
# Iter 62720, Minibatch Loss= 0.016236, Training Accuracy= 0.99219
# Iter 64000, Minibatch Loss= 0.208271, Training Accuracy= 0.93750
# Iter 65280, Minibatch Loss= 0.091329, Training Accuracy= 0.98438
# Iter 66560, Minibatch Loss= 0.000211, Training Accuracy= 1.00000
# Iter 67840, Minibatch Loss= 0.321388, Training Accuracy= 0.88281
# Iter 70400, Minibatch Loss= 0.026148, Training Accuracy= 0.99219
# Iter 71680, Minibatch Loss= 0.014641, Training Accuracy= 1.00000
# Iter 72960, Minibatch Loss= 0.284812, Training Accuracy= 0.94531
# Iter 74240, Minibatch Loss= 0.260307, Training Accuracy= 0.90625
# Iter 75520, Minibatch Loss= 0.001303, Training Accuracy= 1.00000
# Iter 76800, Minibatch Loss= 0.366085, Training Accuracy= 0.85938
# Iter 78080, Minibatch Loss= 0.099403, Training Accuracy= 0.99219
# Iter 79360, Minibatch Loss= 0.061466, Training Accuracy= 0.98438
# Iter 80640, Minibatch Loss= 0.007409, Training Accuracy= 1.00000
# Iter 81920, Minibatch Loss= 0.148185, Training Accuracy= 0.93750
# Iter 83200, Minibatch Loss= 0.132444, Training Accuracy= 0.97656
# Iter 84480, Minibatch Loss= 0.006270, Training Accuracy= 1.00000
# Iter 85760, Minibatch Loss= 0.003511, Training Accuracy= 1.00000
# Iter 87040, Minibatch Loss= 0.124695, Training Accuracy= 0.96875
# Iter 88320, Minibatch Loss= 0.191351, Training Accuracy= 0.95312
# Iter 89600, Minibatch Loss= 0.000156, Training Accuracy= 1.00000
# Iter 90880, Minibatch Loss= 0.235717, Training Accuracy= 0.92969
# Iter 93440, Minibatch Loss= 0.016126, Training Accuracy= 0.99219
# Iter 94720, Minibatch Loss= 0.005329, Training Accuracy= 1.00000
# Iter 96000, Minibatch Loss= 0.236991, Training Accuracy= 0.94531
# Iter 97280, Minibatch Loss= 0.127145, Training Accuracy= 0.95312
# Iter 98560, Minibatch Loss= 0.000689, Training Accuracy= 1.00000
# Iter 99840, Minibatch Loss= 0.206335, Training Accuracy= 0.94531
# Optimization Finished!
# Test Accuracy: 0.85319
# ```
# ---
#
# ## Emergency Early Stopping
#
# Unfortunately, I cannot finish the codes at the moment -- some emergency things came up, and I have to fly to CA tomorrow. Before that I have to submit a draft of my dissertation, which I still need to update. That means I cannot finish the project without violating the time constraints.
#
# ### TODO list
#
# To avoid closure problems, I am not deleting the todo list that I was planning to finish before Tuesday:
#
# 1. Write simple time sampling routine just to check the NNs
# 2. Implement simple version of RF
# 3. Implement Feature Budgeted RF (would be useful for mobile applications). Pick either adaptive classifier or FoG.
# 4. Implement RNN: LSTM would be cool, recurrent Bernoulli RBM would be Godlike!
# 5. Change the time sampling into a running window across all sensors
# - Combining all 4 sensors into one feature vector would increase the dimensionality of the problem. The time offset between samples could be mitigated either by repetition of missing data, or by taking a `max` window over several samples with the same label.
# 6. Implement a hyperparameter sweep using CV data to identify the best hyperparameters for the LSTM. It would have taken a long time to sweep through, but it is possible.
# 7. Change the data splitting routine to allow for k-folding for cross validation. Currently CV data is static, and if we try to optimize for it, we will bleed a lot of information into the training routine, and we might overfit.
# 8. Leave the training routine running on enggrid cluster
#
#
| 19,456 |
/.ipynb_checkpoints/testing_III_group-checkpoint.ipynb
|
9fd42ea615ada886523ba33e7531b54796adcf57
|
[] |
no_license
|
FeaxLovesGit/science_work
|
https://github.com/FeaxLovesGit/science_work
| 0 | 0 | null | null | null | null |
Jupyter Notebook
| false | false |
.py
| 193,067 |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.15.2
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# July 20, 2018
# I verify this by running it 1000 times in a loop on different data.
# +
# %reload_ext autoreload
import pandas as pd
import numpy as np
from sklearn.linear_model import LogisticRegression
from make_ready_data.make_ready_big_data import read_data, get_label, get_feat
from make_ready_data.make_ready_big_data import look_through_info, get_train_and_test_sets
from make_ready_data.make_ready_short_data import get_short_test_data_from_Roman
from make_ready_data.make_ready_short_data import get_short_test_data_from_Svetlana_G
from make_ready_data.create_features_indeces import get_labels_and_features_indeces
from make_ready_data.create_features_indeces import build_plot_for_best_features
# +
info, final_filelist = read_data()
data_sick, data_heal, inds_sick, inds_heal = look_through_info(info, final_filelist)
plot_labels, indlist = get_labels_and_features_indeces()
# -
test_patients_data = get_short_test_data_from_Svetlana_G(dirpath='../data/test_data/')
test_patients_data.append(('Roman_patient', get_short_test_data_from_Roman()))
def extraxt_samples_from_new_test_data(test_patients_data):
patients_samples = np.empty((0, 5))
for data in test_patients_data:
patients_samples = np.concatenate((patients_samples, data[1]))
return patients_samples
test_patients_samples = extraxt_samples_from_new_test_data(test_patients_data)
def testing_all_new_test_samples(data_sick, data_heal, indlist, test_patients_samples):
all_data = np.concatenate((data_heal, data_sick))
res = np.zeros(len(indlist))
for i, inds in enumerate(indlist):
clf = LogisticRegression(C=1000.)
clf.fit(all_data[:, inds], all_data[:, -1])
res[i] = clf.predict(test_patients_samples[:, inds]).sum() / test_patients_samples.shape[0]
return res
res_for_samples = testing_all_new_test_samples(data_sick, data_heal, indlist, test_patients_samples)
def testing_all_new_test_patients(data_sick, data_heal, indlist, test_patients_data):
all_data = np.concatenate((data_heal, data_sick))
res_for_patients = np.zeros(len(indlist))
for i, inds in enumerate(indlist):
print(inds)
clf = LogisticRegression(C=1000.)
clf.fit(all_data[:, inds], all_data[:, -1])
for j in range(len(test_patients_data)):
res = clf.predict(test_patients_data[j][1][:, inds])
log_ans = round(res.sum() / res.shape[0] + 0.0001)
print(test_patients_data[j][0], int(log_ans), res)
res_for_patients[i] += log_ans
print()
res_for_patients = res_for_patients / len(test_patients_data)
return res_for_patients
res_for_patients = testing_all_new_test_patients(data_sick, data_heal, indlist, test_patients_data)
print(plot_labels)
res_for_patients
build_plot_for_best_features(plot_labels, indlist,
y=[res_for_samples, res_for_patients],
y_arange=np.arange(0.85, 1.00 ,0.01),
marker=[':ro', ':go'],
legend_label=['результаты относительно сэмплов, C=1000',
'результаты относительно пациентов, C=1000'])
# +
test_patients_data = get_short_test_data_from_Svetlana_G("data/test_data")
n_tests = 1000
all_data = np.concatenate((data_heal, data_sick))
np.random.seed(0)
np.random.shuffle(all_data)
for data in test_patients_data:
print(data[0])
res1000 = np.zeros(data[1].shape[0])
for _ in range(n_tests):
tr_d, ts_d, tr_l, ts_l, _ = get_train_and_test_sets(data_sick, data_heal, inds_sick, inds_heal, 1.)
clf = LogisticRegression(C=1000.)
clf.fit(tr_d, tr_l)
res1000 = res1000 + clf.predict(data[1])
res1000 = res1000 / n_tests
print(res1000)
print()
| 4,072 |
/Day001_HW.ipynb
|
ac8cedd7a8d1e6c873fccf75771140ac048426e5
|
[] |
no_license
|
YangKD/-1st-Python-crawler
|
https://github.com/YangKD/-1st-Python-crawler
| 0 | 0 | null | null | null | null |
Jupyter Notebook
| false | false |
.py
| 4,552 |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.15.2
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Data Sources and File Access
#
# * Data sources and how to obtain them
# * Open data
# * Data storage formats
# * Accessing files with Python
# ## Homework Goals
#
# * 1. (Short answer) How do the three ways of obtaining data (files, APIs, crawlers) differ?
# * 2. (Implementation) Write a program that satisfies the following requirements:
# * Download the specified file into the Data folder and save it as Homework.txt
# * Check whether the Data folder contains a file named Homework.txt
# * Overwrite the Homework.txt file with the string "Hello World"
# * Check whether the character count of Homework.txt matches that of "Hello World"
#
# ### 1. (Short answer) How do the three ways of obtaining data (files, APIs, crawlers) differ?
"""
Files are usually packaged in a standard format, such as CSV or JSON, which makes the data easier to process.
An API provides a programmatic interface, so engineers can directly download just the part they need.
A crawler scrapes the required data directly from a website; this is more complex and may raise copyright concerns.
"""
# ### 2. (Implementation) Write a program that satisfies the following requirements:
# * Download the specified file into the Data folder and save it as Homework.txt
# * Check whether the Data folder contains a file named Homework.txt
# * Overwrite the Homework.txt file with the string "Hello World"
# * Check whether the character count of Homework.txt matches that of "Hello World"
#
# +
# Import the libraries required for the task
from urllib.request import urlretrieve
import os
# +
# Download the file into the Data folder and save it as Homework.txt
try:
    os.makedirs('./Data', exist_ok=True)
    urlretrieve('https://www.w3.org/TR/PNG/iso_8859-1.txt', './Data/Homework.txt')
except:
    print('An error occurred!')
# +
# Check whether the Data folder contains a file named Homework.txt
files = []
files = os.listdir('./Data')
if 'Homework.txt' in files:
    print('[O] Check: the Data folder contains a file named Homework.txt')
else:
    print('[X] Check: the Data folder contains a file named Homework.txt')
# +
# Overwrite the Homework.txt file with the string "Hello World"
f = ''
with open("./Data/Homework.txt", "w") as fh:
    fh.write('Hello World')
try:
    with open("./Data/Homework.txt", "r") as fh:
        f = fh.read()
except EnvironmentError: # parent of IOError, OSError *and* WindowsError where available
    pass
# +
# Check whether the character count of Homework.txt matches that of "Hello World"
if len('Hello World') == len(f):
    print('[O] Check: the character count of Homework.txt matches that of Hello World')
else:
    print('[X] Check: the character count of Homework.txt matches that of Hello World')
# -
df['family_history']=df['family_history'].replace('No',0)
# df['family_history']=df['family_history'].replace('Yes',1)
# df['treatment']=df['treatment'].replace('No',0)
# df['treatment']=df['treatment'].replace('Yes',1)
# df['remote_work']=df['remote_work'].replace('No',0)
# df['remote_work']=df['remote_work'].replace('Yes',1)
# df['tech_company']=df['tech_company'].replace('No',0)
# df['tech_company']=df['tech_company'].replace('Yes',1)
# -
df
df['benefits'].unique()
df['benefits'].value_counts()
# +
# df['benefits']=df['benefits'].replace('No',0)
# df['benefits']=df['benefits'].replace('Yes',1)
# df['benefits']=df['benefits'].replace("Don't know",2)
# -
df['care_options'].unique()
df['care_options'].value_counts()
# +
# df['care_options']=df['care_options'].replace('No',0)
# df['care_options']=df['care_options'].replace('Yes',1)
# df['care_options']=df['care_options'].replace('Not sure',2)
# -
df['wellness_program'].unique()
df['wellness_program'].value_counts()
# +
# df['wellness_program']=df['wellness_program'].replace('No',0)
# df['wellness_program']=df['wellness_program'].replace('Yes',1)
# df['wellness_program']=df['wellness_program'].replace("Don't know",2)
# -
df['seek_help'].unique()
# +
# df['seek_help']=df['seek_help'].replace('No',0)
# df['seek_help']=df['seek_help'].replace('Yes',1)
# df['seek_help']=df['seek_help'].replace("Don't know",2)
# -
df['leave'].unique()
df['leave'].value_counts()
# df['leave']=df['leave'].replace("Don't know",0)
# df['leave']=df['leave'].replace('Somewhat easy',1)
# df['leave']=df['leave'].replace('Very easy',2)
# df['leave']=df['leave'].replace('Somewhat difficult',3)
# df['leave']=df['leave'].replace('Very difficult',4)
df['mental_health_consequence'].unique()
df['mental_health_consequence'].value_counts()
# +
# df['mental_health_consequence']=df['mental_health_consequence'].replace('No',0)
# df['mental_health_consequence']=df['mental_health_consequence'].replace('Yes',1)
# df['mental_health_consequence']=df['mental_health_consequence'].replace('Maybe',2)
# -
df['phys_health_consequence'].unique()
# +
# df['phys_health_consequence']=df['phys_health_consequence'].replace('No',0)
# df['phys_health_consequence']=df['phys_health_consequence'].replace('Yes',1)
# df['phys_health_consequence']=df['phys_health_consequence'].replace('Maybe',2)
# -
df.head()
df.dtypes
# +
df['treatment'] = df['treatment'].replace('No', 0)
df['treatment'] = df['treatment'].replace('Yes', 1)
# -
corrMatrix = df.corr()
fig, ax = plt.subplots(figsize=(13,15))
sns.heatmap(corrMatrix, annot=True, linewidths=.5, ax=ax)
plt.show()
corrMatrix = df.corr()
fig, ax = plt.subplots(figsize=(13,15))
sns.heatmap(corrMatrix, annot=True, linewidths=.5, ax=ax)
plt.show()
df_features=df.drop(['treatment'],axis=1)
df_target=df['treatment']
df_dummy=pd.get_dummies(df_features,drop_first=True)
# # PREDICTION
X=df_dummy
y=pd.DataFrame(df_target)
X
from sklearn.model_selection import train_test_split
# +
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.30, random_state = 10)
# check the dimensions of the train & test subset using 'shape'
# print dimension of train set
print("X_train",X_train.shape)
print("y_train",y_train.shape)
# print dimension of test set
print("X_test",X_test.shape)
print("y_test",y_test.shape)
# -
# # SVM
from sklearn.svm import SVC
# +
svclassifier = SVC(kernel = 'linear')
# fit the model
svc_model=svclassifier.fit(X_train, y_train)
# -
y_predsvm = svclassifier.predict(X_test)
svclassifier.score(X_test,y_test)
svclassifier.predict(X_train.iloc[0:10])
print(y_train.iloc[0:10])
type(X_train)
# +
# import pickle
# with open('model.pickle','wb') as f:
# pickle.dump(svclassifier,f)
# -
| 5,736 |
/20190420-空氣盒子數據風向與空污/.ipynb_checkpoints/PM2.5與風向之關聯性-checkpoint.ipynb
|
b4b4e94746f7ce14c22255df0b21914e3afbb28f
|
[] |
no_license
|
Yuchen-ck/Python-PM2.5-DataAnalyzing
|
https://github.com/Yuchen-ck/Python-PM2.5-DataAnalyzing
| 0 | 0 | null | null | null | null |
Jupyter Notebook
| false | false |
.py
| 194,436 |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.15.2
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import numpy as np
print("Sangmin Lee")
as plt
# %matplotlib inline
df=pd.read_excel('abc.xlsx')
x=df.WindDirec
y=df.PM25
plt.scatter(x,y)
# #### Because of formatting problems in the government's open data, visualizing wind direction together with PM2.5 is difficult (wind direction is only provided in the hourly data)
# #### The relationship between wind direction and PM2.5 will therefore be examined mainly through the literature
# # Papers
# ### The Impact of Meteorological Factors on PM2.5 Variations in Hong Kong
# > The north wind in winter increased the PM2.5 in Hong Kong while the south wind in summer decrease the PM2.5.
# <https://iopscience.iop.org/article/10.1088/1755-1315/78/1/012003>
# ### Effects of Wind Direction on Coarse and Fine ParticulateMatter Concentrations in Southeast Kansas
# >The effect of wind direction was found to be statisticallysignificant for both PM10and PM2.5.
# <https://www.tandfonline.com/doi/pdf/10.1080/10473289.2006.10464559>
# ### Effects of Meteorological Conditions on PM2.5 Concentrations in Nagasaki, Japan
from IPython.display import Image
from IPython.core.display import HTML
PATH = "PM25.png" #圖片路徑
Image(filename = PATH , width=600, height=600)
from IPython.display import Image
from IPython.core.display import HTML
PATH2 = "wind-direction.png" #圖片路徑
Image(filename = PATH2 , width=600, height=600)
| 1,486 |
/SubsurfaceDataAnalytics_DataFrame.ipynb
|
dbc875f0d49ceabe9c05bdc5091a9df86a35804b
|
[
"MIT"
] |
permissive
|
victsnet/PythonNumericalDemos
|
https://github.com/victsnet/PythonNumericalDemos
| 2 | 0 |
MIT
| 2019-08-24T13:31:04 | 2019-08-22T22:23:31 | null |
Jupyter Notebook
| false | false |
.py
| 74,762 |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.15.2
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
import requests
from bs4 import BeautifulSoup
from bokeh.plotting import figure, curdoc, show
from bokeh.models.formatters import DatetimeTickFormatter
from functools import partial
from bokeh.layouts import column
from bokeh.io import curdoc
from random import randint
from colorama import Fore, Style
# from selenium import webdriver
from selenium.webdriver.common.keys import Keys
from selenium.webdriver.common.by import By
import time, random
from bokeh.models import DatetimeTickFormatter, ColumnDataSource, ContinuousColorMapper, WheelZoomTool, HoverTool, LinearColorMapper, Legend, LegendItem, SingleIntervalTicker, DatetimeTicker
from datetime import datetime
# from bokeh.palettes import Viridis256
from bokeh.transform import linear_cmap
from bokeh.palettes import Turbo256
# from bokeh.palettes import Category10, Category20, Category20b, Category20c
from bokeh.plotting import figure, show
from tqdm import tqdm
import pandas as pd
# from selenium.webdriver.support.wait import WebDriverWait
# from selenium.webdriver.support import expected_conditions as EC
# from datetime import datetime, timedelta
import re
from requests.adapters import HTTPAdapter
from requests.packages.urllib3.util.retry import Retry
import pickle
from functools import lru_cache
# +
@lru_cache(maxsize=10)
def get_all_data():
user_agents = [
"Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/91.0.4472.124 Safari/537.36",
"Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:89.0) Gecko/20100101 Firefox/89.0",
"Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/91.0.4472.124 Safari/537.36",
"Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/91.0.4472.124 Safari/537.36",
"Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:89.0) Gecko/20100101 Firefox/89.0",
"Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/91.0.4472.124 Safari/537.36",
"Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/91.0.4472.124 Safari/537.36",
"Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/91.0.4472.124 Safari/537.36",
"Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:89.0) Gecko/20100101 Firefox/89.0",
"Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/91.0.4472.124 Safari/537.36",
"Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:89.0) Gecko/20100101 Firefox/89.0",
"Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/91.0.4472.124 Safari/537.36",
"Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:89.0) Gecko/20100101 Firefox/89.0",
"Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/91.0.4472.124 Safari/537.36",
"Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:89.0) Gecko/20100101 Firefox/89.0",
"Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/91.0.4472.124 Safari/537.36",
"Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:89.0) Gecko/20100101 Firefox/89.0",
"Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/91.0.4472.124 Safari/537.36",
"Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:89.0) Gecko/20100101 Firefox/89.0",
"Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/91.0.4472.124 Safari/537.36",
"Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:89.0) Gecko/20100101 Firefox/89.0",
"Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/91.0.4472.124 Safari/537.36",
"Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:89.0) Gecko/20100101 Firefox/89.0",
"Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/91.0.4472.124 Safari/537.36",
"Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:89.0) Gecko/20100101 Firefox/89.0",
# Add more user agents here...
# ...
]
# Randomly select a user agent
random_user_agent = random.choice(user_agents)
# Create a session object
session = requests.Session()
# Set the user agent as a header for the session
session.headers.update({'User-Agent': random_user_agent,
'Accept': 'text/plain'})
url = 'https://www.dsebd.org/latest_share_price_scroll_l.php'
# Create a Retry object with desired parameters
retry_strategy = Retry(
total=3, # Maximum number of retries
backoff_factor=1, # Exponential backoff factor
status_forcelist=[500, 502, 503, 504] # HTTP status codes to retry on
)
https_adapter = HTTPAdapter(max_retries=retry_strategy, )
session.mount('https://', https_adapter)
# Make a request using the session
response = session.get(url, timeout=60)
# soup = BeautifulSoup(response.text)
html_content = response.content
# Parse the HTML content with BeautifulSoup
soup = BeautifulSoup(html_content, 'html.parser')
all_tr = soup.findAll('div', class_='table-responsive inner-scroll')[0].findAll('tr')
return all_tr
# @lru_cache(maxsize=10)
def get_co_data(co_name, all_tr, debug=False):
tmp_c = 0
for j in all_tr[1:]:
curr_tr_data = j.findAll('td')
for i in range(len(curr_tr_data)):
if i == 0:
if debug==True:
print(str(i) , " : ", curr_tr_data[i])
serial=curr_tr_data[i].text
pass
if i % 10 == 0:
if debug==True:
print(str(i) , " : ", curr_tr_data[i])
if curr_tr_data[i].text == '--':
volume = 0
else:
volume = float(curr_tr_data[i].text.replace(',', ''))
pass
elif i % 9 == 0:
if debug==True:
print(str(i) , " : ", curr_tr_data[i])
if curr_tr_data[i].text == '--':
value_of_trades = 0
else:
value_of_trades = float(curr_tr_data[i].text.replace(',', ''))
pass
elif i % 8 == 0:
if debug==True:
print(str(i) , " : ", curr_tr_data[i])
if curr_tr_data[i].text == '--':
number_of_trades = 0
else:
number_of_trades = float(curr_tr_data[i].text.replace(',', ''))
pass
elif i % 7 == 0:
if debug==True:
print(str(i) , " : ", curr_tr_data[i])
if curr_tr_data[i].text == '--':
percentage_change = 0
else:
percentage_change = float(curr_tr_data[i].text.replace(',', ''))
pass
elif i % 6 == 0:
if debug==True:
print(str(i) , " : ", curr_tr_data[i])
if curr_tr_data[i].text == '--':
yesterday_close_price = 0
else:
yesterday_close_price = float(curr_tr_data[i].text.replace(',', ''))
pass
elif i % 5 == 0:
if debug==True:
print(str(i) , " : ", curr_tr_data[i])
if curr_tr_data[i].text == '--':
close_price = 0
else:
close_price = float(curr_tr_data[i].text.replace(',', ''))
pass
elif i % 4 == 0:
if debug==True:
print(str(i) , " : ", curr_tr_data[i])
if curr_tr_data[i].text == '--':
last_low = 0
else:
last_low = float(curr_tr_data[i].text.replace(',', ''))
pass
elif i % 3 == 0:
if debug==True:
print(str(i) , " : ", curr_tr_data[i])
if curr_tr_data[i].text == '--':
last_high = 0
else:
last_high = float(curr_tr_data[i].text.replace(',', ''))
pass
elif i % 2 == 0:
if debug==True:
print(str(i) , " : ", curr_tr_data[i])
if curr_tr_data[i].text == '--':
last_trading_price = 0
else:
last_trading_price = float(curr_tr_data[i].text.replace(',', ''))
pass
elif i % 1 == 0:
if debug==True:
print(str(i) , " : ", curr_tr_data[i])
if curr_tr_data[i].text == '--':
trading_code = 'None'
else:
trading_code = re.sub(r'[^a-zA-Z0-9]', '', curr_tr_data[i].text)
pass
val_1 = trading_code + '_Serial'
val_2 = trading_code + '_Company_Name'
val_3 = trading_code + '_LTP'
val_4 = trading_code + '_LH'
val_5 = trading_code + '_LL'
val_6 = trading_code + '_CP'
val_7 = trading_code + '_YCP'
val_8 = trading_code + '_PC'
val_9 = trading_code + '_NoT'
val_10 = trading_code + '_VoT'
val_11 = trading_code + '_Vol'
val_12 = trading_code + '_Size'
val_1 = trading_code + '_Serial'
val_2 = trading_code + '_Company_Name'
val_3 = trading_code + '_LTP'
val_4 = trading_code + '_LH'
val_5 = trading_code + '_LL'
val_6 = trading_code + '_CP'
val_7 = trading_code + '_YCP'
val_8 = trading_code + '_PC'
val_9 = trading_code + '_NoT'
val_10 = trading_code + '_VoT'
val_11 = trading_code + '_Vol'
val_12 = trading_code + '_Size'
sz = 30 + float(value_of_trades/100)
# vot = value_of_trades
# if vot == 0:
# vot = 1
# sz = 30 + (1.0 / float(vot))
curr_co_data = {
val_1:[serial],
val_2:[trading_code],
val_3:[last_trading_price],
val_4:[last_high],
val_5:[last_low],
val_6:[close_price],
val_7:[yesterday_close_price],
val_8:[percentage_change],
val_9:[number_of_trades],
val_10:[value_of_trades],
val_11:[volume],
val_12:[sz],
}
# continue
tmp_c=tmp_c+1
if trading_code == co_name:
if debug==True:
print('Breaking')
return curr_co_data
break
# -
all_tr = get_all_data()
company_lst = ['BEXIMCO', 'EHL', 'STANDBANKL',
'ISLAMIINS']
# +
tmp = []
for i in range(len(company_lst)):
tmp.append(get_co_data(company_lst[i], all_tr))
# -
tmp
# +
import pandas as pd
# Define the column names
columns = ['LTP', 'LH', 'LL', 'CP', 'YCP', 'PC', 'NoT', 'VoT', 'Vol', 'Size']
# Create an empty DataFrame
df = pd.DataFrame(columns=columns)
all_data_list = tmp
company_name = company_lst
# Loop over the company list
for company_data in all_data_list:
company_name = list(company_data.keys())[0].split('_')[0]
company_values = [company_data[company_name + '_' + column][0] for column in columns]
df.loc[company_name] = company_values
# Print the resulting DataFrame
print(df)
# -
df
# +
import pandas as pd
# Define the column names
columns = ['LTP', 'LH', 'LL', 'CP', 'YCP', 'PC', 'NoT', 'VoT', 'Vol', 'Size', 'PCL']
# Create an empty DataFrame
df = pd.DataFrame(columns=columns)
all_data_list = tmp
company_name = company_lst
# Loop over the company list
for company_data in all_data_list:
company_name = list(company_data.keys())[0].split('_')[0]
company_values = [company_data.get(company_name + '_' + column, [None])[0] for column in columns[:-1]] # Exclude the last column "PCL"
company_values.append(None) # Add None to represent the PCL column temporarily
df.loc[len(df)] = company_values
# Get the PC values from previously collected data
previous_pc_values = df['PC'].values.tolist()
# Replace the temporary None value in the last column with the previous PC values
df['PCL'] = previous_pc_values
# Print the resulting DataFrame
print(df)
# -
df
# +
# Loop over the company list
for company_data in all_data_list:
company_name = list(company_data.keys())[0].split('_')[0]
company_values = [company_data.get(company_name + '_' + column, [None])[0] for column in columns[:-1]] # Exclude the last column "PCL"
company_values.append(None) # Add None to represent the PCL column temporarily
df.loc[len(df)] = company_values
# Get the PC values from previously collected data
previous_pc_values = df['PC'].values.tolist()
# Replace the temporary None value in the last column with the previous PC values
df['PCL'] = previous_pc_values
# Print the resulting DataFrame
print(df)
# -
df
# +
import pandas as pd
# Define the column names
columns = ['LTP', 'LH', 'LL', 'CP', 'YCP', 'PC', 'NoT', 'VoT', 'Vol', 'Size', 'PCL']
# Create an empty DataFrame
df = pd.DataFrame(columns=columns)
all_data_list = tmp
company_name = company_lst
# Create an empty list to store previous PC values
previous_pc_values = []
# Loop over the company list
for company_data in all_data_list:
company_name = list(company_data.keys())[0].split('_')[0]
company_values = [company_data.get(company_name + '_' + column, [None])[0] for column in columns[:-1]] # Exclude the last column "PCL"
company_values.append(previous_pc_values.copy()) # Append a copy of previous_pc_values to represent the PCL column
df.loc[len(df)] = company_values
# Update previous_pc_values with the current PC value
previous_pc_values.append(company_data.get(company_name + '_PC', [None])[0])
# Print the resulting DataFrame
print(df)
# +
import pandas as pd
# Define the column names
columns = ['LTP', 'LH', 'LL', 'CP', 'YCP', 'PC', 'NoT', 'VoT', 'Vol', 'Size']
# Create an empty DataFrame
df = pd.DataFrame(columns=columns)
all_data_list = tmp
company_name_list = company_lst
# Create an empty DataFrame to store the PCL values
pcl_df = pd.DataFrame(columns=['PCL'])
# Loop over the company list
for company_data in all_data_list:
company_name = list(company_data.keys())[0].split('_')[0]
company_values = [company_data.get(company_name + '_' + column, [None])[0] for column in columns]
df.loc[len(df)] = company_values
# Get the previous PC values for the current row
previous_pc_values = pcl_df.iloc[len(df) - 1, 0]
# Append the current PC value to the previous PC values
previous_pc_values.append(company_data.get(company_name + '_PC', [None])[0])
# Update the PCL column in the DataFrame
pcl_df.loc[len(df) - 1, 'PCL'] = previous_pc_values
# Concatenate the PCL DataFrame with the main DataFrame
df = pd.concat([df, pcl_df], axis=1)
# Print the resulting DataFrame
print(df)
# +
import pandas as pd
def generate_df(company_lst, all_data_dict_lst):
# Define the column names
columns = ['LTP', 'LH', 'LL', 'CP', 'YCP', 'PC', 'NoT', 'VoT', 'Vol', 'Size']
# Create an empty DataFrame
df = pd.DataFrame(columns=columns)
all_data_list = tmp
company_name = company_lst
# Loop over the company list
for company_data in all_data_list:
company_name = list(company_data.keys())[0].split('_')[0]
company_values = [company_data[company_name + '_' + column][0] for column in columns]
df.loc[company_name] = company_values
return df.assign(PCL='')
# Print the resulting DataFrame
# print(df)
df = generate_df(company_lst, tmp)
# -
df
df.columns
def update_pcl_column(df):
pcl_values = [] # Empty list to store PCL values
for index, row in df.iterrows():
pcl_list = row['PCL'] # Get the existing PCL list
if pd.isnull(pcl_list): # Check if PCL list is empty
pcl_list = [row['PC']] # Create a new list with PC value
else:
pcl_list.append(row['PC']) # Append PC value to existing list
pcl_values.append(pcl_list) # Store the updated PCL list
df['PCL'] = pcl_values # Update the PCL column in the DataFrame
return df
updated_df = update_pcl_column(df)
print(updated_df)
# +
import pandas as pd
def process_dataframe(df):
for index, row in df.iterrows():
pcl_list = row['PCL']
pc_value = row['PC']
if not isinstance(pcl_list, list):
pcl_list = [pc_value]
else:
pcl_list.append(pc_value)
df.at[index, 'PCL'] = pcl_list
return df
updated_df = process_dataframe(df)
print(updated_df)
# -
df
updated_df
# +
def process_dataframe(df1, df2):
for index, row in df2.iterrows():
pcl_list = row['PCL']
pc_value = df1.at[index, 'PC']
if not isinstance(pcl_list, list):
pcl_list = [pc_value]
else:
pcl_list.append(pc_value)
df2.at[index, 'PCL'] = pcl_list
return df2
process_dataframe(df, df)
# -
updated_df
df
| 17,843 |
/hw2/gradient_descent.ipynb
|
0e8d833493b4c09760e176906abefa45f7a047e5
|
[] |
no_license
|
Richardwang0326/machine_learning
|
https://github.com/Richardwang0326/machine_learning
| 1 | 1 | null | null | null | null |
Jupyter Notebook
| false | false |
.py
| 5,292 |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.15.2
# kernelspec:
# display_name: Python [conda env:machine_learning]
# language: python
# name: conda-env-machine_learning-py
# ---
# +
# library
import numpy as np
import csv
import matplotlib.pyplot as plt
def loading_data():
lnchprg = []
math10 = []
lexpend = []
# load needed data
with open('MEAP93.csv', newline = '') as csvfile:
# stored as dict
rows = csv.DictReader(csvfile)
# load required data
for row in rows:
lnchprg.append((float)(row['lnchprg']))
math10.append((float)(row['math10']))
lexpend.append((float)(row['lexpend']))
# setting initial data
y = (np.array(math10)).reshape(-1, 1)
x = np.zeros((len(lnchprg),3))
for i in range(len(lnchprg)):
x[i,0] = 1
x[i,1] = lexpend[i]
x[i,2] = lnchprg[i]
return x, y
# -
def train(x, y, estimated, learning_rate, error):
# setting parameter
k=0
error_initial = 1
sse = 0
# if error converges, stop renew weight
while (error_initial > error):
# iteration
k += 1
# calculate predict value
y_predicted = np.dot(x, estimated)
# calculate gradient
out_error = y_predicted - y
gradients = 2 * np.dot(x.T, out_error)
# renew weight
origin = estimated
estimated = estimated - gradients * learning_rate
# calculate sse and error
sse = np.sum(out_error ** 2, axis = 0)
error_initial = np.sqrt(np.sum((origin-estimated)**2, axis = 0))
return estimated, k, sse
if __name__=="__main__":
# loading data
x_actual, y_actual = loading_data()
# setting parameter
initial_value = np.array([1,1,1],dtype = float).reshape(-1,1)
learning_rate_a = 1 * 10 ** (-10)
learning_rate_b = 1 * 10 ** (-5)
tolerance = 1 * 10 ** (-5)
# training
x_a, k_a, sse_a = train(x_actual, y_actual, initial_value, learning_rate_a, tolerance)
x_b, k_b, sse_b = train(x_actual, y_actual, initial_value, learning_rate_b, tolerance)
print("(a) k=%d, sse=%.4f \n" %(k_a, sse_a))
print(x_a)
print("\n(b) k=%d, sse=%.4f\n" %(k_b, sse_b))
print(x_b)
# Below are some examples of the convenience functions provided.
# + [markdown] id="RVuqWUXPQSHa"
# Long running python processes can be interrupted. Run the following cell and select **Runtime -> Interrupt execution** (*hotkey: Cmd/Ctrl-M I*) to stop execution.
# + cellView="both" colab={"height": 244} id="d-S-3nYLQSHb" outputId="38d534fc-8b61-4f9f-d731-74b23aadb5bc"
import time
print("Sleeping")
time.sleep(30) # sleep for a while; interrupt me!
print("Done Sleeping")
# + [markdown] id="Wej_mEyXQSHc"
# ## System aliases
#
# Jupyter includes shortcuts for common operations, such as ls:
# + cellView="both" colab={"height": 323} id="5OCYEvK5QSHf" outputId="2ee8ae66-72ed-425c-e20b-bc5ac307ef3e"
# !ls /bin
# + [markdown] id="y8Da6JWKQSHh"
# That `!ls` probably generated a large output. You can select the cell and clear the output by either:
#
# 1. Clicking on the clear output button (x) in the toolbar above the cell; or
# 2. Right clicking the left gutter of the output area and selecting "Clear output" from the context menu.
#
# Execute any other process using `!` with string interpolation from python variables, and note the result can be assigned to a variable:
# + cellView="both" colab={"height": 35} id="zqGrv0blQSHj" outputId="4970b019-8ae1-47d3-cbc9-ebdb69ee1031"
message = 'Colaboratory is great!'
# foo = !echo -e '$message\n$message'
foo
# + [markdown] id="qM4myQGfQboQ"
# ## Magics
# Colaboratory shares the notion of magics from Jupyter. There are shorthand annotations that change how a cell's text is executed. To learn more, see [Jupyter's magics page](http://nbviewer.jupyter.org/github/ipython/ipython/blob/1.x/examples/notebooks/Cell%20Magics.ipynb).
#
# + cellView="both" colab={"height": 38} id="odfM-_GxWbCy" outputId="3f059816-dc25-4670-ca46-e2ee50a9490e" language="html"
# <marquee style='width: 30%; color: blue;'><b>Whee!</b></marquee>
# + colab={"height": 221} id="_YrTcK7k22Fp" outputId="bb3a69dd-49b7-4a6c-966a-64f77007a525" language="html"
# <svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 450 400" width="200" height="200">
# <rect x="80" y="60" width="250" height="250" rx="20" style="fill:red; stroke:black; fill-opacity:0.7" />
# <rect x="180" y="110" width="250" height="250" rx="40" style="fill:blue; stroke:black; fill-opacity:0.5;" />
# </svg>
# + [markdown] id="d4L9TOP9QSHn"
# ## Tab-completion and exploring code
#
# Colab provides tab completion to explore attributes of Python objects, as well as to quickly view documentation strings. As an example, first run the following cell to import the [`numpy`](http://www.numpy.org) module.
# + cellView="both" id="Q0JKWcmtQSHp"
import numpy as np
# + [markdown] id="1M890-bXeyYp"
# If you now insert your cursor after ``np.random.`` and press **Tab**, you will see the list of available completions within the ``np.random`` submodule.
# + cellView="both" id="j6QRIfUHQSHq"
np.random.
# + [markdown] id="g6MfomFhQSHs"
# If you type an open parenthesis followed by the **Tab** key after any function or class in the module, you will see a pop-up of its documentation string:
# + cellView="both" id="SD0XnrVhQSHt"
np.random.rand(
# + [markdown] id="9ReRLQaxJ-zP"
# To open the documentation in a persistent pane at the bottom of your screen, add a **?** after the object or method name and execute the cell using **Shift+Enter**:
# + cellView="both" colab={"base_uri": "https://localhost:8080/"} id="YgQ6Tu7DK17l" outputId="d87a5d17-abe5-4ee5-a27d-07e1581df4a6"
# np.random?
# + [markdown] id="TYTBdJXxfqiJ"
# ## Exception Formatting
# + [markdown] id="4bqAVK-aQSHx"
# Exceptions are formatted nicely in Colab outputs:
# + cellView="both" id="CrJf1PEmQSHx" outputId="a8c7a413-a1e6-49f1-c7b5-58986eff5047"
x = 1
y = 4
z = y/(1-x)
# + [markdown] id="7cRnhv_7N4Pa"
# ## Rich, interactive outputs
# Until now all of the generated outputs have been text, but they can be more interesting, like the chart below.
# + colab={"height": 371} id="JVXnTqyE9RET" outputId="a20709c6-9773-49e9-bc49-1e8468ea990b"
import numpy as np
from matplotlib import pyplot as plt
ys = 200 + np.random.randn(100)
x = [x for x in range(len(ys))]
plt.plot(x, ys, '-')
plt.fill_between(x, ys, 195, where=(ys > 195), facecolor='g', alpha=0.6)
plt.title("Fills and Alpha Example")
plt.show()
# + [markdown] id="aro-UJgUQSH1"
# # Integration with Drive
#
# Colaboratory is integrated with Google Drive. It allows you to share, comment, and collaborate on the same document with multiple people:
#
# * The **SHARE** button (top-right of the toolbar) allows you to share the notebook and control permissions set on it.
#
# * **File->Make a Copy** creates a copy of the notebook in Drive.
#
# * **File->Save** saves the File to Drive. **File->Save and checkpoint** pins the version so it doesn't get deleted from the revision history.
#
# * **File->Revision history** shows the notebook's revision history.
# + [markdown] id="4hfV37gxpP_c"
# ## Commenting on a cell
# You can comment on a Colaboratory notebook like you would on a Google Document. Comments are attached to cells, and are displayed next to the cell they refer to. If you have **comment-only** permissions, you will see a comment button on the top right of the cell when you hover over it.
#
# If you have edit or comment permissions you can comment on a cell in one of three ways:
#
# 1. Select a cell and click the comment button in the toolbar above the top-right corner of the cell.
# 1. Right click a text cell and select **Add a comment** from the context menu.
# 3. Use the shortcut **Ctrl+Shift+M** to add a comment to the currently selected cell.
#
# You can resolve and reply to comments, and you can target comments to specific collaborators by typing *+[email address]* (e.g., `[email protected]`). Addressed collaborators will be emailed.
#
# The Comment button in the top-right corner of the page shows all comments attached to the notebook.
| 8,405 |
/report_notebook.ipynb
|
233dcafa2135ca841dc2747cee232d4ae8c0932b
|
[] |
no_license
|
Hsieh-Cheng-Han/STA2453_Project1
|
https://github.com/Hsieh-Cheng-Han/STA2453_Project1
| 0 | 0 | null | null | null | null |
Jupyter Notebook
| false | false |
.py
| 750,771 |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.15.2
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import os
filename = './exampleFiles/truncateFile.txt'
changes = False
with open(filename, 'r+') as f:
    doc = f.read()
    if 'line' in doc:
        doc = doc.replace('line', '+line')
        changes = True
    if changes:
        f.seek(0)      # rewind to the start before overwriting the contents
        f.write(doc)
        f.truncate()   # drop any leftover bytes if the new text were shorter
ta-science/master/data/cannabis_2_postgresql.csv'
build_data = pd.read_csv(build_data_url)
build_data = build_data.fillna("")
build_data.head(10)
build_data.iloc[2361]
effects_list = []
for i in range(len(build_data)):
for effect in build_data['Effects'].iloc[i].split(','):
#print(effect, i)
if effect not in effects_list:
effects_list.append(effect)
print(f'There are {len(effects_list)} unique reported effects among the {len(build_data)} strains.')
flavors_list = []
for i in range(len(build_data)):
for effect in build_data['Flavor'].iloc[i].split(','):
#print(effect, i)
if effect not in flavors_list:
flavors_list.append(effect)
print(f'There are {len(flavors_list)} unique reported flavors among the {len(build_data)} strains.')
ailments_list = []
for i in range(len(build_data)):
for effect in build_data['Ailment'].iloc[i].split(','):
#print(effect, i)
if effect not in ailments_list:
ailments_list.append(effect)
print(f'There are {len(ailments_list)} unique reported ailments among the {len(build_data)} strains.')
strain_types = []
for i in range(len(build_data)):
for effect in build_data['Type'].iloc[i].split(','):
#print(effect, i)
if effect not in strain_types:
strain_types.append(effect)
print(f'There are {len(strain_types)} unique reported strain types among the {len(build_data)} strains.')
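# + [markdown]
# The same counts can be computed more concisely with a set comprehension; the sketch below assumes the same `build_data` frame and comma-separated columns used above.

# +
def unique_values(df, column):
    # collect every comma-separated item of the given column into a set
    return {item for row in df[column] for item in row.split(',')}

for col in ['Effects', 'Flavor', 'Ailment', 'Type']:
    print(f'There are {len(unique_values(build_data, col))} unique values in {col}.')
# -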
# + id="3foayuK9FUwu" colab_type="code" colab={}
import sqlite3
db = '/content/db.sqlite3'
sl_conn = sqlite3.connect(db)
sl_curs = sl_conn.cursor()
build_data[['Strain', 'Type', 'Effects', 'Ailment', 'Flavor', 'Description']].to_sql('strains_table', sl_conn)
sl_conn.commit()
# + id="SQc0Ao_UIE0P" colab_type="code" outputId="e0d28e32-8726-4592-b789-077b392df397" colab={"base_uri": "https://localhost:8080/", "height": 187}
query = "SELECT Effects FROM strains_table LIMIT 10"
sl_curs.execute(query).fetchall()
# + id="Oftqt8Ped2l7" colab_type="code" colab={}
sl_conn.commit()
# + id="KdLPr5b1OMQI" colab_type="code" colab={}
# build_data.head()
# build_data[['Strain', 'Type', 'Effects', 'Ailment', 'Flavor', 'Description']]
# + id="dujkTEzLmzfl" colab_type="code" outputId="2062bdbd-5bea-4167-9dcc-61d03fe8c407" colab={"base_uri": "https://localhost:8080/", "height": 34}
# !pip install psycopg2-binary
# + id="8lOKDzdPm0hk" colab_type="code" colab={}
import psycopg2
# + id="eKVU_oBVm21R" colab_type="code" colab={}
# dbname = ''
# user = ''
# password = '' # Don't commit or share this for security purposes!
# host = '' # Port should be included or default
# + id="OseLPoVXm_sP" colab_type="code" colab={}
pg_conn = psycopg2.connect(dbname=dbname, user=user,
password=password, host=host)
# + id="LNJFHo1nnEYg" colab_type="code" outputId="710d6326-eb2d-4877-ce1b-e5c8fe08802d" colab={"base_uri": "https://localhost:8080/", "height": 34}
pg_conn
# + id="josl1FNpnGZC" colab_type="code" colab={}
pg_curs = pg_conn.cursor()
# + id="tbTWh6VPn3W0" colab_type="code" colab={}
# Our goal - copy the strains table from SQLite to PostgreSQL using Python
# Step 1 - E=Extract: get the strains
query = "SELECT * FROM strains_table"
strains = sl_curs.execute(query).fetchall()
# + id="6p-S9hn0oGKg" colab_type="code" outputId="a1cec8bb-c3fb-45a7-d3d0-59e997bd50bb" colab={"base_uri": "https://localhost:8080/", "height": 632}
strains[:5]
# + id="6CHu0tJjoMaV" colab_type="code" outputId="92f7aac7-d2d6-4fe9-e7e8-ec45565a2aeb" colab={"base_uri": "https://localhost:8080/", "height": 34}
len(strains)
# + id="QYTDFSVSnIfB" colab_type="code" outputId="ccc4d654-21fd-4dd4-f939-b39a2a3b515b" colab={"base_uri": "https://localhost:8080/", "height": 136}
# Step 2 - Transform
# In this case, we don't actually want/need to change much
# Because we want to keep all the data
# And we're going from SQL to SQL
# But what do we need to be able to load into PostgreSQL?
# We need to make a new table with the appropriate schema
# What was the old schema? We can get at this with SQLite internals
sl_curs.execute('PRAGMA table_info(strains_table);').fetchall()
# + id="JbL7Vt_1nY9c" colab_type="code" colab={}
# https://www.postgresql.org/docs/current/sql-createtable.html
create_strains_table = """
CREATE TABLE strains_table (
index SERIAL PRIMARY KEY,
Strain VARCHAR(30),
Type VARCHAR(30),
Effects VARCHAR(120),
Ailment VARCHAR(80),
Flavor VARCHAR(80),
Description VARCHAR(2000)
);
"""
# + id="PMm9uinwqDkO" colab_type="code" colab={}
pg_curs.execute(create_strains_table)
# + id="8arKUwVmqHn8" colab_type="code" colab={}
pg_conn.commit()
# + id="PKZoTegWtGyg" colab_type="code" outputId="2d1f849d-3c14-43aa-ddf2-a99e1d75856c" colab={"base_uri": "https://localhost:8080/", "height": 34}
# We can query tables if we want to check
# This is a clever optional thing, showing postgresql internals
show_tables = """
SELECT
*
FROM
pg_catalog.pg_tables
WHERE
schemaname != 'pg_catalog'
AND schemaname != 'information_schema';
"""
pg_curs.execute(show_tables)
pg_curs.fetchall()
# + id="p7eUdNINznIO" colab_type="code" outputId="7b6d0e44-1a47-4ae3-cb0e-4745545962b0" colab={"base_uri": "https://localhost:8080/", "height": 156}
strains[0]
# + id="rDxZ-cqWzqSp" colab_type="code" outputId="15688e0d-dc9b-4387-c86c-d234483a880e" colab={"base_uri": "https://localhost:8080/", "height": 105}
example_insert = """
INSERT INTO strains_table
(Strain, Type, Effects, Ailment, Flavor, Description)
VALUES """ + str(strains[1][1:]) + ";"
print(example_insert)
# + id="Rp6C9JVjtRM3" colab_type="code" colab={}
# How do we do this for all strains? Loops!
for strain in strains:
insert_strain = """
INSERT INTO strains_table
(Strain, Type, Effects, Ailment, Flavor, Description)
VALUES """ + str(strain[1:]) + ";"
pg_curs.execute(insert_strain)
# + id="NXzqQxj-uE8b" colab_type="code" colab={}
pg_conn.commit()
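# + [markdown]
# Note that building the INSERT statement by string concatenation breaks as soon as a description contains a quote character. A safer sketch (assuming the same `strains` rows and the open `pg_curs`/`pg_conn` from above) lets psycopg2 substitute the parameters itself:

# +
insert_strains = """
    INSERT INTO strains_table
    (Strain, Type, Effects, Ailment, Flavor, Description)
    VALUES (%s, %s, %s, %s, %s, %s);
"""
# executemany quotes each value safely instead of relying on str() formatting
pg_curs.executemany(insert_strains, [strain[1:] for strain in strains])
pg_conn.commit()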
values in corpus but they don’t make any sense for job hunters. In this case, we need to build the models to explore the relationship between features and salary.
# ### Category feature
# In the data crawling step, we crawled salaries for 5 different job categories, and we can see that, at the current time, most jobs are still about software and development.
# +
category = pd.DataFrame(job_salary['job_category'].value_counts()).reset_index()
category.columns = ['category', 'amount']
cat = category['category'] ; amount = category['amount']
colors = 'yrbkc'
explode = (0, 0, 0, 0, 0)
wedges, texts = plt.pie(amount, colors = colors, startangle=90, explode=explode)
plt.axis('equal')
plt.legend(wedges, cat, loc="best")
plt.show()
# -
# Since there are not many examples of jobs such as data scientist and data engineer, it is not very meaningful to compare the mean salary across categories, but from the scatter plot we can still see that most jobs from different categories fall into a similar salary range.
sns.stripplot(x="job_category", y="salary", data=job_salary_cleaned)
plt.show()
# ## Model Selection
# In this project, we tried logistic regression, K-nearest neighbors (KNN), and random forest, three popular models for classification problems. Logistic regression is a basic classification model that is commonly used according to the 2019 Kaggle ML and Data Science Survey, so we decided to use it as one of the models to predict salary from the extracted features. In the survey, the second most popular family of algorithms is tree ensembles. In addition, random forest uses the bootstrap to build its trees, which reduces the impact of imbalanced data, and it can compute feature importances, so we can see which skills affect salary. We tried KNN because it handles multiclass problems well and is easy to understand. To make sure the final model generalizes, we split the dataset into a training set and a test set, with 20% of the data held out for testing. For the evaluation metric, we chose the weighted F1 score because the target has 5 imbalanced classes; unlike the plain macro F1 score, the weighted F1 score averages the per-class scores weighted by class support, which reduces the impact of the class imbalance. Additionally, we used grid search with cross-validation to tune hyperparameters. First, we used job category, location, and 1000 extracted features to build the models.
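# To make the metric concrete: the weighted F1 score averages the per-class F1 scores, weighting each class by its support. A toy sketch with made-up labels (not data from this project):
# +
from sklearn.metrics import f1_score
y_true_demo = ['low'] * 2 + ['high'] * 8
y_pred_demo = ['low'] * 1 + ['high'] * 9
print('macro F1   :', f1_score(y_true_demo, y_pred_demo, average='macro'))
print('weighted F1:', f1_score(y_true_demo, y_pred_demo, average='weighted'))
# -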
# ### Models including job_category and location
# drop useless columns
full_data = clean_location(job_salary_cleaned)
mod_data = full_data.drop(columns=['job_title', 'company_name', 'requirements', 'industry', 'requirements_cleaned'])
mod_data = pd.get_dummies(data=mod_data, columns=['job_category','city', 'province'])
mod_data =mod_data.drop(columns=['salary', 'salary_buckets'])
# extract features using TF-IDF
vectorizer = TfidfVectorizer(ngram_range=(1, 2), max_features=1000)
corpus = full_data['requirements_cleaned'].values
grams = vectorizer.fit_transform(corpus)
feature_names = vectorizer.get_feature_names()
words_df = pd.DataFrame(grams.toarray(), columns=feature_names)
# split the data
X = pd.concat([mod_data, words_df], axis=1)
Y = full_data['salary_buckets']
xtrain, xtest, ytrain, ytest = train_test_split(X, Y, random_state=20, test_size=0.2)
# logistic regression
mod_log = LogisticRegression()
c = [0.01, 0.05, 0.1, 0.5, 1, 2, 5, 10, 20, 100, 200, 500, 1000]
parameters = {'C': c}
metric = make_scorer(f1_score, average='weighted')
grid_log = GridSearchCV(mod_log, parameters, scoring=metric, cv=5)
grid_log = grid_log.fit(xtrain, ytrain)
opt_log = grid_log.best_estimator_
# print("The optimal logistic regression model: \n", opt_log)
# print("====================================================================")
# print("The mean accuracy score is {0}.".format(grid_log.best_score_))
# KNN
mod_knn = KNeighborsClassifier()
neighbors = np.array(range(5, 50, 5))
parameters = {'n_neighbors': neighbors, 'weights': ['uniform', 'distance']}
metric = make_scorer(f1_score, average='weighted')
grid_knn = GridSearchCV(mod_knn, parameters, scoring=metric, cv=5)
grid_knn = grid_knn.fit(xtrain, ytrain)
opt_knn = grid_knn.best_estimator_
# print("The optimal SVM model: \n", opt_knn)
# print("====================================================================")
# print("The mean accuracy score is {0}.".format(grid_knn.best_score_))
# random forest
mod_rf = RandomForestClassifier(random_state=44)
numTrees = np.array(range(50, 500, 50))
parameters = {'n_estimators': numTrees}
metric = make_scorer(f1_score, average='weighted')
grid_rf = GridSearchCV(mod_rf, parameters, scoring=metric, cv=5)
grid_rf = grid_rf.fit(xtrain, ytrain)
opt_rf = grid_rf.best_estimator_
# print("The optimal random forest model: \n", opt_rf)
# print("====================================================================")
# print("The mean accuracy score is {0}.".format(grid_rf.best_score_))
score_df = pd.DataFrame(np.array([grid_log.best_score_, grid_knn.best_score_, grid_rf.best_score_]),
columns=['F1 score'],
index=['Logistic Regression', 'KNN', 'Random Forest'])
score_df
print("The weighted macro f1 socre of the optimal logistic regression model on the test set is",
f1_score(ytest, opt_log.predict(xtest), average='weighted'), ".")
# By comparing their F1 score, we found the logistic regression model gives the highest score among these three models. On the training set, the score of logistic regression is 67.18%. On the test set, the score of the logistic regression is 67.92%. Its training score and test score are closed, which means no overfitting occurred.
# visualize feature importance based on TF-IDF
# create feature importance data frame
df = pd.DataFrame(opt_rf.feature_importances_, columns=["importance"],
index=X.columns)
# sort by importance
df = df.sort_values('importance', ascending=False)
# plot top 50 features
plt.figure(figsize = (6,12))
sns.barplot(x=np.round(df.importance, 3)[0:50], y=df.index[0:50])
plt.title("Importance of Features")
plt.show()
# However, when we looked at the feature importances, we found that the most important features are the job categories and locations, which makes sense because salary is strongly influenced by them. However, the aim of the project is to find which skills are important for job hunters, so we decided to exclude job category and location when fitting the models.
# +
# # bulid a confusion matrix
# cm = confusion_matrix(ytest, opt_rf.predict(xtest),
# labels=['<50,000', '50,000-75,000', '75,000-100,000', '100,000-150,000', '>150,000'])
# # plot the confusion matrix
# sns.heatmap(cm, annot=True, cmap='Blues', fmt='d',
# xticklabels=['<50,000', '50,000-75,000', '75,000-100,000', '100,000-150,000', '>150,000'],
# yticklabels=['<50,000', '50,000-75,000', '75,000-100,000', '100,000-150,000', '>150,000'])
# plt.xlabel("Predicted Label")
# plt.ylabel("True Label")
# plt.title("Confusion Matrix for salary buckets")
# plt.show()
# -
# ### Model selection excluding job_category and location
# when job_category and location are excluded
mod_data = full_data.drop(columns=['job_title', 'company_name', 'requirements', 'industry', 'requirements_cleaned'])
mod_data = mod_data.drop(columns=['job_category', 'city', 'province'])
mod_data =mod_data.drop(columns=['salary', 'salary_buckets'])
# split the data
X = pd.concat([mod_data, words_df], axis=1)
Y = full_data['salary_buckets']
xtrain, xtest, ytrain, ytest = train_test_split(X, Y, random_state=20, test_size=0.2)
# logistic regression
mod_log = LogisticRegression()
c = [0.01, 0.05, 0.1, 0.5, 1, 2, 5, 10, 20, 100, 200, 500, 1000]
parameters = {'C': c}
metric = make_scorer(f1_score, average='weighted')
grid_log = GridSearchCV(mod_log, parameters, scoring=metric, cv=5)
grid_log = grid_log.fit(xtrain, ytrain)
opt_log = grid_log.best_estimator_
# print("The optimal logistic regression model: \n", opt_log)
# print("====================================================================")
# print("The mean accuracy score is {0}.".format(grid_log.best_score_))
# KNN
mod_knn = KNeighborsClassifier()
neighbors = np.array(range(5, 50, 5))
parameters = {'n_neighbors': neighbors, 'weights': ['uniform', 'distance']}
metric = make_scorer(f1_score, average='weighted')
grid_knn = GridSearchCV(mod_knn, parameters, scoring=metric, cv=5)
grid_knn = grid_knn.fit(xtrain, ytrain)
opt_knn = grid_knn.best_estimator_
# print("The optimal SVM model: \n", opt_knn)
# print("====================================================================")
# print("The mean accuracy score is {0}.".format(grid_knn.best_score_))
# random forest
mod_rf = RandomForestClassifier(random_state=44)
numTrees = np.array(range(50, 500, 50))
parameters = {'n_estimators': numTrees}
metric = make_scorer(f1_score, average='weighted')
grid_rf = GridSearchCV(mod_rf, parameters, scoring=metric, cv=5)
grid_rf = grid_rf.fit(xtrain, ytrain)
opt_rf = grid_rf.best_estimator_
# print("The optimal random forest model: \n", opt_rf)
# print("====================================================================")
# print("The mean accuracy score is {0}.".format(grid_rf.best_score_))
score_df = pd.DataFrame(np.array([grid_log.best_score_, grid_knn.best_score_, grid_rf.best_score_]),
columns=['F1 score'],
index=['Logistic Regression', 'KNN', 'Random Forest'])
score_df
# As shown in the table, the scores of the new models are slightly lower than those of the previous models, but the logistic regression model still gives the highest score on the training set.
# visualize feature importance based on bag of words
# create feature importance data frame
df = pd.DataFrame(opt_rf.feature_importances_, columns=["importance"],
index=X.columns)
# sort by importance
df = df.sort_values('importance', ascending=False)
# plot top 50 features
plt.figure(figsize = (6,12))
sns.barplot(x=np.round(df.importance, 3)[0:50], y=df.index[0:50])
plt.title("Importance of Features")
plt.show()
print("The weighted macro f1 socre of the optimal random forest model on the test set is",
f1_score(ytest, opt_log.predict(xtest), average='weighted'), ".")
# When we looked the feature importance again, we found the programming language like javascript, html, sql, and css are important. Besides, the degree in computer science has a significant impact on the salary as well. Moreover, the score of the logistic regression model on the test set is 65.34%.
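# One way to sanity-check which skills the linear model associates with a given salary bucket is to inspect its largest coefficients. The sketch below assumes the fitted `opt_log` and the feature matrix `X` built above; taking the last entry of `classes_` as the highest bucket is an assumption about how the labels sort.
# +
bucket = opt_log.classes_[-1]          # assumed to correspond to the highest salary bucket
coefs = pd.Series(opt_log.coef_[list(opt_log.classes_).index(bucket)], index=X.columns)
print('Top features associated with bucket', bucket)
print(coefs.sort_values(ascending=False).head(20))
# -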
# +
# # bulid a confusion matrix
# cm = confusion_matrix(ytest, opt_rf.predict(xtest),
# labels=['<50,000', '50,000-75,000', '75,000-100,000', '100,000-150,000', '>150,000'])
# # plot the confusion matrix
# sns.heatmap(cm, annot=True, cmap='Blues', fmt='d',
# xticklabels=['<50,000', '50,000-75,000', '75,000-100,000', '100,000-150,000', '>150,000'],
# yticklabels=['<50,000', '50,000-75,000', '75,000-100,000', '100,000-150,000', '>150,000'])
# plt.xlabel("Predicted Label")
# plt.ylabel("True Label")
# plt.title("Confusion Matrix for salary buckets")
# plt.show()
# -
| 17,677 |
/week-2/Data Visualization/Graph_types.ipynb
|
5e8261f67b648ac060fa777853f9fa442d7d10d8
|
[] |
no_license
|
Sepelbaum/nyc-ds-100719-lectures
|
https://github.com/Sepelbaum/nyc-ds-100719-lectures
| 0 | 0 | null | 2019-10-07T20:45:53 | 2019-10-07T20:41:22 | null |
Jupyter Notebook
| false | false |
.py
| 200,638 |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.15.2
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# !kaggle datasets download -d kemical/kickstarter-projects
# !unzip kickstarter-projects.zip
import pandas as pd
df1 = pd.read_csv('ks-projects-201612.csv',encoding = "ISO-8859-1")
df2 = pd.read_csv('ks-projects-201801.csv',encoding = "ISO-8859-1")
df1.head()
df2.head()
import matplotlib.pyplot as plt
# %matplotlib inline
pd.plotting.scatter_matrix(df1)
pd.plotting.scatter_matrix(df2)
| 666 |
/Linear Regression - Bike Share - 17th july.ipynb
|
400b7b031fe38d3860da6c92afd449bd6befa0ed
|
[] |
no_license
|
RajeshSivanesan/My-Trainings
|
https://github.com/RajeshSivanesan/My-Trainings
| 0 | 0 | null | null | null | null |
Jupyter Notebook
| false | false |
.py
| 1,839,479 |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.15.2
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + _uuid="8f2839f25d086af736a60e9eeb907d3b93b6e0e5" _cell_guid="b1076dfc-b9ad-4769-8c92-a6c4dae69d19"
# This Python 3 environment comes with many helpful analytics libraries installed
# It is defined by the kaggle/python Docker image: https://github.com/kaggle/docker-python
# For example, here's several helpful packages to load
import numpy as np # linear algebra
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)
# Input data files are available in the read-only "../input/" directory
# For example, running this (by clicking run or pressing Shift+Enter) will list all files under the input directory
import os
for dirname, _, filenames in os.walk('/kaggle/input'):
print(dirname)
# You can write up to 5GB to the current directory (/kaggle/working/) that gets preserved as output when you create a version using "Save & Run All"
# You can also write temporary files to /kaggle/temp/, but they won't be saved outside of the current session
# -
# !pip install mtcnn
# + _uuid="d629ff2d2480ee46fbb7e2d37f6b5fab8052498a" _cell_guid="79c7e3d0-c299-4dcb-8224-4455121ee9b0"
import os
from PIL import Image
import matplotlib.pyplot as plt
from mtcnn.mtcnn import MTCNN
# -
# Load an image as an rgb numpy array
def load_image(filename):
image = Image.open(filename)
image = image.convert('RGB')
pixels = np.asarray(image)
return pixels
def extract_face(model, pixels, required_size=(80, 80)):
faces = model.detect_faces(pixels)
# skip cases where we could not detect a face
if len(faces) == 0:
return None
x1, y1, width, height = faces[0]['box']
# force detected pixel values to be positive (bug fix)
x1, y1 = abs(x1), abs(y1)
# convert into coordinates
x2, y2 = x1 + width, y1 + height
face_pixels = pixels[y1:y2, x1:x2]
# resize pixels to the model size
image = Image.fromarray(face_pixels)
image = image.resize(required_size)
return np.asarray(image)
# Load images and extract faces for all images in a directory
def load_faces(directory, n_faces):
model = MTCNN()
faces = list()
for filename in os.listdir(directory):
pixels = load_image(directory + filename)
face = extract_face(model, pixels)
if face is None:
continue
faces.append(face)
# stop once we have enough
if len(faces) >= n_faces:
break
return np.asarray(faces)
# Plot a list of loaded faces
def plot_faces(faces):
for i in range(100):
plt.subplot(10, 10, 1 + i)
plt.axis('off')
# Plotting raw pixel data
plt.imshow(faces[i])
plt.show()
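# Before processing the whole directory, it can be useful to check the extractor on a single image first. A small sketch; the sample file name below is a guess and may need adjusting.
# +
detector = MTCNN()
sample_path = '/kaggle/input/celeba-dataset/img_align_celeba/img_align_celeba/000001.jpg'  # hypothetical sample file
sample_face = extract_face(detector, load_image(sample_path))
if sample_face is not None:
    plt.imshow(sample_face)
    plt.axis('off')
    plt.show()
# -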
# +
directory = '/kaggle/input/celeba-dataset/img_align_celeba/img_align_celeba/'
# Load and extract all faces
faces = load_faces(directory, 50000)
print('Loaded: ', faces.shape)
# Plotting faces
plot_faces(faces)
np.savez_compressed('img_celeba.npz', faces)
'])
linear_best_fit(X,y)
y_pred = model.predict(X)
temp_df = pd.DataFrame(y_pred)
bikeshareDF['y_pred'] = y_pred
bikeshareDF
temp_df = pd.DataFrame(y, y_pred)
X
# +
plt.xlabel("Actual expenses")
plt.ylabel("Predicted expenses")
plt.scatter(y, y_pred)
# -
error = y-y_pred
sns.displot(error)
error = y-y_pred
plt.scatter( y_pred, error)
plt.hist(y_pred)
kf = KFold(n_splits=10)
i=1
test_result = []
for train_index, test_index in kf.split(X):
train_X = X.iloc[train_index]
train_y = y.iloc[train_index]
test_X = X.iloc[test_index]
test_y = y.iloc[test_index]
model = LinearRegression()
model.fit(train_X, train_y)
train_pred = model.predict(train_X)
test_pred = model.predict(test_X)
    # note: np.sqrt(mean_squared_error(...)) is the RMSE, not the MAPE
    train_rmse = np.sqrt(mean_squared_error(train_y, train_pred))
    test_rmse = np.sqrt(mean_squared_error(test_y, test_pred))
    print("Train RMSE = ", train_rmse)
    print("Test RMSE = ", test_rmse)
    test_result.append(test_rmse)
np.mean(test_result)
np.std(test_result)
2.448028809806092e-13 - 1.4660345177835988e-13, 2.448028809806092e-13 + 1.4660345177835988e-13
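# Rather than hard-coding the mean and standard deviation, the same interval can be computed directly from the cross-validation results (a small sketch using the test_result list built above):
mean_rmse, std_rmse = np.mean(test_result), np.std(test_result)
print('test RMSE interval:', (mean_rmse - std_rmse, mean_rmse + std_rmse))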
X_train, X_test, y_train, y_test = train_test_split(X,y,test_size = 0.3, random_state=90)
model = LinearRegression()
model.fit(X_train, y_train)
train_pred = model.predict(X_train)
test_pred = model.predict(X_test)
np.sqrt(mean_squared_error(y_train, train_pred))
np.sqrt(mean_squared_error(y_test, test_pred))
2.0234126515698266e-13 - 1.9524392223316547e-13, 2.0234126515698266e-13 + 1.9524392223316547e-13
r2 = r2_score(y_train, train_pred)
r2
X_train.shape
n, p = X_train.shape[0], X_train.shape[1]
adjr2 = 1-(((1-r2)*(n-1))/(n-p-1))
adjr2
model = LinearRegression()
np.mean(np.abs(cross_val_score(model, X, y, scoring = 'neg_root_mean_squared_error', cv = 10)))
| 5,003 |
/class/Normalization/setup.ipynb
|
1d9c3337a9541234520afc1821c11d8d65db74f2
|
[] |
no_license
|
denison-cs/public-datasystems-f20
|
https://github.com/denison-cs/public-datasystems-f20
| 1 | 0 | null | null | null | null |
Jupyter Notebook
| false | false |
.py
| 13,628 |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.15.2
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
# +
# %matplotlib inline
import matplotlib.pyplot as plt
import pandas as pd
import numpy as np
import time
from os import listdir
from os.path import isfile, join
from datetime import time
import glob
import sys
import datetime
from pyspark.sql.functions import udf, hour, mean, minute, stddev, count, max as psmax, min as psmin, date_format
from pyspark.sql import SQLContext
from pyspark.sql.types import *
# -
# Go here to see pyspark functions
# http://spark.apache.org/docs/2.1.0/api/python/pyspark.sql.html
#data locations
data_5min_path_d11 = "../../../d11_traffic_data/d11/"
data_5min_path_d8 = "../../../d8_traffic_data/"
meta_path_d11 = "../../../d11_traffic_data/meta/d11/"
# # Get data file names
# These are all of the filenames that are going to be fed into the Spark instance.
#get all files to process
#onlyfiles = [f for f in listdir(data_5min_path) if isfile(join(data_5min_path, f)) ]
#onlyfiles = [a for a in onlyfiles if 'txt.gz' in a]
onlyfiles = [data_5min_path_d11 + a for a in listdir(data_5min_path_d11) if 'txt.gz' in a ] + \
[data_5min_path_d8 + b for b in listdir(data_5min_path_d8) if 'txt.gz' in b ]
onlyfiles
# # Make spark schema
# struct list was generated with the following code after reading the files with inferschema = true, then hand modified
# ```
# '[' + ','.join(['StructField("%s",%s(),True)'% (colnames[idx], str(i.dataType))
# for idx, i in enumerate(rdd.schema)]) + ']'
# ```
# Column Names are not defined in the files. Defining column names for readability
colnames = [
'Timestamp','Station','District','Freeway','Direction_of_Travel',
'LaneType','StationLength','Samples',
'Perc_Observed','TotalFlow','AvgOccupancy','AvgSpeed',
'Lane1_Samples','Lane1_Flow','Lane1_AvgOcc','Lane1_AvgSpeed','Lane1_Observed',
'Lane2_Samples','Lane2_Flow','Lane2_AvgOcc','Lane2_AvgSpeed','Lane2_Observed',
'Lane3_Samples','Lane3_Flow','Lane3_AvgOcc','Lane3_AvgSpeed','Lane3_Observed',
'Lane4_Samples','Lane4_Flow','Lane4_AvgOcc','Lane4_AvgSpeed','Lane4_Observed',
'Lane5_Samples','Lane5_Flow','Lane5_AvgOcc','Lane5_AvgSpeed','Lane5_Observed',
'Lane6_Samples','Lane6_Flow','Lane6_AvgOcc','Lane6_AvgSpeed','Lane6_Observed',
'Lane7_Samples','Lane7_Flow','Lane7_AvgOcc','Lane7_AvgSpeed','Lane7_Observed',
'Lane8_Samples','Lane8_Flow','Lane8_AvgOcc','Lane8_AvgSpeed','Lane8_Observed'
]
colnames = [c.lower() for c in colnames]
# Ran spark instance once with limited dataset to identify the datatypes of each column to help create the following script.
# +
#print '[\n ' + ",\n ".join(['StructField("%s",%s(),True)'% (colnames[idx], str(i.dataType))
#for idx, i in enumerate(rdd.schema)]) + '\n]'
# -
# # Build dataframe with spark
# Defining the schema of the files that we are reading in. StructType creates the dataframe schema equivalent of create syntax in SQL)
# +
struct_list = [
StructField("timestamp",TimestampType(),True),
StructField("station",IntegerType(),True),
StructField("district",IntegerType(),True),
StructField("freeway",IntegerType(),True),
StructField("direction_of_travel",StringType(),True),
StructField("lanetype",StringType(),True),
StructField("stationlength",DoubleType(),True),
StructField("samples",IntegerType(),True),
StructField("perc_observed",IntegerType(),True),
StructField("totalflow",IntegerType(),True),
StructField("avgoccupancy",DoubleType(),True),
StructField("avgspeed",DoubleType(),True),
StructField("lane1_samples",IntegerType(),True),
StructField("lane1_flow",IntegerType(),True),
StructField("lane1_avgocc",DoubleType(),True),
StructField("lane1_avgspeed",DoubleType(),True),
StructField("lane1_observed",IntegerType(),True),
StructField("lane2_samples",IntegerType(),True),
StructField("lane2_flow",IntegerType(),True),
StructField("lane2_avgocc",DoubleType(),True),
StructField("lane2_avgspeed",DoubleType(),True),
StructField("lane2_observed",IntegerType(),True),
StructField("lane3_samples",IntegerType(),True),
StructField("lane3_flow",IntegerType(),True),
StructField("lane3_avgocc",DoubleType(),True),
StructField("lane3_avgspeed",DoubleType(),True),
StructField("lane3_observed",IntegerType(),True),
StructField("lane4_samples",IntegerType(),True),
StructField("lane4_flow",IntegerType(),True),
StructField("lane4_avgocc",DoubleType(),True),
StructField("lane4_avgspeed",DoubleType(),True),
StructField("lane4_observed",IntegerType(),True),
StructField("lane5_samples",IntegerType(),True),
StructField("lane5_flow",IntegerType(),True),
StructField("lane5_avgocc",DoubleType(),True),
StructField("lane5_avgspeed",DoubleType(),True),
StructField("lane5_observed",IntegerType(),True),
StructField("lane6_samples",IntegerType(),True),
StructField("lane6_flow",IntegerType(),True),
StructField("lane6_avgocc",DoubleType(),True),
StructField("lane6_avgspeed",DoubleType(),True),
StructField("lane6_observed",IntegerType(),True),
StructField("lane7_samples",IntegerType(),True),
StructField("lane7_flow",IntegerType(),True),
StructField("lane7_avgocc",DoubleType(),True),
StructField("lane7_avgspeed",DoubleType(),True),
StructField("lane7_observed",IntegerType(),True),
StructField("lane8_samples",IntegerType(),True),
StructField("lane8_flow",IntegerType(),True),
StructField("lane8_avgocc",DoubleType(),True),
StructField("lane8_avgspeed",DoubleType(),True),
StructField("lane8_observed",IntegerType(),True)
]
schema_struct = StructType(struct_list)
# -
# Loading the data into spark dataframe from the files (equivalent of insert statements with files in SQL)
limit_files = onlyfiles[:5] + onlyfiles[-5:]
limit_files
# +
#node this is only the first 5 days of files for now
limit_files = onlyfiles[:5] + onlyfiles[-5:]
files = [filename for filename in limit_files]
rdd = spark.read.csv(
files,
header='false',
timestampFormat='MM/dd/yyyy HH:mm:ss',
schema=schema_struct,
inferSchema='false'
)
rdd.take(1)
# -
rdd.count()
# ### Build freeway STM station order from meta data
# Build a sparksql dataframe with the metadata
# +
meta_path = "../../../d11_traffic_data/meta/d11/"
def loadMeta():
meta_dir= meta_path+'d11_text_meta_2015_*.txt'
meta_files = glob.glob(meta_dir)
meta_file_list = []
for meta_file in meta_files:
date = str('_'.join(meta_file.split('_')[4:7])).split('.')[0]
df = pd.read_table(meta_file, index_col=None, header=0)
date_col = pd.Series([date] * len(df))
df['file_date'] = date_col
# drop rows that are missing latitude / longitude values
#df.dropna(inplace=True, subset=['Latitude', 'Longitude'], how='any')
meta_file_list.append(df)
meta_frame = pd.concat(meta_file_list).drop_duplicates(subset='ID', keep='last')
usefwy = [ 56, 125, 805, 52, 163, 8, 15, 5, 905, 78, 94, 54]
meta_frame = meta_frame[meta_frame.Fwy.apply(lambda x: x in usefwy)]
#Add freeway name FwyDir
meta_frame['freeway'] = meta_frame.Fwy.apply(str) + meta_frame.Dir
r_c = {}
for c in meta_frame.columns:
r_c[c]=c.lower()
meta_frame=meta_frame.rename(columns = r_c )
return meta_frame
meta_data = sqlCtx.createDataFrame(loadMeta().loc[:,['id','abs_pm','type']].rename(columns={'id':'station'}))
# -
meta_data.show(100)
# # filter for weekdays I5 S
# # group by station, time
#
# Modify this to make all queries
# +
weekdaySelector = udf(
lambda x: "weekday" if int(x) < 6 else "weekend"
)
timeOfDay = udf(
lambda x: time(int(x.hour), int(x.minute)).strftime("%H:%M")
)
# +
station_time = (
rdd
.select(
'district',
'freeway',
'direction_of_travel',
'timestamp',
'station',
'totalflow',
'avgoccupancy',
'avgspeed',
date_format('timestamp', 'u').alias('dayofweek')
)
)
station_time = (
station_time
.withColumn(
'dayType',
weekdaySelector(station_time.dayofweek)
)
.withColumn(
'timeOfDay',
timeOfDay(station_time.timestamp)
)
.groupBy([
'district',
'freeway',
'direction_of_travel',
'station',
'dayType',
'timeOfDay'
])
.agg(
mean("totalflow").alias("flow_mean"),
stddev("totalflow").alias("flow_std"),
count("totalflow").alias("flow_count"),
psmax("totalflow").alias("flow_max"),
psmin("totalflow").alias("flow_min"),
mean("avgoccupancy").alias("occ_mean"),
stddev("avgoccupancy").alias("occ_std"),
count("avgoccupancy").alias("occ_count"),
psmax("avgoccupancy").alias("occ_max"),
psmin("avgoccupancy").alias("occ_min"),
mean("avgspeed").alias("speed_mean"),
stddev("avgspeed").alias("speed_std"),
count("avgspeed").alias("speed_count"),
psmax("avgspeed").alias("speed_max"),
psmin("avgspeed").alias("speed_min")
)
)
# -
df = station_time.toPandas()
df.sort_values('timeOfDay')
df.columns
def reduce_data_by_dict(df, keyval_dict):
for key, val in keyval_dict.iteritems():
df = df[df[key] == val]
return df
# %run ../trafficpassion/AnalyzeWiggles.py
restrict_dict = {'dayType': 'weekday',
'district': 11,
'freeway': 5,
'direction_of_travel': 'S'
}
reduce_data_by_dict(df,restrict_dict ).sort_values(['timeOfDay'])
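# A quick visual check of the aggregated profile (a sketch; it reuses the filtered
# weekday I-5 South frame from above and plots mean flow against time of day):
# +
profile = (reduce_data_by_dict(df, restrict_dict)
           .groupby('timeOfDay')['flow_mean'].mean()
           .sort_index())
profile.plot(figsize=(12, 4), title='I-5 S weekday mean flow by time of day')
plt.ylabel('mean total flow')
plt.show()
# -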
#
| 9,881 |
/bubble_data_collection.ipynb
|
b11ff83409299b6742d97ef527b808d43e57cce6
|
[] |
no_license
|
yxinjiang/Background-Subtraction
|
https://github.com/yxinjiang/Background-Subtraction
| 0 | 0 | null | null | null | null |
Jupyter Notebook
| false | false |
.py
| 5,887 |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.15.2
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# [View in Colaboratory](https://colab.research.google.com/github/stgstg27/Background-Subtraction/blob/master/bubble_data_collection.ipynb)
# + [markdown] id="B7Q7DY6ER_4U" colab_type="text"
# My aim is to create a file that automatically runs through the folder, splits the videos into frames, and runs the code to store the data in the folder.
# + id="j7sfF56cSSIZ" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="06c9e1ae-2ce4-485d-b37c-a5011bc5acda"
print ("Let's do it")
# + [markdown] id="irf6KXBiSZT-" colab_type="text"
# Since I am working on a Windows system, I think a batch file is the answer, so let's research batch files.
#
# After creating a basic batch file, I can start working on the batch script that will automatically collect my data. Below is my sample code for the batch file.
#
#
# + id="DB0PciRVSmRi" colab_type="code" colab={}
# Note: the lines below are Windows batch commands (not Python) for running a simple Python file and storing its output in a text file
ECHO OFF
:: Echo off makes sure the command that you are executing is not seen
:: >> and > stores the output of the file into the output file
ECHO Hello World >> results.txt
python temp.py >> results.txt
::PAUSE is like a waitKey over here, if this command is not there the batch file automatically close itself once it is completed in a second
PAUSE
# + [markdown] id="oP_9JAwwAadS" colab_type="text"
# Since we can work with directories using the os library in Python, I am not using the cd command in my batch file.
#
# After an exhaustive search, I was not able to access folders on the mobile phone from Python code, though it might be possible using additional software that I am not sure about.
#
# So for now I am planning to copy the videos into a directory on my laptop and then run the batch file.
#
#
# + id="x9DiSFwNA79g" colab_type="code" colab={}
import os
import numpy as np
import sys
import cv2
# + [markdown] id="Fly6t7qwJhJS" colab_type="text"
# The code below extracts the image frames from a video; this will save me a lot of time and effort.
# + id="ObwkCB5eA-_c" colab_type="code" colab={}
def imageCapture(file,output):
#cv2 code goes here
video = cv2.VideoCapture(file)
folder = 'positive'
currpath = os.getcwd()
if output is "0":
folder = 'negative'
print ("Storing into Folder :" , folder)
if not os.path.exists(folder):
os.mkdir(folder)
os.chdir(folder)
frameNo = 0
while(True):
ok , frame = video.read()
if not ok:
break
timer = cv2.getTickCount()
cv2.imwrite(file + '_' + str(frameNo) + '.png' , frame)
fps = cv2.getTickFrequency() / (cv2.getTickCount() - timer);
print ('Frame Per Second : ',fps)
frameNo += 1
cv2.waitKey(1)
print ("Extracted %d of images" %frameNo)
os.chdir(currpath)
# + id="Y7laUVaIBF03" colab_type="code" colab={}
fileList = os.listdir(os.getcwd())
aoi = input('No of Videos to consider\n')
output = input('0 for negative and 1 for positive\n')
fileList.sort()
fileList_len = len(fileList)
print ("Output : ",output)
imageCapture('1.mp4',output)
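# + [markdown]
# To process every video in the folder instead of the single hard-coded file, the same call can be wrapped in a loop. A sketch that assumes the videos are .mp4 files in the current directory and uses the `aoi` count entered above:

# +
mp4_files = [f for f in fileList if f.endswith('.mp4')]
for video_file in mp4_files[:int(aoi)]:
    imageCapture(video_file, output)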
| 3,350 |
/day2/web_scraping_day_2_slides.ipynb
|
f80fb864aebed2831c1a6907fd7e13f40ff63791
|
[] |
no_license
|
EH437/web_scraping_basics
|
https://github.com/EH437/web_scraping_basics
| 0 | 4 | null | 2023-06-13T11:34:33 | 2021-08-13T03:27:37 | null |
Jupyter Notebook
| false | false |
.py
| 238,118 |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.15.2
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
from utils import SimulationEnvironment
from basic_walk.utils import BaseAgent
import sys
import time
from tqc import structures, DEVICE
from tqc.trainer import Trainer
from tqc.structures import Actor, Critic, RescaleAction
from tqc.functions import eval_policy
from tqdm import tqdm, trange
import copy
import torch
import pickle
import matplotlib.pyplot as plt
import numpy as np
# -
# # Base Agent
DEVICE
# +
headless_mode = True
foot_only_mode = True
random_mode = False
n_episodes = 2
episodes_length = 200
tqc_model_file_name = f"data/models/model_{500000}_actor"
# tqc_model_file_name = f"data/history/high_running_actor"
base_agent_info_log = []
tqc_agent_info_log = []
# -
# %%time
with SimulationEnvironment('scenes/basic_scene.ttt', headless_mode=headless_mode, foot_only_mode=foot_only_mode) as env:
for i in range(1):
current_info = []
agent = BaseAgent(random_mode=random_mode, foot_only_mode=foot_only_mode)
state = env.reset()
for _ in range(episodes_length):
action = agent.act(state)
state, r, done, info = env.step(action)
current_info.append(info)
if done:
print("Fail")
break
base_agent_info_log.append(current_info)
# %%time
with SimulationEnvironment('scenes/basic_scene.ttt', headless_mode=headless_mode, foot_only_mode=foot_only_mode) as env:
env = RescaleAction(env, -1., 1.)
state_dim = env.observation_space.shape[0]
action_dim = env.action_space.shape[0]
agent = Actor(state_dim, action_dim).to(DEVICE)
for i in range(n_episodes):
agent.load_state_dict(torch.load(tqc_model_file_name))
agent.eval()
current_info = []
state = env.reset()
for _ in range(episodes_length):
action = agent.select_action(state)
state, r, done, info = env.step(action)
current_info.append(info)
if done:
print("Fail")
break
tqc_agent_info_log.append(current_info)
# ## Analysis
# +
def extract_param(info_log, param_name):
param = []
for info in info_log:
param.append(info[param_name])
return param
def find_best_log(logs):
max_reward = 0
max_log = []
for log in logs:
curr_reward = np.sum(extract_param(log, "reward"))
if curr_reward > max_reward:
max_reward = curr_reward
max_log = log
return max_log
def plot_info_param(info_log, param_name, ax):
param = extract_param(info_log, param_name)
ax.plot(param, label=param_name)
def plot_two_info(first_info_log, second_info_log, param_name):
fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(17,5))
plot_info_param(first_info_log, param_name, ax=ax1)
plot_info_param(second_info_log, param_name, ax=ax2)
plt.legend()
plt.show()
# -
base_agent_best_log = find_best_log(base_agent_info_log)
tqc_agent_best_log = find_best_log(tqc_agent_info_log)
base_agent_best_log[0].keys()
# +
param_names = [
"reward",
"fall_reward",
"velocity_reward",
"smooth_reward",
"force_reward",
]
for param_name in param_names:
plot_two_info(base_agent_best_log, tqc_agent_best_log, param_name)
# -
(r.find_element_by_xpath('td[3]').text)
pub_type.append(r.find_element_by_xpath('td[4]').text)
pub_homepage.append(r.find_element_by_xpath('td[5]').text)
pub_url.append(r.find_element_by_xpath('td[2]/a').get_attribute('onclick'))
page_number += 1
if page_number == 3:
more_page = False
# + slideshow={"slide_type": "subslide"}
pub_df = pd.DataFrame(list(zip(pub_name, pub_gov, pub_type, pub_homepage, pub_url)),
columns=['name', 'gov', 'type', 'homepage', 'url'])
# + slideshow={"slide_type": "fragment"}
pub_df
# + [markdown] slideshow={"slide_type": "subslide"}
# ## Collecting the list - Method 2
# + slideshow={"slide_type": "fragment"}
page_number = 1
pub_list = []
while True:
driver.execute_script(f'goPage({page_number});return false;')
time.sleep(2)
print(f'processing page {page_number}')
table = driver.find_element_by_css_selector('#txt > table > tbody')
rows = table.find_elements_by_tag_name('tr')
for r in rows:
pub_list.append({
'name':r.find_elements_by_css_selector('td')[1].text,
'gov':r.find_elements_by_css_selector('td')[2].text,
'type':r.find_elements_by_css_selector('td')[3].text,
'homepage':r.find_elements_by_css_selector('td')[4].text,
'url':r.find_element_by_css_selector('td.left > a').get_attribute('onclick')
})
page_number += 1
if page_number == 2:
break
# + slideshow={"slide_type": "subslide"}
pub_df = pd.DataFrame(pub_list)
# + slideshow={"slide_type": "fragment"}
pub_df
# + [markdown] slideshow={"slide_type": "subslide"}
# ## Collecting the detail pages - Method 1
# +
table = driver.find_element_by_css_selector('#txt > table')
headers = table.find_elements_by_tag_name('th')
contents = table.find_elements_by_tag_name('td')
header_list = []
content_list = []
for h in range(len(headers)):
header_list.append(headers[h].text)
content_list.append(contents[h].text)
alio_dict = {}
for n in range(len(header_list)):
alio_dict[header_list[n]] = content_list[n]
# -
alio_dict
driver.execute_script("javascript:fnDetail('C0368');")
# + slideshow={"slide_type": "fragment"}
pub_contents = []
for u in pub_df['url']:
driver.execute_script(u)
time.sleep(2)
table = driver.find_element_by_css_selector('#txt > table')
headers = table.find_elements_by_tag_name('th')
contents = table.find_elements_by_tag_name('td')
header_list = []
content_list = []
for h in range(len(headers)):
header_list.append(headers[h].text)
content_list.append(contents[h].text)
alio_dict = {}
for n in range(len(header_list)):
alio_dict[header_list[n]] = content_list[n]
pub_contents.append(alio_dict)
driver.back()
time.sleep(2)
# -
pub_contents
pub_df
# + slideshow={"slide_type": "subslide"}
pub_content_df = pd.DataFrame(pub_contents)
# + slideshow={"slide_type": "fragment"}
pub_content_df
# + slideshow={"slide_type": "subslide"}
pub_df_merge = pd.merge(pub_df, pub_content_df, left_on='name', right_on='기관명', how='left')
# + slideshow={"slide_type": "fragment"}
pub_df_merge
# + slideshow={"slide_type": "subslide"}
pub_df_concat = pd.concat([pub_df, pub_content_df], axis=1)
# + slideshow={"slide_type": "fragment"}
pub_df_concat
# -
# + [markdown] slideshow={"slide_type": "subslide"}
# ## Collecting the detail pages - Method 2
# + [markdown] slideshow={"slide_type": "fragment"}
# ### Extracting the information needed to build the URL
# + slideshow={"slide_type": "fragment"}
# 'https://job.alio.go.kr/orginfoview.do?apba_id=C0005'
# https://job.alio.go.kr/orginfoview.do?pageNo=1&apba_id=C0847
# + slideshow={"slide_type": "fragment"}
pub_url[0]
# -
apba_id = pub_url[0].split("'")
apba_id
# + slideshow={"slide_type": "fragment"}
apba_id = pub_url[0].split("'")[1]
print(apba_id)
# + slideshow={"slide_type": "fragment"}
url = 'https://job.alio.go.kr/orginfoview.do?apba_id=' + apba_id
print(url)
# + slideshow={"slide_type": "subslide"}
pub_real_urls = []
for n in pub_url:
apba_id = n.split("'")[1]
pub_real_urls.append('https://job.alio.go.kr/orginfoview.do?apba_id=' + apba_id)
# + slideshow={"slide_type": "fragment"}
pub_real_urls
# + slideshow={"slide_type": "subslide"}
pub_contents = []
for u in pub_real_urls:
driver.get(u)
time.sleep(2)
rows = driver.find_elements_by_xpath('//*[@id="txt"]/table/tbody/tr')
alio_dict = {}
for r in rows:
header = r.find_element_by_tag_name('th').text
content = r.find_element_by_tag_name('td').text
alio_dict[header]=content
pub_contents.append(alio_dict)
# + slideshow={"slide_type": "subslide"}
pub_content_df = pd.DataFrame(pub_contents)
pub_content_df
# -
# ### Putting it together 1
# + slideshow={"slide_type": "subslide"}
pub_contents = []
for u in pub_url:
apba_id = u.split("'")[1]
pub_real_url = 'https://job.alio.go.kr/orginfoview.do?apba_id=' + apba_id
driver.get(pub_real_url)
time.sleep(2)
rows = driver.find_elements_by_xpath('//*[@id="txt"]/table/tbody/tr')
alio_dict = {}
for r in rows:
header = r.find_element_by_tag_name('th').text
content = r.find_element_by_tag_name('td').text
alio_dict[header]=content
pub_contents.append(alio_dict)
# + slideshow={"slide_type": "subslide"}
pub_content_df = pd.DataFrame(pub_contents)
pub_content_df
# -
# ### Putting it together 2
# +
import csv
file_name = "공공기관정보(job-alio).csv"
f = open(file_name, 'w', encoding='utf-8-sig', newline='')
writer = csv.writer(f)
for u in pub_url:
apba_id = u.split("'")[1]
pub_real_url = 'https://job.alio.go.kr/orginfoview.do?apba_id=' + apba_id
driver.get(pub_real_url)
time.sleep(2)
rows = driver.find_elements_by_xpath('//*[@id="txt"]/table/tbody/tr')
content = []
for r in rows:
content.append(r.find_element_by_tag_name('td').text)
writer.writerow(content)
f.close()
# +
import csv
file_name = "공공기관정보(job-alio)-1.csv"
f = open(file_name, 'w', encoding='utf-8-sig', newline='')
writer = csv.writer(f)
for u in pub_url:
apba_id = u.split("'")[1]
pub_real_url = 'https://job.alio.go.kr/orginfoview.do?apba_id=' + apba_id
driver.get(pub_real_url)
time.sleep(2)
rows = driver.find_elements_by_xpath('//*[@id="txt"]/table/tbody/tr')
content = [r.find_element_by_tag_name('td').text for r in rows]
writer.writerow(content)
f.close()
# + [markdown] slideshow={"slide_type": "slide"}
# ## Headless Chrome
# + [markdown] slideshow={"slide_type": "fragment"}
# ### User Agents
#
# When you access a website, the server can tell which browser and OS you are using and whether you are connecting from a computer or a smartphone.
# If a site detects that the visitor is an automated program rather than a person, it may block the connection.
#
# https://www.whatismybrowser.com/detect/what-is-my-user-agent
# + [markdown] slideshow={"slide_type": "fragment"}
# 
# + slideshow={"slide_type": "subslide"}
headless_options = webdriver.ChromeOptions()
headless_options.headless = True
headless_options.add_argument('window-size=1920x1080')
headless_options.add_argument('User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/92.0.4515.131 Safari/537.36')
headless_options.add_argument('lang=ko_KR')
headless_options.add_argument('disable-gpu')
# + slideshow={"slide_type": "fragment"}
# options = webdriver.ChromeOptions()
# options.add_argument('--ignore-ssl-errors=yes')
# options.add_argument('--ignore-certificate-errors')
# driver = webdriver.Chrome(options=options)
# + slideshow={"slide_type": "subslide"}
from selenium import webdriver
import pandas as pd
import time
headless_driver = webdriver.Chrome('./chromedriver', options=headless_options)
url = 'https://job.alio.go.kr/orginfo.do'
headless_driver.get(url)
# + slideshow={"slide_type": "fragment"}
headless_driver.get_screenshot_as_file("./img/Job-Alio_first.png")
# + [markdown] slideshow={"slide_type": "subslide"}
# screenshot
#
# 
# + slideshow={"slide_type": "subslide"}
checkbox1 = headless_driver.find_element_by_xpath('//*[@id="so01"]')
checkbox1.click()
# + slideshow={"slide_type": "fragment"}
headless_driver.get_screenshot_as_file("./img/Job-Alio_checkbox.png")
# + [markdown] slideshow={"slide_type": "subslide"}
# screenshot2
#
# 
# + slideshow={"slide_type": "subslide"}
pub_contents = []
for u in pub_url:
apba_id = u.split("'")[1]
pub_real_url = 'https://job.alio.go.kr/orginfoview.do?apba_id=' + apba_id
headless_driver.get(pub_real_url)
time.sleep(2)
print(f'processing {apba_id}')
rows = headless_driver.find_elements_by_xpath('//*[@id="txt"]/table/tbody/tr')
alio_dict = {}
for r in rows:
header = r.find_element_by_tag_name('th').text
content = r.find_element_by_tag_name('td').text
alio_dict[header]=content
pub_contents.append(alio_dict)
# + slideshow={"slide_type": "subslide"}
pub_content_df1 = pd.DataFrame(pub_contents)
pub_content_df1
# + slideshow={"slide_type": "fragment"}
headless_driver.quit()
# + [markdown] slideshow={"slide_type": "slide"}
# ## Requests & BeautifulSoup
# + slideshow={"slide_type": "subslide"}
import requests
from bs4 import BeautifulSoup
pub_contents = []
for u in pub_url:
apba_id = u.split("'")[1]
pub_real_url = 'https://job.alio.go.kr/orginfoview.do?apba_id=' + apba_id
res = requests.get(pub_real_url)
time.sleep(2)
print(f'processing {apba_id}')
soup = BeautifulSoup(res.text, 'html.parser')
rows = soup.select('#txt > table > tbody > tr')
alio_dict = {}
for r in rows:
header = r.select_one('th').get_text()
content = r.select_one('td').get_text()
alio_dict[header]=content
pub_contents.append(alio_dict)
# + slideshow={"slide_type": "subslide"}
pub_content_df_request = pd.DataFrame(pub_contents)
pub_content_df_request
# + [markdown] slideshow={"slide_type": "subslide"}
# ### HTTP response code
#
# Following the HTTP protocol specification, the response data comes back together with a response (status) code.
# https://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html
#
# A status code of 200 means a normal (successful) response.
# + slideshow={"slide_type": "fragment"}
res = requests.get('https://job.alio.go.kr/orginfoview.do?apba_id=C0847')
print(res.status_code)
# + slideshow={"slide_type": "fragment"}
import requests
from bs4 import BeautifulSoup
pub_contents = []
for u in pub_url:
apba_id = u.split("'")[1]
pub_real_url = 'https://job.alio.go.kr/orginfoview.do?apba_id=' + apba_id
res = requests.get(pub_real_url)
time.sleep(2)
if res.status_code == 200:
print(f'processing {apba_id}')
soup = BeautifulSoup(res.text, 'html.parser')
rows = soup.select('#txt > table > tbody > tr')
alio_dict = {}
for r in rows:
header = r.select_one('th').get_text()
content = r.select_one('td').get_text()
alio_dict[header]=content
pub_contents.append(alio_dict)
else:
print(f'{apba_id}: {res.status_code}')
# + [markdown] slideshow={"slide_type": "subslide"}
# ### User Agents
# + slideshow={"slide_type": "fragment"}
import requests
res = requests.get('https://naver.com')
res.request.headers
# + slideshow={"slide_type": "fragment"}
import requests
headers = {"User-Agent":"Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/92.0.4515.131 Safari/537.36"}
res = requests.get('https://naver.com', headers=headers)
res.request.headers
# + [markdown] slideshow={"slide_type": "slide"}
# # Dynamic web page structure
# + [markdown] slideshow={"slide_type": "fragment"}
# 
# + [markdown] slideshow={"slide_type": "fragment"}
# 
# + [markdown] slideshow={"slide_type": "slide"}
# # Example 3: Seoul Welfare Portal
#
# https://wis.seoul.go.kr/hope/customizedSearch.do
# + slideshow={"slide_type": "fragment"}
from selenium import webdriver
import pandas as pd
import time
# + slideshow={"slide_type": "fragment"}
driver = webdriver.Chrome('./chromedriver.exe')
url = 'https://wis.seoul.go.kr/hope/customizedSearch.do'
driver.get(url)
# + [markdown] slideshow={"slide_type": "subslide"}
# ## Method 1
# + slideshow={"slide_type": "fragment"}
# //*[@id="content"]/div[1]/div[2]/dl[1]/dd/div/div/table
# //*[@id="content"]/div[1]/div[2]/dl[2]/dd/div/div/table
center_info = driver.find_elements_by_tag_name('table')
print(len(center_info))
for c in center_info:
print(c.text)
# + [markdown] slideshow={"slide_type": "subslide"}
# ## Method 2
# + slideshow={"slide_type": "fragment"}
center_list = []
centers = driver.find_elements_by_xpath('//*[@id="content"]/div[1]/div[2]/dl')
for center in centers:
drop_btn = center.find_element_by_xpath('dt/button')
drop_btn.click()
time.sleep(2)
center_info = {}
ths = center.find_elements_by_tag_name('th')
tds = center.find_elements_by_tag_name('td')
for n in range(len(ths)):
center_info[ths[n].text]=tds[n].text
center_list.append(center_info)
# + slideshow={"slide_type": "subslide"}
center_list
# + slideshow={"slide_type": "fragment"}
center_df = pd.DataFrame(center_list)
center_df
# + [markdown] slideshow={"slide_type": "subslide"}
# ## Method 3
# + slideshow={"slide_type": "fragment"}
from bs4 import BeautifulSoup
soup = BeautifulSoup(driver.page_source, 'html.parser')
centers = soup.select('#content > div.sub_content > div.board_t1.no_out > dl')
center_list = []
for center in centers:
center_info = {}
ths = center.select('th')
tds = center.select('td')
for n in range(len(ths)):
center_info[ths[n].get_text()]=tds[n].get_text().strip(' \t')
center_list.append(center_info)
# + slideshow={"slide_type": "fragment"}
center_list
# + slideshow={"slide_type": "fragment"}
center_df = pd.DataFrame(center_list)
center_df
# + [markdown] slideshow={"slide_type": "subslide"}
# ## Pagination
# + slideshow={"slide_type": "fragment"}
from selenium import webdriver
from bs4 import BeautifulSoup
import pandas as pd
import time
# + slideshow={"slide_type": "fragment"}
driver = webdriver.Chrome('./chromedriver.exe')
url = 'https://wis.seoul.go.kr/hope/customizedSearch.do'
driver.get(url)
# + slideshow={"slide_type": "subslide"}
current_page_n = 1
center_list = []
while current_page_n < 3:
driver.execute_script(f'pagedIng({current_page_n});')
print(f'processing page {current_page_n}')
time.sleep(3)
soup = BeautifulSoup(driver.page_source, 'html.parser')
centers = soup.select('#content > div.sub_content > div.board_t1.no_out > dl')
for center in centers:
center_info = {}
ths = center.select('th')
tds = center.select('td')
for n in range(len(ths)):
center_info[ths[n].get_text()]=tds[n].get_text().strip(' \t')
center_list.append(center_info)
current_page_n += 1
# + slideshow={"slide_type": "subslide"}
center_list
# + slideshow={"slide_type": "fragment"}
center_df = pd.DataFrame(center_list)
center_df
# + [markdown] slideshow={"slide_type": "subslide"}
# ### csv
# + slideshow={"slide_type": "fragment"}
import csv
file_name = "서울시복지시설.csv"
f = open(file_name, 'w', encoding='utf-8-sig', newline='')
writer = csv.writer(f)
current_page_n = 1
while current_page_n < 3:
driver.execute_script(f'pagedIng({current_page_n});')
print(f'processing page {current_page_n}')
time.sleep(3)
soup = BeautifulSoup(driver.page_source, 'html.parser')
centers = soup.select('#content > div.sub_content > div.board_t1.no_out > dl')
for center in centers:
tds = center.select('td')
        content = [td.text for td in tds]
        writer.writerow(content)
current_page_n += 1
f.close()
# + [markdown] slideshow={"slide_type": "slide"}
# # Comparison: Nara Ilteo (gojobs.go.kr)
#
# https://www.gojobs.go.kr/frameMenu.do?url=apmList.do%3FsearchJobsecode%3D050%26searchEmpmnsecode%3De05&menuNo=47&mngrMenuYn=N&message=
# + slideshow={"slide_type": "fragment"}
# https://www.gojobs.go.kr/apmList.do?searchJobsecode=050&searchEmpmnsecode=e05&menuNo=47&empmnsn=0&searchJobsecode=&searchBbssecode=0&searchBbssn=0
# https://www.gojobs.go.kr/apmList.do?searchJobsecode=050&searchEmpmnsecode=e05&menuNo=47&empmnsn=0&searchJobsecode=&searchBbssecode=0&searchBbssn=0&pageIndex=2
# https://www.gojobs.go.kr/apmView.do?empmnsn=135922&searchJobsecode=050
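# + [markdown] slideshow={"slide_type": "fragment"}
# The list URL above takes a `pageIndex` query parameter, so this site can be paged simply by changing
# the URL. A sketch following the same Selenium + BeautifulSoup pattern used earlier is shown below;
# the `table tbody tr` selector is only an assumption and has to be checked against the real page markup.

# + slideshow={"slide_type": "fragment"}
gojobs_url = ('https://www.gojobs.go.kr/apmList.do?searchJobsecode=050'
              '&searchEmpmnsecode=e05&menuNo=47&pageIndex={}')

gojobs_rows = []
for page_n in range(1, 3):
    driver.get(gojobs_url.format(page_n))
    time.sleep(2)
    soup = BeautifulSoup(driver.page_source, 'html.parser')
    # hypothetical selector: adjust after inspecting the actual list table
    for tr in soup.select('table tbody tr'):
        gojobs_rows.append([td.get_text(strip=True) for td in tr.select('td')])

len(gojobs_rows)
# -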
# + [markdown] slideshow={"slide_type": "slide"}
# # Practice: Daum comments
# + slideshow={"slide_type": "fragment"}
from selenium import webdriver
import pandas as pd
import time
driver = webdriver.Chrome('./chromedriver.exe')
# + slideshow={"slide_type": "fragment"}
driver.get('https://news.v.daum.net/v/20210811213002144')
time.sleep(2)
driver.execute_script("window.scrollTo(0, document.body.scrollHeight)")
# + slideshow={"slide_type": "fragment"}
chrono_btn = driver.find_element_by_xpath('//*[@id="alex-area"]/div/div/div/div[3]/ul[1]/li[4]/button')
chrono_btn.click()
# + slideshow={"slide_type": "subslide"}
click_count = 0
while True:
try:
more_btn = driver.find_element_by_xpath('//*[@id="alex-area"]/div/div/div/div[3]/div[3]')
more_btn.click()
time.sleep(5)
click_count += 1
if click_count == 3:
break
except:
break
# + slideshow={"slide_type": "fragment"}
cmts = driver.find_elements_by_css_selector('div.cmt_box > ul.list_comment > li')
# -
len(cmts)
# CSS selector reference: div > strong > span > a > span:nth-child(2)
# + slideshow={"slide_type": "subslide"}
cmt_list = []
for cmt in cmts:
cmt_list.append({
'cmt_name': cmt.find_element_by_css_selector('div > strong > span > a').text,
'cmt_body': cmt.find_element_by_css_selector('div > p').text,
'cmt_like': cmt.find_element_by_css_selector('button.btn_g.btn_recomm.\#like > span.num_txt').text,
'cmt_dislike': cmt.find_element_by_css_selector('button.btn_g.btn_oppose.\#dislike > span.num_txt').text
})
# + slideshow={"slide_type": "fragment"}
cmt_list
# + slideshow={"slide_type": "subslide"}
cmt_df = pd.DataFrame(cmt_list)
# + slideshow={"slide_type": "fragment"}
cmt_df
# -
| 21,953 |
/.ipynb_checkpoints/2019_ALevelCS_PreRelease-checkpoint.ipynb
|
45d0bf6f7e08f0fe4f3515d7753672af1c980280
|
[] |
no_license
|
Xin-SHEN/CIE-ComputerScience
|
https://github.com/Xin-SHEN/CIE-ComputerScience
| 0 | 0 | null | null | null | null |
Jupyter Notebook
| false | false |
.py
| 7,568 |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.15.2
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# *Cambridge Assessment International Education</br>
# Cambridge International Advanced Subsidiary and Advanced Level*
#
# ***
# ### *COMPUTER SCIENCE*
# ### *Paper 2 Fundamental Problem-solving and Programming Skills.*
# ## *PRE-RELEASE MATERIAL*
#
# No Additional Materials are required.
# This material should be given to the relevant teachers and candidates as soon as it has been received at the Centre.
# ***
# #### READ THESE INSTRUCTIONS FIRST
# Candidates should use this material in preparation for the examination. Candidates should attempt the
# practical programming tasks using their chosen high-level, procedural programming language.
# Teachers and candidates should read this material prior to the June 2019 examination for 9608 Paper 2.
#
# #### Reminders
# The syllabus states:
# * there will be questions on the examination paper which do not relate to this pre-release material.
# * you must choose a high-level programming language from this list:
# * Visual Basic (console mode)
# * Python
# * Pascal / Delphi (console mode)
# ## TASK 1 – Arrays
# ### Introduction
# Candidates should be able to write programs to process array data both in pseudocode and in their chosen programming language. It is suggested that each task is planned using pseudocode before writing it in program code.
#
# #### TASK 1.1
# A 1D array of STRING data type will be used to store the name of each student in a class together with
# their email address as follows:
# > <StudentName>'#'<EmailAddress>
#
# An example string with this format would be:
# "Eric Smythe#[email protected]"
# Write program code to:
# 1. declare the array
# 2. prompt and input name and email address
# 3. form the string as shown
# 4. write the string to the next array element
# 5. repeat from step 2 for all students in the class
# 6. output each element of the array in a suitable format, together with explanatory text such as column headings
_list = []
numberOfStudents = 3
for x in range(numberOfStudents):
name = input('Please enter your name:')
email = input('Please enter your email:')
element = name + '#' + email
_list.append(element)
_list
_str = 'jupyter#[email protected]#21'
a = _str.find('#')
a
print('name | email')
for i in range(len(_list)):
index = _list[i].find('#')
print(_list[i][:index],' | ',_list[i][index+1:])
print('name | email')
for student in _list:
index = student.find('#')
print(student[:index],' | ',student[index+1:])
# ### TASK 1.2
# Consider what happens when a student leaves the class and their data item is deleted from the array.
# Decide on a way of identifying unused array elements and only output elements that contain student
# details. Modify your program to include this.
# ***
# The following shows the steps for deleting a single student; it is not the answer to the task
# +
#找到一个字符串中的子字符串
str1 = "Runoob example....wow!!!"
str2 = "exam";
print (str1.find(str2))
print (str1.find(str2, 5))
print (str1.find(str2, 10))
# +
# Suppose the current student list is: ['a#1', 'b#2', 'c#3']
_list = ['a#1', 'b#2', 'c#3']
# Ask for the name of the student to delete
name_input = input('Please enter a name for search:')
# Loop over the student list
for s in _list:
    # Slice each student's name out of the element
    index = s.find('#')
    name = s[:index]
    # If the entered name equals this name
    if name_input == name:
        # Print the matching entry
        print('Find this ',s)
        # Remove this element from the list
        _list.remove(s)
_list
# -
# ***
# The answer to TASK 1.2 follows
# Suppose the current student list is: ['a#1', 'b#2', 'c#3'], where 'b#2' has been removed
_list = ['a#1','', 'c#3']
_list
print('name | email')
for student in _list:
index = student.find('#')
    # If the string contains no '#', the student has been deleted, so find() will return -1
if index != -1:
print(student[:index],' | ',student[index+1:])
| 3,884 |
/bat plot and anamoly detection.ipynb
|
725fb116b914fc83b5b1dc3c7d62049b71d914e0
|
[] |
no_license
|
TheKinginTheNorth-BrandonLuo/May
|
https://github.com/TheKinginTheNorth-BrandonLuo/May
| 0 | 0 | null | null | null | null |
Jupyter Notebook
| false | false |
.py
| 250,703 |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.15.2
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import matplotlib.pyplot as plt
import numpy as np
import os
# +
file_train_loss = './checkpoints/exp10/loss_log_train.txt'
file_val_loss = './checkpoints/exp10/loss_log_val.txt'
file_metric = './checkpoints/exp10/loss_log_val_metric.txt'
all_train_loss = []
with open(file_train_loss, 'r') as f:
for line in f:
#line = f.readline()
loss = float(line.split(' ')[7])
all_train_loss.append(loss)
all_val_loss = []
with open(file_val_loss, 'r') as f:
for line in f:
loss = float(line.split(' ')[8])
all_val_loss.append(loss)
sen = []
pre = []
acc = []
spe = []
mcc = []
with open(file_metric, 'r') as f:
for line in f:
item = line.split(' ')
sen_i = float(item[6][:-1])
sen.append(sen_i)
spe_i = float(item[8][:-1])
spe.append(spe_i)
acc_i = float(item[10][:-1])
acc.append(acc_i)
pre_i = float(item[12][:-1])
pre.append(pre_i)
mcc_i = float(item[14])
mcc.append(mcc_i)
# +
epochs = np.arange(len(acc))
fig = plt.figure()
#plot loss
bx = plt.subplot(211)
bx.plot(epochs, all_train_loss, label='train_loss')
bx.plot(epochs, all_val_loss, label='val_loss')
# Shrink current axis by 20%
box = bx.get_position()
bx.set_position([box.x0, box.y0, box.width, box.height])
# Put a legend to the right of the current axis
bx.legend(loc='center left', bbox_to_anchor=(1, 0.5))
#plot metric
ax = plt.subplot(212)
ax.plot(epochs, sen, label='sen')
ax.plot(epochs, spe, label='spe')
ax.plot(epochs, acc, label='acc')
ax.plot(epochs, pre, label='pre')
ax.plot(epochs, mcc, label='mcc')
# Shrink current axis by 20%
box = ax.get_position()
ax.set_position([box.x0, box.y0, box.width, box.height])
# Put a legend to the right of the current axis
ax.legend(loc='center left', bbox_to_anchor=(1, 0.5))
plt.show()
# -
(st[0].stats.starttime))
# -
# The idea is for the feature matrix $X$ to contain one column per feature. For now 16: three from the raw components, 12 from the filtered components (4 filters) and 3 from the spectrograms of each component.
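#
# As an illustration only, such a matrix could be assembled column-wise with NumPy; the arrays used
# below are random placeholders, not the notebook's actual features.

# +
import numpy as np

rng = np.random.default_rng(0)
n_samples = 256

# placeholder feature groups: 3 raw components, 4 filters x 3 components, 3 spectrogram summaries
raw = rng.normal(size=(n_samples, 3))
filtered = rng.normal(size=(n_samples, 12))
spec = rng.normal(size=(n_samples, 3))

X = np.column_stack([raw, filtered, spec])
print(X.shape)  # (256, 16)
# -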
# +
from python_speech_features import sigproc, get_filterbanks, lifter, hz2mel, mel2hz
import numpy
from scipy.fftpack import dct
for w in wf:
    # determine whether the waveform is a P phase, S phase or noise
ph_type = w.split("/")[-1].split("_")[0]
print(w)
st_ = read(w)
st = st_.copy()
st.detrend(type='linear')
tr =st[0]
tr.normalize()
#tr.plot()
df = tr.stats.sampling_rate
x = tr.data
    # obtaining the MFCC coefficients
    """winlen is the window length in seconds; with 200/df a window of 200 samples is selected.
    winstep is the sliding step: a new window is computed every 80 samples.
    numcep selects the number of discrete cosine transform coefficients that are kept.
    At most nfilt of them can be taken (26 by default); here the first 13 coefficients
    were selected (the standard in ASR).
    Since the mfcc function returns a matrix of (number of windows) x numcep, a reshape
    was applied to leave the data as a single row vector"""
winlen = 5.13
winstep = 100
nfilt = 26
nfft = 1024
winfunc=lambda x:numpy.ones((x,))
highfreq= df/2
lowfreq = 1
ceplifter = 22
numcep = 13
t = np.arange(len(x))/100
    f = np.linspace(0, highfreq, nfft // 2 + 1)
    # with a pre-emphasis coefficient of 0.95 the (low-frequency) signal amplitude drops to about 5% of its original value
signal = sigproc.preemphasis(x,0.95)
plt.subplot(221)
plt.plot(t,x)
plt.title("Señal sin preénfasis")
plt.xlabel("Tiempo [s]")
plt.ylabel("Amplitud [cuentas]")
plt.grid()
plt.subplot(222)
plt.plot(t,signal)
plt.title("Señal con preénfasis")
plt.xlabel("Tiempo [s]")
plt.grid()
#print("type signal",type(signal),"shape", signal.shape)
    # split into windows. Since in this case the window size equals the signal length (256),
    # a single (identical) window remains
frames = sigproc.framesig(signal, winlen*df, winstep*df, winfunc)
#print("type frames",type(frames),"shape", frames.shape)
pspec = sigproc.powspec(frames,nfft)
#print("type pspec",type(pspec),"shape", pspec.shape)
plt.figure()
plt.plot(f,pspec[0])
#plt.title("Espectro de potencia de fase P")
plt.xlabel("Frecuencia [hz]")
plt.ylabel("Potencia")
plt.grid()
    # Computing the energy
energy = numpy.sum(pspec,1) # this stores the total energy in each frame
energy = numpy.where(energy == 0,numpy.finfo(float).eps,energy) # if energy is zero, we get problems with log
fb, hz_points = get_filterbanks(nfilt,nfft,df,lowfreq,highfreq, points=True)
#print(hz_points)
plt.figure()
plt.plot(hz_points,np.zeros(len(hz_points)),"o")
#plt.title("Extremos del banco de filtros centrado en 7 hertz")
plt.xlabel("Frecuencia [hz]")
plt.yticks([], [])
#print("type fb",type(fb),"shape", fb.shape)
plt.figure()
for i in range(len(fb)):
plt.plot(f,fb[i])
#plt.legend()
#plt.title("Banco de filtros triangulares centrados en 7 hertz")
plt.xlabel("Frecuencia [hz]")
plt.ylabel("Amplitud")
feat = numpy.dot(pspec,fb.T) # compute the filterbank energies
#print("type feat",type(feat),"shape", feat.shape)
plt.figure()
plt.plot(feat[0])
plt.ylabel("Amplitud")
plt.xlabel("Filtro triangular usado")
#plt.title("fbank-> power_espectrum * fb "+ph_type)
log_feat = np.log(feat)
#print("type log_feat",type(log_feat),"shape", log_feat.shape)
plt.figure()
plt.plot(log_feat[0])
plt.title("log_fbank-> log(fbank) "+ph_type)
mfcc_1 = dct(feat, type=2, axis=1, norm='ortho')[:,:numcep]
#print("type mfcc_1",type(mfcc_1),"shape", mfcc_1.shape)
plt.figure()
plt.plot(mfcc_1[0])
plt.title("mfcc_1 "+ph_type)
    # apply a lifter to boost the magnitude of the high-frequency DCT coefficients
mfcc_2 = lifter(mfcc_1,ceplifter)
#print("type mfcc_2",type(mfcc_2),"shape", mfcc_2.shape)
plt.figure()
plt.plot(mfcc_2[0])
plt.title("mfcc_2 "+ph_type)
# replace first cepstral coefficient with log of frame energy
mfcc_2[:,0] = numpy.log(energy)
plt.figure()
plt.matshow(mfcc_2)
plt.colorbar()
plt.title("mfcc_2 with energy"+ph_type)
"""#f_fbank, energy = fbank(x, df, 512/df, 80/df)
print(type(f_fbank), f_fbank.shape)
feat_mfcc_ = mfcc(x, df, 512/df, 80/df)
feat_new = feat_mfcc_.reshape(1,feat_mfcc_.shape[1]*feat_mfcc_.shape[0])
print('\nMFCC:\nNumber of windows =', feat_new.shape[0])
print('Length of each feature =', feat_new.shape[1])
plt.figure(clear=True)
plt.matshow(feat_mfcc_)
plt.title(ph_type)"""
# -
# **get_filterbanks** is the function that defines the filters used to compute the filterbanks. It would have to be modified to make sure it considers the frequencies that are relevant for earthquakes. For that, the **hz2mel** function, which defines the Mel scale, also has to be modified.
nfilt = 16
lowfreq = 1
highfreq = 10
lowfreq2 = 10+1
highfreq2 = 50
def my_mel2hz(x,t=1):
if t==1: return (x)**2+1
else: return -(x)**2+1
def my_hz2mel(x, t=1):
if t==1: return np.sqrt(x-1)
else: return -np.sqrt(x-1)
print(lowfreq, highfreq)
lowmel1 = my_hz2mel(lowfreq,2)
highmel1 = my_hz2mel(highfreq,2)
lowmel2 = my_hz2mel(lowfreq2)
highmel2 = my_hz2mel(highfreq2)
#lowmel = highmel/2
step = nfilt/2+1
#step = nfilt/2+1
print(lowmel1,highmel1,lowmel2,highmel2)
#melpoints = np.zeros(step)
melpoints1 = numpy.linspace(lowmel1,highmel1,int(step))
melpoints2 = numpy.linspace(lowmel2,highmel2,int(step))
#melpoints = numpy.linspace(lowmel2,highmel2,nfilt/2+1)
melpoints = np.append(melpoints1,melpoints2)
hz1 = my_mel2hz(melpoints1,2)+9
hz2 = my_mel2hz(melpoints2)
hz = np.append(hz1,hz2)
#print(melpoints1, melpoints2)
y = np.zeros([len(melpoints)])
#hz = -my_mel2hz(melpoints,2)
print(melpoints)
print(hz)
print(numpy.sort(hz))
plt.subplot(2,2,1)
plt.plot(hz, y, "o")
plt.subplot(2,2,2)
plt.plot(melpoints, hz, "o")
#plt.plot(melpoints)
#plt.plot(my_hz2mel(np.linspace(1,300,16)))
#plt.plot(my_mel2hz(np.linspace(3,12,16)))
st = read("/home/daniel/Data/wf_512_0.5/P/P_SGC2016lseg_20160615-113015_HHZ.mseed")
tr_ = st[0]
tr = tr_.copy()
tr.normalize()
x1 = tr_.data
x2 = tr.data
plt.subplot(221)
plt.plot(t,x1)
plt.title("Señal sin normalizar")
plt.xlabel("Tiempo [s]")
plt.ylabel("Amplitud [cuentas]")
plt.grid()
plt.subplot(222)
plt.plot(t,x2)
plt.title("Señal normalizada")
plt.xlabel("Tiempo [s]")
plt.grid()
i=0
for w in wf:
i+=1
    # determine whether the waveform is a P phase, S phase or noise
ph_type = w.split("/")[-1].split("_")[0]
print(w)
st = read(w)
st.detrend(type='linear')
tr =st[0]
#tr.normalize()
df = tr.stats.sampling_rate
x = tr.data
sub_num = int("22"+str(i))
print(sub_num)
plt.subplot(sub_num)
if i == 1:
plt.title("Segmento de sismo")
plt.ylabel("Amplitud [cuentas]")
else:
plt.title("Segmento de ruido")
plt.xlabel("Tiempo [s]")
plt.grid()
plt.plot(t,x)
x1 = read("/home/daniel/Data/wf_512/P/P_SGC2016lseg_20160615-113015_HHZ.mseed")[0].data
x2 = read("/home/daniel/Data/wf_512_0.5/P/P_SGC2016lseg_20160615-113015_HHZ.mseed")[0].data
plt.subplot(221)
plt.plot(t,x1)
plt.xlabel("Tiempo [s]")
plt.ylabel("Amplitud [cuentas]")
plt.grid()
plt.subplot(222)
plt.plot(t,x2)
plt.xlabel("Tiempo [s]")
plt.grid()
| 9,821 |
/TRATANDO_VALORES_NULOS_.ipynb
|
52f1c05be8e4f34221e1a14a8ffb05d6ce0669cf
|
[] |
no_license
|
ClauderCarvalho/ABERTO
|
https://github.com/ClauderCarvalho/ABERTO
| 0 | 0 | null | null | null | null |
Jupyter Notebook
| false | false |
.py
| 30,231 |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.15.2
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/marcoswds/sentiment-analysis/blob/main/main.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + id="_SMUMjq61Z3G"
import pandas as pd
df = pd.read_csv('imdb.csv',sep=';')
# + colab={"base_uri": "https://localhost:8080/", "height": 204} id="qvqx-q_S2F7d" outputId="5481ed1f-9301-421d-f441-92135407826c"
df.head()
# + id="3V7A_KDdVbu-"
# drop the columns that were imported incorrectly
df.drop(['Unnamed: 1','Unnamed: 2','Unnamed: 3','Unnamed: 4','Unnamed: 5','Unnamed: 6'],axis=1,inplace=True)
# + id="KUdO-HCOVnPe"
# rename the imported column to the correct name
df.rename(columns={'review,sentiment': 'review'},inplace=True)
# + id="IOhz6I1iWk2b"
# create a sentiment column by taking the text from the last comma to the end
df['sentiment'] = df['review'].apply( lambda x: x.split(',')[-1])
# + id="45uuEhpIWyFQ"
# remove the sentiment information from the review column
df['review'] = df['review'].apply( lambda x: x[:-1*(len(x.split(',')[-1]))])
# + id="W57I-ZDueKLj"
def limpar_lixo(x:str) -> str:
lixos = [
'<br /><br />'
]
for lixo in lixos:
x = x.replace(lixo,'')
return x
df['review'] = df['review'].apply( lambda x: limpar_lixo(x))
# + colab={"base_uri": "https://localhost:8080/"} id="3MKZvmxPZAUs" outputId="e8363083-7f0e-47f9-ace5-f467759226fa"
import nltk
nltk.download(["stopwords","punkt","vader_lexicon"])
# + id="CoNRMBkRXze4"
from nltk.tokenize import word_tokenize
df['tokenized_text'] = df['review'].apply(word_tokenize)
# + id="C2nuafEfZIMh"
from nltk.corpus import stopwords
stopwords = stopwords.words('english')
df['tokenized_text'] = df['tokenized_text'].apply(lambda x: [elem for elem in x if elem not in stopwords ])
# + id="9x5ws_Dfh_1o"
from nltk.stem import SnowballStemmer
stemmer = SnowballStemmer("english")
df['tokenized_text'] = df['tokenized_text'].apply(lambda x: [stemmer.stem(elem) for elem in x ])
# + id="PYd2ih6zDBtr"
import numpy as np
lst_col = 'tokenized_text'
# Create one row for each word in each tokenized_text list
df_words = pd.DataFrame({
col:np.repeat(df[col].values, df[lst_col].str.len())
for col in df.columns.drop(lst_col)}
).assign(**{lst_col:np.concatenate(df[lst_col].values)})[df.columns]
# + id="WHB_F59PEMqV"
# Create an index: 0 for negative and 1 for positive, according to the sentiment
df_words['sentiment_index'] = df_words['sentiment'].apply(lambda x: 1 if x == 'positive' else 0)
# The sentiment_index column now holds a value between 0 and 1 indicating whether the word is more associated with positive or negative reviews
df_words = df_words.groupby(['tokenized_text'], as_index=False).agg({'sentiment_index': [np.mean]})
df_words.columns = list(map(''.join, df_words.columns.values))
# Build a dictionary from the sentiment index to make lookups easier
dict_tst = df_words.set_index('tokenized_text')['sentiment_indexmean'].to_dict()
# + [markdown] id="hxTmgd_FQp9E"
# Much of the modeling from here on was based on:
# https://realpython.com/python-nltk-sentiment-analysis/#training-and-using-a-classifier
# + id="g71q4dXe3eeL"
from statistics import mean
def find_positive_score(word):
try:
positive_score = dict_tst[word]
except:
positive_score = None
return positive_score
# function to use with free-standing sentences
def extract_features(text):
limpar_lixo(text)
words = word_tokenize(text)
words = [elem for elem in words if elem not in stopwords]
words = [stemmer.stem(elem) for elem in words ]
features = dict()
wordcount = 0
positive_scores = list()
for word in words:
if not find_positive_score(word)==None:
positive_scores.append(find_positive_score(word))
features["mean_positive"] = mean(positive_scores)
return features
# function to use with everything already preprocessed in the dataframe
def extract_features_ready(words):
features = dict()
positive_scores = list()
for word in words:
if not find_positive_score(word)==None:
positive_scores.append(find_positive_score(word))
features["mean_positive"] = mean(positive_scores)
return features
# + id="YlBdCXq8-eBs"
features = [
(extract_features_ready(review), "pos")
for review in df[df['sentiment'] == 'positive'].tokenized_text.tolist()
]
features.extend([
(extract_features_ready(review), "neg")
for review in df[df['sentiment'] == 'negative'].tokenized_text.tolist()
])
# + colab={"base_uri": "https://localhost:8080/"} id="FT21Gs_UJvcg" outputId="f065d4e0-1afc-4ea7-bc86-46cc0fad8b9e"
from sklearn.naive_bayes import (
BernoulliNB,
ComplementNB,
MultinomialNB,
)
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier, AdaBoostClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier
from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis
from random import shuffle
classifiers = {
"BernoulliNB": BernoulliNB(),
"ComplementNB": ComplementNB(),
"MultinomialNB": MultinomialNB(),
"KNeighborsClassifier": KNeighborsClassifier(),
"DecisionTreeClassifier": DecisionTreeClassifier(),
"RandomForestClassifier": RandomForestClassifier(),
"LogisticRegression": LogisticRegression(),
"MLPClassifier": MLPClassifier(max_iter=1000),
"AdaBoostClassifier": AdaBoostClassifier(),
}
train_count = len(features) // 4
shuffle(features)
for name, sklearn_classifier in classifiers.items():
classifier = nltk.classify.SklearnClassifier(sklearn_classifier)
classifier.train(features[:train_count])
accuracy = nltk.classify.accuracy(classifier, features[train_count:])
print(F"{accuracy:.2%} - {name}")
# + colab={"base_uri": "https://localhost:8080/"} id="QiUNwAJMlThA" outputId="2d64ec98-fc50-4286-d9be-94eb1815d7a1"
from nltk.sentiment import SentimentIntensityAnalyzer
sia = SentimentIntensityAnalyzer()
def calculate_sia_score(review):
sia_score = sia.polarity_scores(review)
if sia_score['pos'] >= sia_score['neg']:
return 'positive'
else:
return 'negative'
df['sia_sentiment'] = df['review'].apply(lambda x:calculate_sia_score(x))
df['sia_index'] = df.apply(lambda x: 1 if x['sia_sentiment'] == x['sentiment'] else 0,axis=1)
sia_list = df['sia_index'].to_list()
print(F"{sum(sia_list)/len(sia_list):.2%} - SentimentIntensityAnalyzer")
# + colab={"base_uri": "https://localhost:8080/", "height": 35} id="dj8pTm3nPL68" outputId="e1942d43-7f21-432a-8f7d-e94277b46337"
new_review = "I'm in a hurry"
f = extract_features(new_review)
classifier.classify(f)
| 7,025 |
/time_series_jupyterlab_v3.ipynb
|
1bcab22cd0396788fd0537f2d7d2a7d649cb616c
|
[
"MIT"
] |
permissive
|
donghaozhang/SensorDH
|
https://github.com/donghaozhang/SensorDH
| 0 | 0 | null | null | null | null |
Jupyter Notebook
| false | false |
.py
| 20,793 |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.15.2
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
# # Redshift SQL Tricks
# + [markdown] run_control={"frozen": false, "read_only": false}
# ## 1) adding column indicating if row is first row
#
# + [markdown] run_control={"frozen": false, "read_only": false}
# ```sql
# select c1,
# floor(1 / (row_number() over (partition BY c1 order by c2))) as is_first_row
# from t
# ```
# + [markdown] run_control={"frozen": false, "read_only": false}
# ## 2) Pivot row into column
#
# + [markdown] run_control={"frozen": false, "read_only": false}
# ```sql
# select pk,
# max(case when c1 = 1 then c2
# else '#N/A'
# end) as c3,
# max(case when c1 = 2 then c2
# else '#N/A'
# end) as c4,
# from t
# group by pk
# ```
# + [markdown] run_control={"frozen": false, "read_only": false}
# ## 3) pivoting column into row
#
# + [markdown] run_control={"frozen": false, "read_only": false}
# ```sql
# select c2
# from t
# union
# select c1
# from t
# ```
# + [markdown] run_control={"frozen": false, "read_only": false}
# ## 4) joining on a table with multiple alternative primary keys
# When joining on a table with several possible primary keys (and no surrogate gk was created), values from different key columns can collide, so each key is prefixed with a distinct letter to make sure rows are matched to the correct account.
# + [markdown] run_control={"frozen": false, "read_only": false}
# ```sql
# coalesce('u' + t1.pk1::varchar(255),'s' + t1.pk2::varchar(255),'a' + t1.pk3::varchar(255)) =
# coalesce('u' + t2.pk1::varchar(255),'s' + t2.pk2::varchar(255),'a' + t2.pk3::varchar(255))
# ```
# + [markdown] run_control={"frozen": false, "read_only": false}
# ## 5) check duplication on tables
#
# + [markdown] run_control={"frozen": false, "read_only": false}
# ```sql
# select pk,
# count(1)
# from t
# group by pk
# having count(1)>1
# ```
# + [markdown] run_control={"frozen": false, "read_only": false}
# ## 6) convert timestamp to int
# + [markdown] run_control={"frozen": false, "read_only": false}
# ```sql
# select EXTRACT('EPOCH' FROM c1)
# from t
# ```
# + [markdown] run_control={"frozen": false, "read_only": false}
# ## 7) get rows which are not connected in left join
# + [markdown] run_control={"frozen": false, "read_only": false}
# ```sql
# select *
# from t1
# left join t2
# on t2.pk = t1.pk
# where t2.pk is null
# ```
# + [markdown] run_control={"frozen": false, "read_only": false}
# ## 7) profile over facts using a sparse matrix
# When building a profile table t1 from the fact tables t2, t3 and t4, the cleanest way is to insert the aggregations over each fact table into t1_temp. The granularity is then no longer the pk of t1 (because of the sparsity), so everything has to be grouped together at the end.
# + [markdown] run_control={"frozen": false, "read_only": false}
# ```sql
# insert into t1_temp
# from <query over t2>;
#
# insert into t1_temp
# from <query over t3>;
#
# insert into t1_temp
# from <query over t4>;
#
# insert into t1
# from t1_temp
# group by pk1;
# ```
# + [markdown] run_control={"frozen": false, "marked": false, "read_only": false}
# ## 8) exact duplication row_number
#
# + [markdown] run_control={"frozen": false, "read_only": false}
# The usual pattern is to keep only the rows where row_number(), partitioned by all the columns that define a duplicate, equals 1:
# http://stackoverflow.com/questions/18932/how-can-i-remove-duplicate-rows
# + [markdown] run_control={"frozen": false, "read_only": false}
# ## 10) RI (referential integrity) check
# Useful when a pk can appear in the fact table before it appears in the dimension.
#
# + [markdown] run_control={"frozen": false, "read_only": false}
# ```sql
# create table t3_que as
# select c1
#   from t1_fact
#   left join t2_dim
#     on t1_fact.pk = t2_dim.pk
#  where t2_dim.pk is null
# ```
# + [markdown] run_control={"frozen": false, "read_only": false}
# ## 11) get the value right after the last occurrence of the character '_'
#
# + [markdown] run_control={"frozen": false, "read_only": false}
# ```sql
# select REVERSE(SPLIT_PART(REVERSE(c1),'_',1))
# from t
# ```
# + [markdown] run_control={"frozen": false, "read_only": false}
# ## 12) checking whether someone had 72 hours of business hours
# * sunday/saturday he gets until end of monday
# * friday he gets 72 hours
# * otherwise he gets 24 hours
# + [markdown] run_control={"frozen": false, "read_only": false}
# ```sql
# select c1,
# c2
# from t1
# where ((date_part(dow, c1)=6 AND datediff(day, c1, c2)<=2) OR
# (date_part(dow, c1)=0 AND datediff(day, c1, c2)<=1) OR
# (date_part(dow, c1)=5 AND datediff(hour, c1, c2) <= 72) OR
# (date_part(dow, c1) in (1,2,3,4) AND datediff(hour, c1, c2)<=24))
# ```
# + [markdown] run_control={"frozen": false, "read_only": false}
# ## 13) constructing table over time from event history table
#
# + [markdown] run_control={"frozen": false, "read_only": false}
# ```sql
# select c1
# c2 AS from_time,
# isnull(LEAD(c2) OVER (PARTITION BY c1 ORDER BY c2 ), '2999-01-01') AS to_time
# from t1
# where event_name = 'x'
# ```
# * which allows you to do:
# - where t2.c1 BETWEEN t1.from_time AND t1.to_time
# + [markdown] run_control={"frozen": false, "read_only": false}
# ## 14) interval to window function using cross join (a varying interval can be handled with 2 cross-joined intervals)
#
#
# + [markdown] run_control={"frozen": false, "read_only": false}
# ```sql
# create temp table exploded as
# select row_number() over (order by true) as day
# from t1 limit 10;
#
#
# select exploded.day as day,
# count(distinct c1)
# from t2
# cross join exploded
# where getdate() > t2.c2
# and getdate() < DATEADD('day', exploded.day, t2.c2)
# group by day
# ```
# + [markdown] run_control={"frozen": false, "read_only": false}
# ## 14) generate unique gks
#
# + [markdown] run_control={"frozen": false, "read_only": false}
# ```sql
# (select isnull(max(gk),0) from t) + row_number() over (order by 1) as gk
# ```
# + [markdown] run_control={"frozen": false, "read_only": false}
# ## 15) flip flop
# + [markdown] run_control={"frozen": false, "read_only": false}
# http://mysql.rjweb.org/doc.php/staging_table#flip_flop_staging
# + [markdown] run_control={"frozen": false, "read_only": false}
# ## 16) # of random rows
# + [markdown] run_control={"frozen": false, "read_only": false}
# ```sql
# select * from table order by random() limit 1000;
#
# ```
# + [markdown] run_control={"frozen": false, "marked": true, "read_only": false}
# ## 17) % of random rows
#
# + [markdown] run_control={"frozen": false, "read_only": false}
# ```sql
# select * from table where random() < 0.01;
# ```
# -
# <br/><br/><img src='resources/sql.jpg' width=500 align="left"></img><br/><br/>
sensor_array[row_i, 13] = sensor_row[19]
sensor_array[row_i, 14] = sensor_row[20]
sensor_array[row_i, 15] = sensor_row[21]
return sensor_array
# class MyCustomDataset(Dataset):
# def __init__(self):
# # stuff
# def __getitem__(self, index):
# # stuff
# return (img, label)
# def __len__(self):
# return count # of how many examples(images?) you have
# -
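# The commented skeleton above can be turned into a working PyTorch Dataset. A minimal sketch is
# shown below, assuming the windows and labels are NumPy arrays shaped (N, n_features, window_size)
# and (N, 1), as produced later in this notebook; the class name and the random demo data are
# placeholders, not part of the original code.

# +
import numpy as np
import torch
from torch.utils.data import Dataset, DataLoader

class SensorWindowDataset(Dataset):
    def __init__(self, windows, labels):
        # windows: (N, n_features, window_size), labels: (N, 1)
        self.windows = torch.tensor(windows).float()
        self.labels = torch.tensor(labels).long().squeeze(-1)

    def __getitem__(self, index):
        return self.windows[index], self.labels[index]

    def __len__(self):
        return len(self.windows)

# usage with random placeholder data
demo_ds = SensorWindowDataset(np.random.randn(8, 13, 128), np.zeros((8, 1)))
demo_dl = DataLoader(demo_ds, batch_size=4, shuffle=True)
# -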
# generate sensor dataset
# 9533: talk -> eat -> read -> drink 20mins in total 5 mins for each action
def sample_sensor_data(input_data, window_sz = 128, sample_sz = 130):
sensor_length = input_data.shape[0]
print('The shape of sensor input data', input_data.shape)
feature_sz = input_data.shape[1]
data_sz = 0
print('the length of sensor', sensor_length)
for i in range(0, sensor_length-window_sz-sample_sz, sample_sz):
data_sz = data_sz + 1
all_sensor_data = np.zeros((data_sz, feature_sz, window_sz))
cnt = 0
for i in range(0, sensor_length-window_sz-sample_sz, sample_sz):
sample = input_data[i:i + window_sz, :]
sample = np.transpose(sample)
all_sensor_data[cnt, :, :] = sample
cnt = cnt + 1
print('the shape of sensor dataset', all_sensor_data.shape)
return all_sensor_data
# processed_head_9533 = sample_sensor_data(head_9533)
# +
# import matplotlib.pyplot as plt
# import os
# # C:\Users\zhjsc\Desktop\zongyuan\sensor\repos\time_series_network\sensor_data_v3
# files = [f for f in os.listdir('sensor_data_v3') if os.path.isfile(f)]
# print(files)
# head_sensor = load_sensor_data(fname='sensor_data_v3\head_20200917.txt')
# xarray = np.asarray([i for i in range(len(head_sensor[:, 1]))])
# for j in range(16):
# plt.plot(xarray, head_9533_with_h[:, j])
# plt.ylabel(str(j))
# plt.show()
# -
def load_sensor_data_without_h(fname):
sensor_txt = np.genfromtxt(fname, delimiter=',', dtype=None, encoding=None)
# a 2-4 w(augular velocity) 6-8 Angle 10-12 h 14-16 q(quaternion) 18-21
# a 0-2 w 3-5 Angle 6-8 q 9 10 11 12
row_len = 3*3 + 4
data_length = len(sensor_txt)
sensor_array = np.zeros((data_length, row_len))
for row_i, sensor_row in enumerate(sensor_txt):
# a 2-4
sensor_array[row_i, 0] = sensor_row[2]
sensor_array[row_i, 1] = sensor_row[3]
sensor_array[row_i, 2] = sensor_row[4]
# w 6-8
sensor_array[row_i, 3] = sensor_row[6]
sensor_array[row_i, 4] = sensor_row[7]
sensor_array[row_i, 5] = sensor_row[8]
# Angle 10-12
sensor_array[row_i, 6] = sensor_row[10]
sensor_array[row_i, 7] = sensor_row[11]
sensor_array[row_i, 8] = sensor_row[12]
# q 18-21
sensor_array[row_i, 9] = sensor_row[18]
sensor_array[row_i, 10] = sensor_row[19]
sensor_array[row_i, 11] = sensor_row[20]
sensor_array[row_i, 12] = sensor_row[21]
return sensor_array
# 9533: talk -> eat -> read -> drink 20mins in total 5 mins for each action
head_sensor = load_sensor_data_without_h(fname='sensor_data_v3\head_20200917.txt')
all_sensor_data = sample_sensor_data(head_sensor)
import pandas as pd
sensor_length = head_sensor.shape[0]
window_sz = 128
sample_sz = 130
label = pd.read_csv('video_20200917.csv', index_col=0)
#14682
#slowfast 1-457
# print(label['eat'].iloc[:3])
label_array = label.to_numpy()
# print(label.iloc[0]['talk'])
# print(label_array[0])
# print(label.shape[0])
def sensor_to_slowfast(sensor_index, sensor_data, label_data):
slowfast_index = int(sensor_index / sensor_data.shape[0] * label_data.shape[0])
return slowfast_index
sensor_to_slowfast(sensor_index=20+128, sensor_data = head_sensor, label_data=label_array)
sensor_start_array = np.zeros((all_sensor_data.shape[0], 1))
cnt = 0
for i in range(0, sensor_length-window_sz-sample_sz, sample_sz):
sensor_start_array[cnt] = i
cnt = cnt + 1
# print(i)
# print(sensor_start_array)
# generate labels for 9533
# counter for text on/look at a cellphone
all_label = np.zeros((all_sensor_data.shape[0], 1))
cnt = 0
from collections import Counter
for i in range(0, sensor_length-window_sz-sample_sz, sample_sz):
start = sensor_to_slowfast(sensor_index=i, sensor_data = head_sensor, label_data=label_array)
end = sensor_to_slowfast(sensor_index=i+window_sz, sensor_data = head_sensor, label_data=label_array)
cur_label_array = label_array[start:end, :]
all_label[cnt] = (np.argmax(np.sum(cur_label_array,axis=0)))
cnt = cnt + 1
print(all_label.shape)
combine_flag = True
if combine_flag:
combine_label = all_label
combine_data = all_sensor_data
print(combine_label.shape)
valid_pct = 0.2
sz = combine_label.shape[0]
idx = np.arange(sz)
bs = 4
trn_idx, val_idx = train_test_split(idx, test_size=valid_pct, random_state=seed)
print(combine_data.shape)
trn_ds = TensorDataset(torch.tensor(combine_data[trn_idx]).float(), torch.tensor(combine_label[trn_idx]).long())
trn_dl = DataLoader(trn_ds, batch_size=bs, shuffle=True, num_workers=0)
val_ds = TensorDataset(torch.tensor(combine_data[val_idx]).float(), torch.tensor(combine_label[val_idx]).long())
val_dl = DataLoader(val_ds, batch_size=bs, shuffle=True, num_workers=0)
else:
valid_pct = 0.2
sz = all_label.shape[0]
idx = np.arange(sz)
bs = 4
trn_idx, val_idx = train_test_split(idx, test_size=valid_pct, random_state=seed)
print(all_sensor_data.shape)
trn_ds = TensorDataset(torch.tensor(all_sensor_data[trn_idx]).float(), torch.tensor(all_label[trn_idx]).long())
trn_dl = DataLoader(trn_ds, batch_size=bs, shuffle=True, num_workers=0)
val_ds = TensorDataset(torch.tensor(all_sensor_data[val_idx]).float(), torch.tensor(all_label[val_idx]).long())
val_dl = DataLoader(val_ds, batch_size=bs, shuffle=True, num_workers=0)
print(len(trn_ds))
# +
import torch as tf
lr = 0.001
n_epochs = 1000
iterations_per_epoch = len(trn_ds)
num_classes = 8
best_acc = 0
patience, trials = 500, 0
base = 1
step = 2
loss_history = []
acc_history = []
device = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')
model = Classifier_dh(combine_data.shape[1], num_classes).to(device)
criterion = nn.CrossEntropyLoss(reduction='sum')
opt = optim.Adam(model.parameters(), lr=lr)
print('Start model training')
for epoch in range(1, n_epochs + 1):
model.train()
epoch_loss = 0
for i, batch in enumerate(trn_dl):
x_raw, y_batch = [t.to(device) for t in batch]
opt.zero_grad()
out = model(x_raw)
y_batch = tf.squeeze(y_batch)
loss = criterion(out, y_batch)
epoch_loss += loss.item()
loss.backward()
opt.step()
loss_history.append(epoch_loss)
model.eval()
correct, total = 0, 0
for batch in val_dl:
x_raw, y_batch = [t.to(device) for t in batch]
y_batch = tf.squeeze(y_batch)
out = model(x_raw)
preds = F.log_softmax(out, dim=1).argmax(dim=1)
# print(preds, y_batch)
total += y_batch.size(0)
correct += (preds == y_batch).sum().item()
acc = correct / total
acc_history.append(acc)
if epoch % base == 0:
print(f'Epoch: {epoch:3d}. Loss: {epoch_loss:.4f}. Acc.: {acc:2.2%}')
base *= step
if acc > best_acc:
trials = 0
best_acc = acc
torch.save(model.state_dict(), 'best.pth')
print(f'Epoch {epoch} best model saved with accuracy: {best_acc:2.2%}')
else:
trials += 1
if trials >= patience:
print(f'Early stopping on epoch {epoch}')
break
# -
| 14,327 |
/Predication of Games Sales Exploratory Data Analysis .ipynb
|
f878633b05e8ccfa1ce88c4ac9fae26c8cc8fa06
|
[] |
no_license
|
Satyajit7791/GamesSales-EDA
|
https://github.com/Satyajit7791/GamesSales-EDA
| 0 | 0 | null | null | null | null |
Jupyter Notebook
| false | false |
.py
| 629,850 |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.15.2
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # EDA ANALYSIS OF VIDEO GAMES SALES
# importing required libraries
import pandas as pd
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
# %matplotlib inline
# ### Data Reading
#
sat=pd.read_csv("vgsales.csv")
sat.shape
sat.info()
sat.head()
# ### Data cleaning
sat.isnull().sum()
sat.drop('Year',axis=1,inplace = True)
sat.drop('Publisher',axis=1,inplace = True)
sat.head()
sat.tail(3)
# ### Data information
sat.describe()
sat['Genre'].unique()
# #### looking for any Outliers available
sat.boxplot()
sat.hist()
# ### Data Visulation
sns.pairplot(sat) # all information in one block
# #### Relationship analysis
cor= sat.corr()
sns.heatmap(cor,xticklabels=cor.columns,yticklabels=cor.columns,annot=True)
grp = sat.groupby('Genre')
a = grp['NA_Sales'].agg(np.sum)
b = grp['EU_Sales'].agg(np.sum)
c = grp['JP_Sales'].agg(np.sum)
d = grp['Other_Sales'].agg(np.sum)
e = grp['Global_Sales'].agg(np.sum)
print(a)
print(b)
print(c)
print(d)
print(e)
plt.figure(figsize=(16,5))
plt.plot(a,'ro', color='r')
plt.xticks(rotation=90)
plt.title('Genre wise Sales')
plt.xlabel('Genre')
plt.ylabel('NA_Sales')
plt.show()
plt.figure(figsize=(16,5))
plt.plot(b,'ro', color='b')
plt.xticks(rotation=90)
plt.title('Genre wise Sales')
plt.xlabel('Genre')
plt.ylabel('EU_Sales')
plt.show()
plt.figure(figsize=(16,5))
plt.plot(c,'r^', color='r')
plt.xticks(rotation=90)
plt.title('Genre wise Sales')
plt.xlabel('Genre')
plt.ylabel('JP_Sales')
plt.show()
plt.figure(figsize=(16,5))
plt.plot(d,'r--', color='b')
plt.xticks(rotation=90)
plt.title('Genre wise Sales')
plt.xlabel('Genre')
plt.ylabel('Other_Sales')
plt.show()
plt.figure(figsize=(16,5))
plt.plot(e,'bs', color='r')
plt.xticks(rotation=90)
plt.title('Genre wise Sales')
plt.xlabel('Genre')
plt.ylabel('Global_Sales')
plt.show()
d = sat.iloc[0]
d
d.plot.pie(figsize = (10,10))
| 2,176 |
/Laboratorios/Lab2/InsertionSort/insertionSortv2.ipynb
|
33b8c77184fa94fbc9586d928eff874b66b17553
|
[] |
no_license
|
jodorganistaca/Algoritmos
|
https://github.com/jodorganistaca/Algoritmos
| 0 | 0 | null | null | null | null |
Jupyter Notebook
| false | false |
.py
| 77,103 |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.15.2
# kernelspec:
# display_name: Python (research)
# language: python
# name: research
# ---
# #### Notation
#
# * $a_t$ = ask price
# * $b_t$ = bid price
# * $s_t = a_t - b_t$ = bid ask spread
# * $x \cdot y$ standard dot product between two vectors in $\mathbb R^N$
# * $\mathbb{I}_{A}(x)$ = indicator function for set $A \subset \mathbb R^N$
# * $[x]_+ = |x| \mathbb{I}_{x > 0} = \max(x,0)$ = positive part of $x \in \mathbb R$
# * $[x]_- = |x| \mathbb{I}_{x < 0} = -\min(x,0)$ = negative part of $x \in \mathbb R$
#
# ## Problem description
# Consider $T$ trading rounds $t = 1,\ldots, T$ at each trading round agent observes the current
# ask and bid prices, and possibly other order book information, he then has to come up with:
# * trade direction $d_t$, $d_t = 1$ agent will buy, $d_t = -1$ agent will sell, $d_t = 0 $ agent won't take action
# * price deviation: $Z_t$, deviation from the current bid (if $d_t = -1$) or ask( if $d_t = 1$).
#
# The agent hopes to make profit by issuing a market order at the current bid/ask price $p_t$ and then placing a
# limit order at $p_t + Z_t$.
#
# ## Solution
# Let:
#
# * $u_t = b_{t+1} - a_t$, if $u_t > 0$ then the agent will have a profit opportunity
#
# * $v_t = a_{t+1} - b_t$, if $v_t < 0$ then the agent will have a profit opportunity to exploit
#
# $d_t^u, d_t^v$ is the agent prediction for the sign of $u_t$, $v_t$ respectively.
# $Z_t^u, Z_t^v$ is the agent prediction for $|u_t|$, $|v_t|$ respectively.
# Notice that $v_t - u_t = s_{t+1} + s_t \ge 0$.
# A strategy is given by a quadruple $(d_t^u, d_t^v, Z_t^u, Z_t^v)$:
# The agent will observe current ask and bid prices $a_t,b_t$ then if $d_t^u > 0$ the agent, expecting an
# upward price movement, will issue a market
# order for one unit of asset for $a_t$ and then place a limit sell order for $a_t + Z_t^u$, hence its profit/loss
# from the trade will be $\mathbb{I}_{d_t^u > 0} \min(Z_t^u, u_t)$. The profit/loss deriving from a downward price movent
# is obtained in a similar fashion: the agent predicting $d_t^v < 0$ will cross the spread by selling short one unit
# of asset at $b_t$ and place a limit buy order at $b_t - Z_t^v$.
#
# The payoff at each round is given by :
#
# $$
# \begin{align*}
# \text{PO}_t &=
# \mathbb{I}_{d_t^u > 0} \min(Z_t^u,[u_t]_+) - \mathbb{I}_{d_t^u > 0} [u_t]_-
# +
# \mathbb{I}_{d_t^v < 0} \min(Z_t^v,[v_t]_-) - \mathbb{I}_{d_t^v < 0} [v_t]_+ \\
# &=
# \mathbb{I}_{d_t^u > 0} \mathbb{I}_{\tilde{u_t} > 0} \min(Z_t^u,|u_t|) - \mathbb{I}_{d_t^u > 0}\mathbb{I}_{\tilde{u_t} < 0} |u_t|
# +
# \mathbb{I}_{d_t^v < 0} \mathbb{I}_{\tilde{v_t} < 0}\min(Z_t^v,|v_t|) - \mathbb{I}_{d_t^v < 0} \mathbb{I}_{\tilde{v_t} > 0} |v_t|
# \end{align*}
# $$
#
# Let $c_t = \min(Z_t^u, Z_t^v, |u_t|,|v_t|) \ge 0$ Then
#
# $$
# \begin{align*}
# \text{PO}_t &\ge c_t \left(
# \mathbb{I}_{d_t^u > 0} \mathbb{I}_{\tilde{u_t} > 0} - \mathbb{I}_{d_t^u > 0}\mathbb{I}_{\tilde{u_t} < 0}
# +
# \mathbb{I}_{d_t^v < 0} \mathbb{I}_{\tilde{v_t} < 0} - \mathbb{I}_{d_t^v < 0} \mathbb{I}_{\tilde{v_t} > 0}
# \right)\\
# &=
# c_t \left(\mathbb{I}_{\tilde{u_t} > 0} + \mathbb{I}_{\tilde{v_t} < 0} -
# \mathbb{I}_{d_t^u < 0} \mathbb{I}_{\tilde{u_t} > 0} - \mathbb{I}_{d_t^u > 0}\mathbb{I}_{\tilde{u_t} < 0}
# -
# \mathbb{I}_{d_t^v > 0} \mathbb{I}_{\tilde{v_t} < 0} - \mathbb{I}_{d_t^v < 0} \mathbb{I}_{\tilde{v_t} > 0}
# \right) \\
# &=
# c_t \left( \mathbb{I}_{\tilde{u_t} > 0} + \mathbb{I}_{\tilde{v_t} < 0} - \mathbb{I}_{d_t^u \neq \tilde{u_t}} - \mathbb{I}_{d_t^v \neq \tilde{v_t}}
# \right)
# \end{align*}
# $$
#
# Fix $c_t = c > 0$, then the above can be interpreted as follows:
# Take the expected value with respect to $u$ and $v$ (since the two series are stationary), the
# final wealth will be greater than zero when:
# $\mathbb{P}[d^u > 0, u > 0] > \mathbb{P}[d^u > 0, u < 0]$ and
# $\mathbb{P}[d^v < 0, v < 0] > \mathbb{P}[d^v < 0, v > 0]$.
# Hence the priority here will be to correctly build a classifier such that the number of mistakes
#
# $$
# M_T =
# \sum_{t=1}^T \mathbb{I}_{d_t^u \neq \tilde{u}_t} + \mathbb{I}_{d_t^v \neq \tilde{v}_t}
# $$
#
# Will be minimized.
#
# ### Classificiation
# I assume linear models for sign prediction: $\tilde{u}$ is correctly classified whenever $\tilde{u}(x_1 \cdot w_1) \ge 0$, for feature vector $x_1$ and weighs $w_1$, it must be that $|x_1 \cdot w_1| \le 1$. The model for $\tilde{v}$ is defined analogously.
# The loss to minimize at each round is the zero one loss both for $\tilde{u}$ and $\tilde{u}$, we can bound the
# 01 loss by the quadratic hinge loss $\ell(y,\widehat{y}) = (1 - y\widehat{y})^2\mathbb{I}_{y \neq \widehat{y}}$
# Running a vanilla perceptron for classification produces poor results:
# Suppose that perfect linear separation is achieved, that is there exists a vector $u$ such that when the perceptron
# is run the cumulative hinge loss $\sum_{t=1}^T (1 - y_t u \cdot x_t)_+ = 0$, then
#
# $$\text{#Mistakes made by the perceptron} \le \left( \frac{\max_{t=1,\ldots,T} \|x_t\|_2 \|u\|_2}{\min_{
# t=1,\ldots, T } y_t u \cdot x_t } \right)^2
# $$
# If $w_t$ happens to be the vector that separates instances $(y_1,x_1), \ldots, (y_t,x_t)$ then the perceptron encounters the greatest trouble if the next instance $(y_{t+1},x_{t+1})$ is such that $x_t$ is high in magnitude
# $\|x_t\|_2$ or $x_t$ forms an approx 90 degree angle with $w_t$.
# This means that to fool the perceptron at $t+1$ you can feed it with correlated feature vectors $x_1,\ldots, x_t$ and
# then a large uncorrelated $x_{t+1}$ at $t+1$.
# If all the feature vectors $x_1, \ldots, x_T$ are whitened, that is, premultiplied by the matrix $M^{-1/2}$, where $M = \sum_{s=1}^t x_s x_s^T$, and
# if the perceptron produces the vector $M^{1/2}w_t$, then the mistakes made up to $t$
# are bounded by
#
# $$
# \left(
# \frac{\max_{s=1,\ldots,t}(x_s^T M^{-1} x_s) (w_t^T M^{1} w_t)}{\min_{s=1,\ldots,t} y_s w_t \cdot x_s}
# \right)^2
# $$
#
# The data is separated by the same unnormalized margin $\min_t y_t u \cdot x_t$
# Whitening the data has the effect of minimize the 'leaning' of $w_t$ towards correlated $x_1,\ldots, x_t$, producing less mistakes.
# +
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
# plt.rc('figure',figsize=(16,9),dpi=80)
df = pd.read_csv('data/raw/data_binance.csv')
df.columns=['timestamp','ask','bid','ask_size','bid_size','sell_qty','buy_qty']
df.timestamp = pd.to_datetime(df.timestamp)
# df = df.set_index('timestamp').resample('10s').apply({'bid': np.mean,
# 'ask': np.mean,
# 'bid_size': np.sum,
# 'ask_size': np.sum,
# 'sell_qty': np.sum,
# 'buy_qty': np.sum
# })
df.set_index('timestamp',inplace=True)
df['ofi'] = (df.ask.diff() >= 0).astype(int) * df.ask_size.shift(-1).apply(np.abs) -\
(df.ask.diff() <= 0).astype(int) * df.ask_size.apply(np.abs) +\
(df.bid.diff() >= 0).astype(int) * df.bid_size.apply(np.abs) -\
(df.bid.diff() <= 0).astype(int) * df.bid_size.shift(-1).apply(np.abs)
df.head()
# -
df['spread'] = df.ask - df.bid
X1 = df.copy()
X2 = df.copy()
X1['u'] = df.bid.shift() - df.ask
X2['v'] = df.ask.shift() - df.bid
X1['u_prev'] = X1.u.shift(1)
X2['v_prev'] = X2.v.shift(1)
# X1['runs'] = X1.fillna(0).u.apply(np.sign).cumcount()
X1.head()
X1['u_sign'] = np.sign(X1.u_prev)
X1['runs'] = X1.groupby('u_sign').cumcount()
X2['v_sign'] = np.sign(X2.v_prev)
X2['runs'] = X2.groupby('v_sign').cumcount()
X1.head()
# +
x1 = X1.dropna().drop(['u','ask_size','bid_size','u_prev','ask','bid','u_sign','runs'],axis=1).values
x2 = X2.dropna().drop(['v','ask_size','bid_size','v_prev','ask','bid','v_sign','runs'],axis=1).values
T,p = x1.shape
u = X1.dropna().u.values
v = X2.dropna().v.values
w1 = np.zeros((T,p))
w2 = np.zeros((T,p))
_w1 = np.zeros((T,p))
_w2 = np.zeros((T,p))
# +
DEBUG = 1
QTY = 13
wealth = np.zeros(T)
eps = 0
def loss1(sign, t, eps = 30.):
if sign > 0:
return np.minimum(eps, sign*u[t])
else:
return 0
def loss2(sign, t ,eps = 30.):
if sign < 0:
return np.minimum(eps, sign * v[t])
else:
return 0
eps_u = np.ones(T) * 100
eps_v = np.ones(T) * 100
a = 3
M1 = a * np.eye(p)
M1_inv = 1/a * np.eye(p)
M2 = a * np.eye(p)
M2_inv = 1/a * np.eye(p)
errors = 0
for t in range(T-1):
# prediction
y_pred1 = np.sign(np.dot(w1[t],x1[t]))
y_pred2 = np.sign(np.dot(w2[t],x2[t])) if y_pred1 !=1 else 1
# suffer loss
gains1 = loss1(y_pred1, t, QTY)
gains2 = loss2(y_pred2, t, QTY)
wealth[t+1] = wealth[t] + gains1 + gains2
# SOP Update
mistake1 = y_pred1 * np.sign(u[t]) <= 0
mistake2 = y_pred2 * np.sign(v[t]) <= 0
if DEBUG:
errors += int(mistake1) + int(mistake2)
grad1 = x1[t] * np.sign(u[t]) * int(mistake1)
grad2 = x2[t] * np.sign(v[t]) * int(mistake2)
increase_1 = np.outer(x1[t],x1[t]) * int(mistake1)
increase_2 = np.outer(x2[t],x2[t]) * int(mistake2)
M1 = M1 + increase_1
M1_inv = M1_inv - M1_inv.dot(increase_1).dot(M1_inv)/(1+ x1[t].dot(M1_inv).dot(x1[t]))
    M2 = M2 + increase_2
M2_inv = M2_inv - M2_inv.dot(increase_2).dot(M2_inv)/(1+ x2[t].dot(M2_inv).dot(x2[t]))
### update
w1[t+1] = M1_inv.dot(_w1[t])
_w1[t+1] = _w1[t] + grad1
w2[t+1] = M2_inv.dot(_w2[t])
_w2[t+1] = _w2[t] + grad2
print(f'accuracy {np.round(1-errors/(2*T),2)}')
# +
# compare the result with a only buy only sell strategy
a = 3
wealth_bo = np.zeros(T) # wealth buy only
wealth_so = np.zeros(T) # wealth sell only
errors = 0
for t in range(T-1):
# suffer loss
wealth_bo[t+1] = wealth_bo[t] + loss1(1,t,QTY) + loss2(1,t,QTY)
wealth_so[t+1] = wealth_so[t] + loss1(-1,t,QTY) + loss2(-1,t,QTY)
# +
# randomly picks price direction according to a Bernoulli(.5)
from scipy.stats import bernoulli
wealth_ran = np.zeros(T) # random picks
errors = 0
for t in range(T-1):
# suffer loss
direction = bernoulli.rvs(.5,size=1)
wealth_ran[t+1] = wealth_ran[t] + loss1(direction,t,QTY) + loss2(direction,t,QTY)
# +
# vanilla perceptron
x1_c = x1.copy()/max([np.linalg.norm(x_) for x_ in x1])
x2_c = x2.copy()/max([np.linalg.norm(x_) for x_ in x2])
w1 = np.zeros((T,p))
w2 = np.zeros((T,p))
wealth_per = np.zeros(T)
DEBUG = 1
for t in range(T-1):
# prediction
y_pred1 = np.sign(np.dot(w1[t],x1_c[t]))
y_pred2 = np.sign(np.dot(w2[t],x2_c[t])) if y_pred1 !=1 else 1
# suffer loss
gains1 = loss1(y_pred1, t, QTY)
gains2 = loss2(y_pred2, t, QTY)
wealth_per[t+1] = wealth_per[t] + gains1 + gains2
# SOP Update
mistake1 = y_pred1 * np.sign(u[t]) <= 0
mistake2 = y_pred2 * np.sign(v[t]) <= 0
if DEBUG:
errors += int(mistake1) + int(mistake2)
grad1 = x1_c[t] * np.sign(u[t]) * int(mistake1)
grad2 = x2_c[t] * np.sign(v[t]) * int(mistake2)
### update
w1[t+1] = w1[t] + grad1
w2[t+1] = w2[t] + grad2
print(f'accuracy {np.round(1-errors/(2*T),2)}')
# -
sns.set_style('whitegrid')
plt.figure(figsize=(10,6), dpi=100)
pd.Series(wealth,index=X1.dropna().index).plot(label='sop')
pd.Series(wealth_bo,index=X1.dropna().index).plot(label='buy only',color='green')
pd.Series(wealth_so,index=X1.dropna().index).plot(label='sell only',color='red')
pd.Series(wealth_ran,index=X1.dropna().index).plot(label='random',color='gold')
pd.Series(wealth_per,index=X1.dropna().index).plot(label='perceptron',color='purple')
plt.legend()
_= plt.title(f'wealth for various strategies with offset $Z_t^u = Z_t^v=${QTY}')
# why the whitening perceptron does work?
import statsmodels.api as sm
fig, ax = plt.subplots(1,2,figsize=(10,6), dpi=100)
_ = sm.graphics.tsa.plot_pacf(df.dropna()['sell_qty'], ax=ax[0])
_ = sm.graphics.tsa.plot_pacf(df.dropna()['buy_qty'], ax=ax[1])
ax[0].set_title('sell tx volume')
ax[1].set_title('buy tx volume')
ax[0].set_xlabel('lag')
ax[1].set_xlabel('lag')
plt.figure(figsize=(10,6), dpi=100)
(-df.sell_qty).plot(figsize=(10,6),label='sell tx volume')
df.buy_qty.plot(figsize=(10,6), label='buy tx volume')
plt.ylim((-63,63))
plt.ylabel('number of BTC')
plt.legend()
# ### Why the whitened perceptron works
# * Inspecting the ACF plots above we see that the features present significant autocorrelation, which intuitively implies that $x_t$ is correlated with $x_{t-1},x_{t-2}$ and so on. Suppose the binary classification problems for the sign of $u_t$ and of $v_t$ are both
# linearly separable. The vanilla perceptron will tilt the separating hyperplanes towards all the previous values
# of $x_t$, leaving the perceptron prone to mistakes when a large uncorrelated feature $x_{t+1}$ arrives.
#
# * The feature vectors have values that differ greatly in magnitude, so rescaling is recommended; in the
# second-order perceptron, rescaling is obtained by premultiplying by $M^{-1/2}$:
#
# The bound on the mistakes made by the second order perceptron is given by :
#
# $$
# \sum_{t=1}^T \mathbb{I}_{d_t^u \neq \tilde{u}} \le \inf_{y > 0} \min_{w \in \mathbb{R}^d} \frac{L_{\gamma,T}(w)}{\gamma} + \frac{1}{\gamma} \sqrt{
# \left( a \|w\|^2 + w^T A_n w \right) \sum_{i=1}^d \ln \left(1 + \frac{\lambda_i}{a} \right)
# }
# $$
#
# where:
# $$
# L_{\gamma,T}(w) = \sum_{t=1}^T(\gamma - \tilde{u_t} w \cdot x_t)^2 \mathbb{I}_{y_t w \cdot x_t < 0}
# $$
# ## Magnitude prediction
# Now that we have choosen a decent classifier, we have to decide the offset (number of ticks) at which we place
# the market order. When decent accuracy is achieved, the agent is practically able to make a profit after $T$ trading
# rounds.
#
# The closed form expression for the profit would encourage any forecaster to shoot for the highest value of $Z_t$,
# since max profit is capped at $u_{t+1}$ or $v_{t+1}$ for any correct prediction.
# In a real-world scenario, in case of a correct price direction prediction, overshooting the value of $u_t$ (for example) will almost certainly lead to an unfilled order. The agent, however, can potentially recover from such a situation
# (by simply cancelling the limit order
# and issuing a market order at $a_t + u_{t+1}$ and eventually paying extra taker fees).
# We can however cast the problem into prediction with expert advice. To each expert we associate a value $\epsilon$
# in a equally spaced partition range $\Pi = \{z_{min}, z_{min}+\delta, \ldots, z_{max} - \delta, z_{max}\}$ where $\delta$ is
# a multiple of tick size, and $z_{min},z_{max}$ are predetermined parameters.
# In particular to discourage expert $\epsilon$ to exceed $|u_t|$ we can reward each expert with gain per round : $\min(\epsilon,
# |u_t|)$ if $\epsilon \le |u_t|$ and $0$ otherwise.
# Let $H_{i,t}$ be the cumulative reward associated to the $i$-th expert up to time $t$, that is:
#
# $$
# H_{i,t} = \sum_{s =1}^t \underbrace{\min(z_{i}, |u_s|) \mathbb{I}_{z_{i} \le |u_s|}}_{h_{s,i}}
# $$
#
# The forecaster then will issue a limit order at price:
#
# $$
# Z_t^u = \sum_{i=1}^N \frac{z_i H_{i,t}}{\sum_{i=1}^N H_{i,t}} = \sum_{i=1}^N z_i p_{t,i}
# $$
#
# Where $N = |\Pi|$ is the size of the partition. $p_t$ is a probability distribution on the $N$ dimensional simplex.
# The above algorithm is equivalent to the Hedge algorithm with weights given by $\exp(\log \frac{H_{t,i}}{H_{t-1,i}})$
# $i = 1\ldots N$.
# +
x1 = X1.dropna().drop('u',axis=1).values
x2 = X2.dropna().drop('v',axis=1).values
T,p = x1.shape
w1 = np.zeros((T,p))
w2 = np.zeros((T,p))
_w1 = np.zeros((T,p))
_w2 = np.zeros((T,p))
errors = 0
epss = np.arange(20.,30.,1.)
d = epss.shape[0]
mix1 = np.ones((T,d))/d
mix2 = np.ones((T,d))/d
prec1 = np.ones((T,d)) * 1e-8
prec2 = np.ones((T,d)) * 1e-8
a = 3
M1 = a * np.eye(p)
M1_inv = 1/a * np.eye(p)
M2 = a * np.eye(p)
M2_inv = 1/a * np.eye(p)
lam = .03
def loss1(sign, t, eps = 3.):
if sign > 0:
return np.minimum(eps, sign*u[t])
else:
return 0
def loss2(sign, t ,eps = 3.):
if sign < 0:
return np.minimum(eps, sign * v[t])
else:
return 0
def backtest(init_cap = 300, eta1 = .1, eta2 = .1):
global errors,M1,M2,M1_inv,M2_inv
wealth = np.zeros(T)
wealth[0] = init_cap
for t in range(T-1):
# prediction
y_pred1 = np.sign(np.dot(w1[t],x1[t]))
y_pred2 = np.sign(np.dot(w2[t],x2[t])) if y_pred1 !=1 else 1
# experts update
for i in range(d):
prec1[t+1,i] = prec1[t,i] + epss[i] if epss[i] <= np.abs(u[t]) else 1e-8
prec2[t+1,i] = prec2[t,i] + epss[i] if epss[i] <= np.abs(v[t]) else 1e-8
# forecaster prec
e1 = (prec1[t]/prec1[t].sum()).dot(epss)
e2 = (prec2[t]/prec2[t].sum()).dot(epss)
g1 = loss1(y_pred1,t, e1)
g2 = loss2(y_pred2,t, e2)
wealth[t+1] = wealth[t] + g1 + g2
# SOP Update
mistake1 = y_pred1 * np.sign(u[t]) <= 0
mistake2 = y_pred2 * np.sign(v[t]) <= 0
if DEBUG:
errors += int(mistake1) + int(mistake2)
grad1 = x1[t] * np.sign(u[t]) * int(mistake1)
grad2 = x2[t] * np.sign(v[t]) * int(mistake2)
increase_1 = np.outer(x1[t],x1[t]) * int(mistake1)
increase_2 = np.outer(x2[t],x2[t]) * int(mistake2)
M1 = M1 + increase_1
M1_inv = M1_inv - M1_inv.dot(increase_1).dot(M1_inv)/(1+ x1[t].dot(M1_inv).dot(x1[t]))
        M2 = M2 + increase_2
M2_inv = M2_inv - M2_inv.dot(increase_2).dot(M2_inv)/(1+ x2[t].dot(M2_inv).dot(x2[t]))
### update
w1[t+1] = M1_inv.dot(_w1[t])
_w1[t+1] = _w1[t] + grad1
w2[t+1] = M2_inv.dot(_w2[t])
_w2[t+1] = _w2[t] + grad2
if DEBUG:
print(f'accuracy {np.round(1-errors/(2*T),2)}')
return wealth
wealth = backtest()
# -
plt.figure(figsize=(10,6))
pd.Series(wealth).plot()
# from empyrical import cum_returns, max_drawdown, calmar_ratio, sharpe_ratio
rets = pd.Series(wealth,index=X1.dropna().index).pct_change().dropna()
max_drawdown = rets.min()
sharpe_ratio = rets.mean()/rets.std()
omega_ratio = rets.mean()/(-max_drawdown)
max_drawdown, sharpe_ratio, omega_ratio
# ### Comments
# The solution looks satisfactory to me.
# The risk level (drawdown) is fairly contained. I changed the approach compared to before, since in practice I solved two classification problems in parallel instead of one as previously.
q_sell = df.sell_qty.quantile([.1,.9]).values
q_buy = df.buy_qty.quantile([.1,.9]).values
s_ask = df.ask_size.quantile([.1,.9]).values
s_bid = df.bid_size.quantile([.1,.9]).values
qty = pd.concat([-df.sell_qty[(df.sell_qty < q_sell[1])],\
df.buy_qty[(df.buy_qty < q_buy[1])]])
size = pd.concat([-df.ask_size[(df.ask_size < s_ask[1])],\
df.bid_size[(df.bid_size < s_bid[1])]])
N = min(qty.shape[0], size.shape[0])
sns.jointplot(x=qty.values[:N],y=size.values[:N])
# +
# sns.jointplot?
# -
# * best expert
# * very simple baseline strategy
# * strategy that relies on the previous trend continuing to work
# * running average over e.g. the last 10 days (none of the things I did accounts for stationarity)
#
# * Formulate the problem and then discuss the data
#
# al: [email protected]
#
# If there is time, try to look at the multiplicative aspect,
# the moving average, a benchmark (a baseline that uses the same data). If there is time between now and the 8th the
# multiplicative aspect can be explored, although it is more realistic.
| 19,885 |
/Tutorial_Hackathon.ipynb
|
cc550b096f749615868507a977e3c4da3146b176
|
[] |
no_license
|
Yeonsoo94/pytorch-tutorials
|
https://github.com/Yeonsoo94/pytorch-tutorials
| 0 | 1 | null | null | null | null |
Jupyter Notebook
| false | false |
.py
| 15,219 |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.15.2
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Image Classification using Pytorch
#
#
# 1. Reading the input image
# - Performing transformations on the image, for example resize, center crop, normalization, etc.
# 2. Forward Pass: Use the weights to predict the output.
# 3. Based on the scores obtained, display the predictions.
# %matplotlib inline
from __future__ import print_function
from __future__ import division
from torchvision import datasets, models, transforms
import torch
import json
import os
import matplotlib.pyplot as plt
import numpy as np
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
from datetime import datetime
from pytz import timezone
import copy
import time
# Load Data
# ---------
#
# We can now initialize the data transforms, image datasets, and the dataloaders.
#
# If you use pretrained models, you should specify image transformations that apply the normalization values those models were trained with.
#
# The torchvision package consists of popular datasets, model architectures, and common image transformations for computer vision.
#
#
#
# +
print("Initializing Datasets and Dataloaders...")
batchSize = 16
inputSize = 32 # 32 for the simple CNN below; 299 for Inception v3, 224 for VGG
transform = transforms.Compose([
transforms.RandomResizedCrop(inputSize),
transforms.RandomHorizontalFlip(),
transforms.ToTensor(),
transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])
])
trainset = datasets.CIFAR10(root='./data', train=True,
download=True, transform=transform)
trainloader = torch.utils.data.DataLoader(trainset, batch_size=batchSize,
shuffle=True, num_workers=2)
testset = datasets.CIFAR10(root='./data', train=False,
download=True, transform=transform)
testloader = torch.utils.data.DataLoader(testset, batch_size=batchSize,
shuffle=False, num_workers=2)
classes = ('plane', 'car', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck')
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
# -
# Define a Convolution Neural Network
# ---------
# Define the neural network that has some learnable parameters (or weights).
# You can modify it.
#
#
# +
import torch.nn as nn
import torch.nn.functional as F
class Net(nn.Module):
def __init__(self):
super(Net, self).__init__()
self.conv1 = nn.Conv2d(3, 6, 5)
self.pool = nn.MaxPool2d(2, 2)
self.conv2 = nn.Conv2d(6, 16, 5)
self.fc1 = nn.Linear(16 * 5 * 5, 120)
self.fc2 = nn.Linear(120, 84)
self.fc3 = nn.Linear(84, 10)
def forward(self, x):
x = self.pool(F.relu(self.conv1(x)))
x = self.pool(F.relu(self.conv2(x)))
x = x.view(-1, 16 * 5 * 5)
x = F.relu(self.fc1(x))
x = F.relu(self.fc2(x))
x = self.fc3(x)
return x
# -
# Training Model
# ---------
#
# The train_model function handles the training and validation of a given model.
#
# It takes a PyTorch model, a dictionary of dataloaders, a loss function, an optimizer, and a specified number of epochs to train and validate.
#
# To turn the output scores into a prediction, we find the index of the maximum score and use that index as the predicted class.
#
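# A small standalone illustration of that step (independent of the training loop below):
# +
import torch

scores = torch.tensor([[0.1, 2.3, -0.5],
                       [1.2, 0.0,  0.7]])
max_scores, preds = torch.max(scores, 1)   # highest score and its index per row
print(preds)                               # tensor([1, 0])
# -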
# +
from __future__ import print_function
from __future__ import division
import torch
import copy
import time
def train_model(model, dataloaders, criterion, optimizer, num_epochs):
since = time.time()
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
val_acc_history = []
best_model_wts = copy.deepcopy(model.state_dict())
best_acc = 0.0
train_losses = [0. for i in range(num_epochs)]
test_losses = [0. for i in range(num_epochs)]
train_accuracies = [0. for i in range(num_epochs)]
test_accuracies = [0. for i in range(num_epochs)]
label_acc_per_epoch = [[0] * num_epochs for i in range(10)]
label_val_per_epoch = [[0] * num_epochs for i in range(10)]
batch_loss = 0.0
checkPoint = 1000;
for epoch in range(num_epochs):
print('Epoch {}/{}'.format(epoch, num_epochs - 1))
print('-' * 10)
epoch_loss = 0.0
class_correct = list(0. for i in range(10))
class_total = list(0. for i in range(10))
class_correct_train = list(0. for i in range(10))
class_total_train = list(0. for i in range(10))
for phase in ['train', 'val']:
if phase == 'train':
model.train()
else:
model.eval()
batch_loss = 0.0
running_loss = 0.0
running_corrects = 0
for j, (inputs, labels) in enumerate(dataloaders[phase]):
inputs = inputs.to(device)
labels = labels.to(device)
correct_train_iter, total_train_iter = 0.0, 0.0
optimizer.zero_grad()
with torch.set_grad_enabled(phase == 'train'):
outputs = model(inputs)
loss = criterion(outputs, labels)
batch_loss += loss.item()
_, preds = torch.max(outputs, 1)
c = (preds == labels).squeeze()
total_train_iter += labels.size(0)
correct_train_iter += preds.eq(labels).sum().item()
for i in range(len(labels)):
label = labels[i]
class_correct[label] += c[i].item()
class_total[label] += 1
if phase == 'train':
loss.backward()
optimizer.step()
if phase == 'train':
if j % checkPoint == checkPoint - 1 :
train_acc_iter = correct_train_iter / total_train_iter
train_loss_iter = batch_loss / checkPoint
print('\n[%d, %5d] loss: %.3f accuracy: %.3f' % (
epoch + 1, j + 1, train_loss_iter, train_acc_iter))
batch_loss = 0.0
running_loss += loss.item() * inputs.size(0)
running_corrects += torch.sum(preds == labels.data)
epoch_loss = running_loss / len(dataloaders[phase].dataset)
epoch_acc = running_corrects.double() / len(dataloaders[phase].dataset)
if phase == 'train':
for i in range(10):
acc = class_correct[i] / class_total[i]
label_acc_per_epoch[i][epoch] = acc
train_losses[epoch] = epoch_loss
train_accuracies[epoch] = epoch_acc
else:
for i in range(10):
acc = class_correct[i] / class_total[i]
label_val_per_epoch[i][epoch] = acc
test_losses[epoch] = epoch_loss
test_accuracies[epoch] = epoch_acc
print('{} Loss: {:.3f} Acc: {:.3f}'.format(phase, epoch_loss, epoch_acc))
if phase == 'val' and epoch_acc > best_acc:
best_acc = epoch_acc
best_model_wts = copy.deepcopy(model.state_dict())
if phase == 'val':
val_acc_history.append(epoch_acc)
print()
time_elapsed = time.time() - since
print('Training complete in {:.0f}m {:.0f}s'.format(time_elapsed // 60, time_elapsed % 60))
print('Best val Acc: {:3f}'.format(best_acc))
model.load_state_dict(best_model_wts)
history = {
"train_losses": train_losses,
"train_accuracies": train_accuracies,
"test_losses": train_losses,
"test_accuracies": train_accuracies,
"label_acc_per_epoch": label_acc_per_epoch,
"label_val_per_epoch": label_val_per_epoch
}
return model, history
# -
# Run Training and Validation Step
# ---------
#
# - Define the neural network
#
# - Define a Loss function and optimizer
#
# Let’s use a classification cross-entropy loss and an SGD optimizer with momentum (an Adam alternative is left commented out in the cell below).
# +
num_epochs = 30
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
scratch_model = Net() ## non-pretrained model
## Inception v3
#scratch_model = models.inception_v3(pretrained=True, aux_logits=False)
## VGG19
#scratch_model = models.vgg19(pretrained='true')
#scratch_model.classifier[6] = nn.Linear(4096, 10)
print(scratch_model)
scratch_model = scratch_model.to(device)
scratch_optimizer = optim.SGD(scratch_model.parameters(), lr=0.001, momentum=0.9)
#optimizer = optim.Adam(scratch_model.parameters(), lr=0.001)
scratch_criterion = nn.CrossEntropyLoss()
data_set = {'train': trainloader, 'val': testloader}
model, history = train_model(scratch_model, data_set, scratch_criterion, scratch_optimizer, num_epochs=num_epochs)
# -
# Visualizing your network in PyTorch
# ---------
#
# Let's plot the accuracy and loss
# +
#fig = plt.figure()
fig = plt.figure()
fig.set_size_inches(10.5, 7.5)
plot_title = ["train_accuracies", "train_losses", "test_accuracies", "test_losses"]
for i in range(4):
subplot = fig.add_subplot(2, 2, i+1)
subplot.set_xlim([-0.2, num_epochs])
subplot.set_ylim([0.0, 1 if i % 2 == 0 else max(history[plot_title[i]])+1])
subplot.set_title(plot_title[i])
subplot.plot(history[plot_title[i]], color = ("red" if i % 2 == 0 else "blue"))
plt.legend(frameon=False)
plt.tight_layout()
plt.show()
# -
# Plot train accuracy and validation accuracy for each epoch
# +
fig = plt.figure(1,)
fig.set_size_inches(10, 5.5)
for i in range(0, 10):
a = plt.plot(history["label_val_per_epoch"][i], label='%s' % classes[i])
plt.title("validation")
plt.ylabel('category')
plt.xlabel('Epoch')
plt.grid(True)
plt.ylim((0,1))
plt.xlim((-0.3, num_epochs))
plt.xticks(np.arange(0, num_epochs+1, 1.0))
plt.tight_layout()
plt.legend(loc='lower center', ncol=5, frameon=False)
plt.show()
# +
fig = plt.figure(1,)
fig.set_size_inches(10, 5.5)
for i in range(0, 10):
a = plt.plot(history["label_acc_per_epoch"][i], label='%s' % classes[i])
plt.title("Accuracy")
plt.ylabel('category')
plt.xlabel('Epoch')
plt.grid(True)
plt.ylim((0,1))
plt.xlim((-0.3, num_epochs))
plt.xticks(np.arange(0, num_epochs+1, 1.0))
plt.tight_layout()
plt.legend(loc='lower center', ncol=5, frameon=False)
plt.show()
| 10,720 |
/Riot Api/.ipynb_checkpoints/playing around with Riot API-checkpoint.ipynb
|
58fbfeebe5e610597e1d774d869863709ad743e3
|
[] |
no_license
|
kookoowaa/Projects
|
https://github.com/kookoowaa/Projects
| 0 | 0 | null | null | null | null |
Jupyter Notebook
| false | false |
.py
| 3,610 |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.15.2
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import pandas as pd
from helper_functions import prepare_data, replace_strings
from pprint import pprint
from IPython.display import Image
# -
# # Data Preparation
# +
# load data
df_train = pd.read_csv("../../data/train.csv", index_col="PassengerId")
df_test = pd.read_csv("../../data/test.csv", index_col="PassengerId")
test_labels = pd.read_csv("../../data/test_labels.csv", index_col="PassengerId", squeeze=True)
# prepare data
df_train = prepare_data(df_train)
df_test = prepare_data(df_test, train_set=False)
# handle missing values in training data
embarked_mode = df_train.Embarked.mode()[0]
df_train["Embarked"].fillna(embarked_mode, inplace=True)
df_train.head()
# -
# # Naive Bayes from Scratch
Image(filename='../../images/Naive Bayes algorithm.png', width=1000)
# ## Step 1 of the Algorithm
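# As a rough sketch of this first step, the class prior probabilities can be estimated from the training labels in `df_train` (prepared above); the helper below is illustrative, not the original implementation.
# +
def estimate_priors(df, label_column="Survived"):
    """Relative frequency of each class label, i.e. P(class)."""
    counts = df[label_column].value_counts()
    return (counts / counts.sum()).to_dict()

estimate_priors(df_train)   # roughly {0: 0.62, 1: 0.38} for the Titanic training set
# -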
# # Comparison to Sklearn
from sklearn.naive_bayes import GaussianNB, MultinomialNB, ComplementNB, BernoulliNB
# +
# data preparation
df_train = replace_strings(df_train)
X_train = df_train.drop("Survived", axis=1)
y_train = df_train.Survived
X_test = replace_strings(df_test)
y_test = test_labels
# +
# use different sklearn Naive Bayes models
clfs = [GaussianNB(), MultinomialNB(), ComplementNB(), BernoulliNB()]
clfs_names = ["GaussianNB", "MultinomialNB", "ComplementNB", "BernoulliNB"]
print("NB Model\tAccuracy")
print("--------\t--------")
for clf, clf_name in zip(clfs, clfs_names):
clf.fit(X_train, y_train)
acc = clf.score(X_test, y_test)
print(f"{clf_name}\t{acc:.3f}")
# -
# Running the cell below will list the files in the input directory
from subprocess import check_output
print(check_output(["ls", "../input"]).decode("utf8"))
# Any results you write to the current directory are saved as output.
# + _cell_guid="78a7fead-30c2-b922-db7e-30b4b769baef"
#importing the dataset
dataset = pd.read_csv("../input/creditcard.csv")
dataset.head()
# + _cell_guid="b129e962-fb89-bfa5-1fb4-f0fdcd7a3c4c"
#Checking the target classes
count_classes = pd.value_counts(dataset['Class'], sort = True).sort_index()
count_classes.plot(kind = 'bar')
plt.title("Fraud class histogram")
plt.xlabel("Class")
plt.ylabel("Frequency")
# + _cell_guid="b53c96c1-6da7-3269-6aa0-b8b2cb739369"
#feature scaling is done on the values that have not been normalized
from sklearn.preprocessing import StandardScaler
dataset['normAmount'] = StandardScaler().fit_transform(dataset['Amount'].values.reshape(-1, 1))
#dropping copied and unneeded columns
dataset = dataset.drop(['Time','Amount'],axis=1)
# + _cell_guid="8301ed85-51aa-825c-b601-5ceea38ba694"
#assign x and y values
x = np.array(dataset.iloc[:,:-1])
y = np.array(dataset.iloc[:,-2])
# + _cell_guid="d8c84651-3af4-9b64-2a58-77b4aaad0251"
# Number of data points in the minority class
number_records_fraud = len(dataset[dataset.Class == 1])
fraud_indices = np.array(dataset[dataset.Class == 1].index)
# Picking the indices of the normal classes
normal_indices = dataset[dataset.Class == 0].index
# Out of the indices we picked, randomly select "x" number (number_records_fraud)
random_normal_indices = np.random.choice(normal_indices, number_records_fraud, replace = False)
random_normal_indices = np.array(random_normal_indices)
# Appending the 2 indices
under_sample_indices = np.concatenate([fraud_indices,random_normal_indices])
# Under sample dataset
under_sample_data = dataset.iloc[under_sample_indices,:]
x_undersample = np.array(under_sample_data.loc[:, under_sample_data.columns != 'Class'])
y_undersample = np.array(under_sample_data.loc[:, under_sample_data.columns == 'Class'])
# + _cell_guid="a012b5cb-57df-5411-fe14-0fd04d828d36"
#splitting the sample data into trian and test set
from sklearn.model_selection import train_test_split
# Undersampled dataset
x_train_undersample, x_test_undersample, y_train_undersample, y_test_undersample = train_test_split(x_undersample,y_undersample,test_size = 0.3)
# + _cell_guid="f77424a9-bd90-5121-3b5f-5c61329578da"
#checking the target class
count_classes = pd.value_counts(np.ravel(y_train_undersample), sort = True).sort_index()
count_classes.plot(kind = 'bar')
plt.title("Fraud class histogram")
plt.xlabel("Class")
plt.ylabel("Frequency")
# + _cell_guid="991c43f8-5d4a-474c-5eb8-8b8e86ecd4de"
names = ["Logistic Regression","Nearest Neighbors", "Linear SVM", "RBF SVM",
"Decision Tree", "Random Forest","Naive Bayes" ]
classifiers = [
LogisticRegression(),
KNeighborsClassifier(),
SVC(kernel="linear"),
SVC(kernel="rbf"),
DecisionTreeClassifier(criterion = 'entropy'),
RandomForestClassifier(criterion = 'entropy'),
GaussianNB(),
]
# + _cell_guid="ed820637-e20c-e0f1-12cc-f290645d2b73"
from sklearn.model_selection import cross_val_score
results = {}
for name, clf in zip(names, classifiers):
scores = cross_val_score(clf, x_train_undersample, np.ravel(y_train_undersample), cv=5)
results[name] = scores
for name, scores in results.items():
print("%20s | Accuracy: %0.2f%% (+/- %0.2f%%)" % (name, 100*scores.mean(), 100*scores.std() * 2))
# + _cell_guid="ad67b860-2911-353e-1aae-43aaffaa1894"
from sklearn.model_selection import GridSearchCV
clf = RandomForestClassifier()
# prepare a range of values to test
param_grid = [
{'n_estimators': [10,30,50,80,100,200], 'criterion': ['gini','entropy']},
]
grid = GridSearchCV(estimator=clf, param_grid=param_grid)
grid.fit(x_train_undersample, np.ravel(y_train_undersample))
print(grid)
clf = RandomForestClassifier()
clf.fit(x_train_undersample, np.ravel(y_train_undersample))
y_pred = clf.predict(x_test_undersample)
# + _cell_guid="00b839a0-e5ff-9e0a-350b-dfd94ab8ab48"
#creating the confusion matrix and checking the accuracy
from sklearn.metrics import confusion_matrix,accuracy_score,classification_report
cm = confusion_matrix(y_test_undersample,y_pred)
acc = accuracy_score(y_test_undersample,y_pred)
clr = classification_report(y_test_undersample,y_pred)
# + _cell_guid="cd2785eb-4b1f-e335-3541-865bde322cd5"
#visulaizing the confusion matirx
import seaborn as sns
print(acc)
print(clr)
label = ["0","1"]
sns.heatmap(cm, annot=True, xticklabels=label, yticklabels=label)
# + _cell_guid="684f0e38-3cf0-0d00-e6cf-24ead27ea3c1"
# Applying k-Fold Cross Validation
from sklearn.model_selection import cross_val_score
accuracies = cross_val_score(estimator = clf, X = x_train_undersample, y = np.ravel(y_train_undersample), cv = 10)
accuracies.mean()
| 6,670 |
/notebooks/complete.steroids.ipynb
|
e4cebe1a9aa8b69b27a5e7974ef6e17210470d16
|
[] |
no_license
|
ebmdatalab/steroids-covid-codelist-notebook
|
https://github.com/ebmdatalab/steroids-covid-codelist-notebook
| 0 | 1 | null | 2020-05-12T08:31:53 | 2020-05-06T16:54:41 |
Python
|
Jupyter Notebook
| false | false |
.py
| 1,196,617 |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.15.2
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # 5.10 Manifold Learning: MDS (Multidimensional Scaling)
# Dimensionality reduction with principal component analysis does not work well when the data contains nonlinear relationships. Manifold learning is used to address that shortcoming.
import matplotlib.pyplot as plt
import seaborn as sns; sns.set()
import numpy as np
# ## 5.10.1 Manifold learning: HELLO
# +
# Create 2D data that spells out HELLO
def make_hello(N=1000, rseed=42):
fig, ax = plt.subplots(figsize=(4, 1))
    # how to put several plots in a single figure
fig.subplots_adjust(left=0, right=1, bottom=0, top=1)
    # subplots_adjust: control the margins of the figure
ax.axis('off')
ax.text(0.5, 0.4, 'HELLO', va='center', ha='center', weight='bold', size=85)
    # place text at the given coordinates
fig.savefig('hello.png')
plt.close(fig)
    # Open the PNG and draw random points from it
    # (HELLO was plotted and saved as a PNG above)
from matplotlib.image import imread
data = imread('hello.png')[::-1, :, 0].T
    # read the hello.png image
rng = np.random.RandomState(rseed)
X = rng.rand(4 * N, 2)
i, j = (X * data.shape).astype(int).T
mask = (data[i, j] < 1)
X = X[mask]
X[:, 0] *= (data.shape[0] / data.shape[1])
X = X[:N]
return X[np.argsort(X[:, 0])]
X
# -
# Call the function defined above and visualize the result
X = make_hello(1000)
colorize = dict(c=X[:, 0], cmap=plt.cm.get_cmap('rainbow', 5))
plt.scatter(X[:, 0], X[:, 1], **colorize)
# scatter(first column of X, second column of X): draw a scatter plot
plt.axis('equal');
# axis('equal'): keep a 1:1 aspect ratio between the x and y axes
# +
# Rotating HELLO still looks like HELLO. Use a rotation matrix
def rotate(X, angle):
theta = np.deg2rad(angle)
    # deg2rad: convert degrees to radians
R = [[np.cos(theta), np.sin(theta)],
[-np.sin(theta), np.cos(theta)]]
return np.dot(X, R)
# rotate by 20 degrees
X2 = rotate(X, 20) + 5
plt.scatter(X2[:, 0], X2[:, 1], **colorize)
plt.axis('equal');
# This shows that the particular x and y values are not essential to the relationships in the data;
# in this case, what matters is the distance between each pair of points
# -
# For N points, build an N*N matrix that represents the distance between every pair of points.
from sklearn.metrics import pairwise_distances
# pairwise_distances: computes the distance matrix
D = pairwise_distances(X)
D.shape
# a 1000 x 1000 matrix
# visualize the matrix above
plt.imshow(D, zorder=2, cmap='Blues', interpolation='nearest')
plt.colorbar();
# Confirm that the rotated data (X2) has the same distance matrix
D2 = pairwise_distances(X2)
# D2: distance matrix of the rotated data
np.allclose(D, D2)
# allclose: checks whether the entries of two matrices are (almost) equal
# The visualized matrix is not intuitive at all, and HELLO cannot be read from it.
# Here we use MDS (multidimensional scaling) to recover the original data from the distance matrix.
from sklearn.manifold import MDS
model = MDS(n_components=2, dissimilarity='precomputed', random_state=1)
# n_components=2: map into 2 dimensions. dissimilarity='precomputed': the distance matrix is supplied directly.
out = model.fit_transform(D)
# transform the matrix D with the MDS model.
plt.scatter(out[:, 0], out[:, 1], **colorize)
plt.axis('equal');
out
# +
# The data can also be projected into 3 dimensions while keeping the same distance matrix.
# Use a function that generalizes the rotation matrix to 3 dimensions
def random_projection(X, dimension=3, rseed=42):
assert dimension >= X.shape[1]
    # only proceed when the target dimension is at least the input dimension
rng = np.random.RandomState(rseed)
C = rng.randn(dimension, dimension)
    # randn: samples from the standard normal distribution
e, V = np.linalg.eigh(np.dot(C, C.T))
return np.dot(X, V[:X.shape[1]])
X3 = random_projection(X, 3)
X3.shape
# -
# Visualize in 3 dimensions
from mpl_toolkits import mplot3d
ax = plt.axes(projection='3d')
ax.scatter3D(X3[:, 0], X3[:, 1], X3[:, 2],
**colorize)
ax.view_init(azim=70, elev=50)
# adjust the viewing angle for readability
# Feeding the 3D data above into MDS recovers the original 2D data.
model = MDS(n_components=2, random_state=1)
out3 = model.fit_transform(X3)
plt.scatter(out3[:, 0], out3[:, 1], **colorize)
plt.axis('equal');
out3
# out3: the 3D data mapped back into 2 dimensions
# ## 5.10.4 Where MDS fails
# So far we have only considered linear embeddings such as rotation, translation, and scaling; when the embedding is nonlinear, MDS does not work well.
# +
# A function that bends the data into an S shape in 3D space
def make_hello_s_curve(X):
t = (X[:, 0] - 2) * 0.75 * np.pi
x = np.sin(t)
y = X[:, 1]
z = np.sign(t) * (np.cos(t) - 1)
return np.vstack((x, y, z)).T
XS = make_hello_s_curve(X)
# -
# visualize in 3D
from mpl_toolkits import mplot3d
ax = plt.axes(projection='3d')
ax.scatter3D(XS[:, 0], XS[:, 1], XS[:, 2],**colorize);
# confirm that the data now forms an S
from sklearn.manifold import MDS
model = MDS(n_components=2, random_state=2)
outS = model.fit_transform(XS)
plt.scatter(outS[:, 0], outS[:, 1], **colorize)
plt.axis('equal');
outS
# outS: the 3D data above reduced to 2 dimensions with MDS
# ## 5.10.5 Nonlinear manifolds: locally linear embedding
# [How MDS and LLE represent the relationships between points](https://jakevdp.github.io/PythonDataScienceHandbook/06.00-figure-code.html#LLE-vs-MDS-Linkages)
# LLE (locally linear embedding) preserves only the distances between neighboring points, which lets the data be unrolled while keeping the lengths of lines between nearby points roughly the same.
# Below, a variant called modified LLE is used to recover the embedded two-dimensional manifold.
from sklearn.manifold import LocallyLinearEmbedding
model = LocallyLinearEmbedding(n_neighbors=100, n_components=2, method='modified',
eigen_solver='dense')
# n_neighbors=100: preserve distances to each point's 100 nearest neighbors.
out = model.fit_transform(XS)
# out: the 2D matrix obtained by reducing XS with LLE
fig, ax = plt.subplots()
ax.scatter(out[:, 0], out[:, 1], **colorize)
ax.set_ylim(0.15, -0.15);
out
# ## 5.10.6 Some thoughts on manifold learning
# ### Advantages over PCA
# It can preserve nonlinear relationships in the data
# ### Disadvantages compared with PCA
# It is rarely used for anything beyond simple, qualitative visualization of high-dimensional data:
# - there is no good general framework
# - it is strongly affected by noise
# - there is no principled way to choose the optimal number of neighbors
# - it is hard to decide on the optimal output dimension
# etc...
# https://jakevdp.github.io/PythonDataScienceHandbook/05.10-manifold-learning.html#Some-Thoughts-on-Manifold-Methods
# ## 5.10.7 Example: applying Isomap to face images
# Install opencv (an image-processing library).
# Manifold learning is used to understand the relationships between points in high-dimensional data. Images are a common example of high-dimensional data: a 1000-pixel image can be treated as a point in 1000 dimensions.
# We use the Labeled Faces in the Wild dataset already used in sections 5.7 and 5.9
import cv2
from sklearn.datasets import fetch_lfw_people
faces = fetch_lfw_people(min_faces_per_person=30)
# min_faces_per_person=30: keep only people who have at least 30 images
faces.data.shape
# i.e. 2370 images with 2914 pixels each
# Visualize some of the loaded images
fig, ax = plt.subplots(4, 8, subplot_kw=dict(xticks=[], yticks=[]))
# show the face images in a 4 x 8 grid
for i, axi in enumerate(ax.flat):
axi.imshow(faces.images[i], cmap='gray')
    # enumerate(): iterate with both the index and the element
# ### Compute PCA (principal component analysis) and inspect the explained variance ratio
# +
from sklearn.decomposition import RandomizedPCA
model = RandomizedPCA(100).fit(faces.data)
# RandomizedPCA(100): 100 components
# model = RandomizedPCA(50).fit(faces.data): this is the book's default, but 50 components do not preserve 90% of the variance.
plt.plot(np.cumsum(model.explained_variance_ratio_))
# explained_variance_ratio_: accumulate the explained variance ratio with cumsum
plt.xlabel('n components')
plt.ylabel('cumulative variance');
plt.show()
# -
# We can see that more than 100 components are needed to preserve 90% of the variance. In such a case a nonlinear manifold embedding is useful.
from sklearn.manifold import Isomap
model = Isomap(n_components=2)
# reduce to 2 dimensions with Isomap
proj = model.fit_transform(faces.data)
proj.shape
# 2370 two-dimensional projections
# +
#投影された点に画像のサムネイルを出力する関数を定義
from matplotlib import offsetbox
def plot_components(data, model, images=None, ax=None,
thumb_frac=0.05, cmap='gray'):
ax = ax or plt.gca()
proj = model.fit_transform(data)
    # proj: the data reduced to 2 dimensions with Isomap
ax.plot(proj[:, 0], proj[:, 1], '.k')
if images is not None:
min_dist_2 = (thumb_frac * max(proj.max(0) - proj.min(0))) ** 2
shown_images = np.array([2 * proj.max(0)])
for i in range(data.shape[0]):
dist = np.sum((proj[i] - shown_images) ** 2, 1)
if np.min(dist) < min_dist_2:
# don't show points that are too close
continue
shown_images = np.vstack([shown_images, proj[i]])
imagebox = offsetbox.AnnotationBbox(
offsetbox.OffsetImage(images[i], cmap=cmap),
proj[i])
ax.add_artist(imagebox)
# -
fig, ax = plt.subplots(figsize=(10, 10))
plot_components(faces.data,
model=Isomap(n_components=2),
images=faces.images[:, ::2, ::2])
# From left to right, the embedding tracks the overall brightness of the images
# From top to bottom, it tracks the orientation of the faces
| 7,449 |
/MY WORKS/Deep_learning/Computer Vision/Detection_two_wheeler.ipynb
|
30d20def7c62d98f3f8996563a2c8735f123d8af
|
[] |
no_license
|
Sumith-T-S/ML
|
https://github.com/Sumith-T-S/ML
| 0 | 0 | null | null | null | null |
Jupyter Notebook
| false | false |
.py
| 1,761 |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.15.2
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import cv2
# +
cascade_src = 'data/two_wheeler.xml'
video_src = 'data/two_wheeler2.mp4'
# -
cap = cv2.VideoCapture(video_src)
fgbg = cv2.createBackgroundSubtractorMOG2()
car_cascade = cv2.CascadeClassifier(cascade_src)
# +
while True:
ret, img = cap.read()
fgbg.apply(img)
if (type(img) == type(None)):
break
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
cars = car_cascade.detectMultiScale(gray,1.01, 1)
for (x,y,w,h) in cars:
cv2.rectangle(img,(x,y),(x+w,y+h),(0,255,215),2)
cv2.imshow('video', img)
if cv2.waitKey(33) == 27:
break
cv2.destroyAllWindows()
| 908 |
/NumPy/pseudorandom_number_generation.ipynb
|
5396ec8dbe2ce5b00faad7a2123480048e3bd470
|
[] |
no_license
|
BaekSe/Python-for-Data-Analysis
|
https://github.com/BaekSe/Python-for-Data-Analysis
| 0 | 0 | null | null | null | null |
Jupyter Notebook
| false | false |
.py
| 3,005 |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.15.2
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import numpy as np
samples = np.random.normal(size=(4, 4))
samples
from random import normalvariate
N = 1000000
# %timeit samples = [normalvariate(0, 1) for _ in range(N)]
# %timeit np.random.normal(size=N)
rng = np.random.RandomState(1234)
rng.randn(10)
| 531 |
/intro/mnist-mlp.ipynb
|
69e7b34719a084c31f9a2c03c1fd88f549b0a5e6
|
[] |
no_license
|
dshulchevskii/tcs-course-nn
|
https://github.com/dshulchevskii/tcs-course-nn
| 1 | 0 | null | null | null | null |
Jupyter Notebook
| false | false |
.py
| 15,680 |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.15.2
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
from torchvision import datasets, transforms
from torch.autograd import Variable
train_loader = torch.utils.data.DataLoader(
datasets.MNIST('./dataset', train=True, download=True,
transform=transforms.Compose([
transforms.ToTensor(),
transforms.Normalize((0.1307,), (0.3081,))
])), batch_size=50, shuffle=True)
test_loader = torch.utils.data.DataLoader(
datasets.MNIST('./dataset', train=False, transform=transforms.Compose([
transforms.ToTensor(),
transforms.Normalize((0.1307,), (0.3081,))
])), batch_size=50, shuffle=True)
class Net(nn.Module):
def __init__(self):
super(Net, self).__init__()
self.fc1 = nn.Linear(28*28, 256)
self.fc2 = nn.Linear(256, 256)
self.fc3 = nn.Linear(256, 10)
def _softmax(self, x):
x = torch.exp(x)
return torch.div(x, torch.sum(x, dim=1, keepdim=True))
def forward(self, x):
self.x_flatten = x.view(-1, 28*28)
self.h1 = F.sigmoid(self.fc1(self.x_flatten))
self.h2 = F.sigmoid(self.fc2(self.h1))
self.h3 = self.fc3(self.h2)
self.out = F.log_softmax(self.h3, dim=1)
return self.out
model = Net()
optimizer = optim.Adam(model.parameters(), lr=0.01)
def train(epoch):
for batch_idx, (data, target) in enumerate(train_loader):
data, target = Variable(data), Variable(target)
optimizer.zero_grad()
output = model(data)
loss = F.nll_loss(output, target)
loss.backward()
optimizer.step()
if batch_idx % 200 == 0:
print('Train Epoch: {} [{}/{} ({:.0f}%)]\tLoss: {:.6f}'.format(
epoch, batch_idx * len(data), len(train_loader.dataset),
100. * batch_idx / len(train_loader), loss.data[0]))
else:
batch_idx += 1
print('Train Epoch: {} [{}/{} ({:.0f}%)]\tLoss: {:.6f}'.format(
epoch, batch_idx * len(data), len(train_loader.dataset),
100. * batch_idx / len(train_loader), loss.data[0]))
def test():
test_loss = 0
correct = 0
for data, target in test_loader:
data, target = Variable(data, volatile=True), Variable(target)
output = model(data)
test_loss += F.nll_loss(output, target, size_average=False).data[0] # sum up batch loss
pred = output.data.max(1, keepdim=True)[1] # get the index of the max log-probability
correct += pred.eq(target.data.view_as(pred)).cpu().sum()
test_loss /= len(test_loader.dataset)
print('\nTest set: Average loss: {:.4f}, Accuracy: {}/{} ({:.0f}%)\n'.format(
test_loss, correct, len(test_loader.dataset),
100. * correct / len(test_loader.dataset)))
for epoch in range(1, 21):
train(epoch)
test()
predictions = model.predict(image_array)
#using softmax to get the results
score = tf.nn.softmax(predictions[0])
import numpy
print(classes[numpy.argmax(score)], 100*numpy.max(score))
# # Save the model
# After we have created a classifier, we can save it for later use.
# +
#model.save('dummy.model', save_format="h5")
# -
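# A model saved this way can later be restored with `load_model`; the sketch below assumes the save line above was actually run so that 'dummy.model' exists on disk.
# +
from tensorflow.keras.models import load_model

restored = load_model('dummy.model')   # reload the saved classifier
restored.summary()
# -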
# # Evaluation
# It is necessary to evaluate the model before we move to the next stage. The evaluation will give us insights about two things:
# 1. Accuracy
# 2. If there is any anomaly in the model - Underfitting or Overfitting.
#
# In such a case, we must follow a different approach to training the model to get the best results. This can include image augmentation and placing our own custom layers on top of the MobileNetV2 base network.
#
# <h3> How to Improve the Model?</h3>
#
# Sometimes, the accuracy of the model isn't what we anticipated. There are certain practices that can be followed to
# improve the performance of the model so that it also works efficiently on new data.
#
# The following are some of the practices that may improve the model's performance:
# 1. Add more training data
# 2. Data augmentation can help increase the number of training samples (a small sketch follows this list).
# 3. There might be a chance of overfitting the model with increased number of samples, in that case you can try a different model or include a head over the base model with custom layers.
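# A minimal, illustrative tf.keras augmentation pipeline for point 2 above; the parameter values are placeholders rather than settings used in this notebook.
# +
from tensorflow.keras.preprocessing.image import ImageDataGenerator

# Random geometric jitter to enlarge the effective training set.
augmenter = ImageDataGenerator(
    rotation_range=20,
    width_shift_range=0.1,
    height_shift_range=0.1,
    zoom_range=0.15,
    horizontal_flip=True,
    fill_mode="nearest",
)
# example: augmented_batches = augmenter.flow_from_directory("data/train", target_size=(224, 224))
# -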
# +
acc = face_mask_detection.history['accuracy']
val_acc = face_mask_detection.history['val_accuracy']
loss= face_mask_detection.history['loss']
val_loss= face_mask_detection.history['val_loss']
epochs_range = range(10)
plt.figure(figsize=(8, 8))
plt.subplot(1, 2, 1)
plt.plot(epochs_range, acc, label='Training Accuracy')
plt.plot(epochs_range, val_acc, label='Validation Accuracy')
plt.legend(loc='lower right')
plt.title('Training and Validation Accuracy')
plt.subplot(1, 2, 2)
plt.plot(epochs_range, loss, label='Training Loss')
plt.plot(epochs_range, val_loss, label='Validation Loss')
plt.legend(loc='upper right')
plt.title('Training and Validation Loss')
plt.show()
| 5,296 |
/List Data type.ipynb
|
ada92f2e57f6f7e9173df1c622f5ddece332c2a6
|
[] |
no_license
|
ssgearup/List-data-type
|
https://github.com/ssgearup/List-data-type
| 0 | 0 | null | null | null | null |
Jupyter Notebook
| false | false |
.py
| 13,715 |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.15.2
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # List Data type
# +
#Lists are mutable: we can add, update and delete values (a quick demonstration follows below)
# +
# ordered indexing slicing
# -
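# A quick demonstration of adding, updating and deleting values in place:
# +
nums = [1, 2, 3]
nums.append(4)   # add
nums[0] = 10     # update
del nums[1]      # delete
print(nums)      # [10, 3, 4]
# -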
# # Indexing and Slicing
l= [10,20,30,"python","java",[100,200,300]]
print(l)
print(l[0])
print(l[-1][1])
print(l[1:3])
l[::-1]
l[:-1]
for i in l[::2]:
print(i)
l= [10,20,30,"python","java",[100,200,300]]
print(l)
# # append
l.append(600) # every time this cell is executed, another 600 is appended
# and most importantly, append adds the value as a single item, not element by element
l
#
# # Extend
l1= [1,2,3] # extend adds each element one by one and modifies the original list
l.extend([1,2,3])
l
# above same code using append
l1= [1,2,3]
l.append(l1)
l
l.extend("Python")
l
# # insert
l.insert(1,10000)
l
l= [10,20,30]
l2=l
print(id(l),id(l2))
print(l,l2)
# # copy
# after copying, appending a value to the first list does not change the second list
l= [10,20,30]
l2=l.copy()
l.append(40)
print(l,l2)
# # Update
l=[10,20,30,40,50]
l[2]= 300
l
# # Pop
r=l.pop() #last item removed
print(l)
print(l,r)
r=l.pop(1)
print(l,r)
# # remove (one item at a time)
l=[10,20,30,40,50]
l
l.remove(20)
l
l=[10,20,30,20,20,50]# one item at a time
l.remove(20)
l
# # clear(all items cleared)
l.clear()
l
# # del (Whole LIST is deleted from memory location)
del l
# # sort
l=[50,40,30,20,10]
l.sort()
l
# # sorted (works on every data structures)
l=[50,40,30,20,10]
l
sorted(l)
# # index
l.index(30) # if multiple 30s are present it will show for the first.
# # count
l=[10,10,10,10,20,20,40]
l.count(10)
# # Addition of 2 lists
l1=[10,20,30]
l2=[100,200,300]
l3=l1 + l2
l3
# # multiplication
l=[0.1]
print(l*10)
| 1,953 |
/dictionarys.ipynb
|
1b906905402e01f8c83d7b72626f2d8f1feb1677
|
[] |
no_license
|
islamic-code-org/ICO
|
https://github.com/islamic-code-org/ICO
| 1 | 0 | null | null | null | null |
Jupyter Notebook
| false | false |
.py
| 7,621 |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.15.2
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# %matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
import pylab
pylab.rcParams['figure.figsize'] = (15.0, 15.0)
from matplotlib import rc
rc('font', size=18)
from Memristor import Memristor
SM2_TO_M2 = ((10**(-2))**2)
NM_TO_M = 10**(-9)
D = 10 * NM_TO_M
mu = (10**(-10)) * SM2_TO_M2
nu = 100
w0 = D / 10
R_ON = 1
R_OFF = R_ON * 160
U0 = 1
# +
def U_sin(t, U0, nu):
return U0 * np.sin(nu * t)
def Phi_sin(t, U0, nu):
return U0 * (1 - np.cos(nu * t)) / nu
# +
def U_sin_sqr(t, U0, nu):
return U0 * (np.sin(nu * t) ** 2)
def Phi_sin_sqr(t, U0, nu):
return U0 * (2 * nu * t - np.sin(2 * nu * t)) / (4 * nu)
# -
m_sin_sqr1 = Memristor(D, mu, nu, w0, 160, 1, lambda t: U_sin_sqr(t, U0, nu), lambda t: Phi_sin_sqr(t, U0, nu))
m_sin_sqr2 = Memristor(D, mu, nu, w0, 380, 1, lambda t: U_sin_sqr(t, U0, nu), lambda t: Phi_sin_sqr(t, U0, nu))
m_sin_sqr1.task1(plt, 1000, 5)
m_sin_sqr1.task2(plt, 1000, 5)
m_sin_sqr1.task3(plt, 1000, 5)
m_sin_sqr2.task1(plt, 1000, 5)
m_sin_sqr2.task2(plt, 1000, 5)
m_sin_sqr2.task3(plt, 1000, 5)
# lvl1 weights are shared between regions
m = 128
k = 512 # lvl1 nodes
n_fft = 4096 # lvl1 receptive field
window = 16384 # total number of audio samples?
stride = 512
batch_size = 100
# -
# function for returning scientific notation in a plot
def fmt(x, pos):
a, b = '{:.0e}'.format(x).split('e')
b = int(b)
return fr'${a} \times 10^{{{b}}}$'
# +
regions = 1 + (window - n_fft)//stride
def worker_init(args):
signal.signal(signal.SIGINT, signal.SIG_IGN) # ignore signals so parent can handle them
np.random.seed(os.getpid() ^ int(time())) # approximately random seed for workers
kwargs = {'num_workers': 4, 'pin_memory': True, 'worker_init_fn': worker_init}
# -
start = time()
root = '../../../data/'
train_set = musicnet.MusicNet(root=root, epoch_size=train_size
, train=True, download=True, refresh_cache=False,
window=window, mmap=False, pitch_shift=pitch_shift, jitter=jitter)
test_set = musicnet.MusicNet(root=root, train=False, download=True, refresh_cache=False, window=window, epoch_size=test_size, mmap=False)
print("Time used = ", time()-start)
train_loader = torch.utils.data.DataLoader(dataset=train_set,batch_size=batch_size,**kwargs)
test_loader = torch.utils.data.DataLoader(dataset=test_set,batch_size=batch_size,**kwargs)
def create_filtersv2(n_fft, freq_bins=None, low=50,high=6000, sr=44100, freq_scale='linear', mode="fft"):
if freq_bins==None:
freq_bins = n_fft//2+1
s = torch.arange(0, n_fft, 1.)
wsin = torch.empty((freq_bins,1,n_fft))
wcos = torch.empty((freq_bins,1,n_fft))
start_freq = low
end_freq = high
# num_cycles = start_freq*d/44000.
# scaling_ind = np.log(end_freq/start_freq)/k
if mode=="fft":
window_mask = 1
elif mode=="stft":
window_mask = 0.5-0.5*torch.cos(2*math.pi*s/(n_fft)) # same as hann(n_fft, sym=False)
else:
raise Exception("Unknown mode, please chooes either \"stft\" or \"fft\"")
if freq_scale == 'linear':
start_bin = start_freq*n_fft/sr
scaling_ind = (end_freq/start_freq)/freq_bins
for k in range(freq_bins): # Only half of the bins contain useful info
wsin[k,0,:] = window_mask*torch.sin(2*math.pi*(k*scaling_ind*start_bin)*s/n_fft)
wcos[k,0,:] = window_mask*torch.cos(2*math.pi*(k*scaling_ind*start_bin)*s/n_fft)
elif freq_scale == 'log':
start_bin = start_freq*n_fft/sr
scaling_ind = np.log(end_freq/start_freq)/freq_bins
for k in range(freq_bins): # Only half of the bins contain useful info
wsin[k,0,:] = window_mask*torch.sin(2*math.pi*(np.exp(k*scaling_ind)*start_bin)*s/n_fft)
wcos[k,0,:] = window_mask*torch.cos(2*math.pi*(np.exp(k*scaling_ind)*start_bin)*s/n_fft)
else:
print("Please select the correct frequency scale, 'linear' or 'log'")
return wsin,wcos
Loss = torch.nn.MSELoss()
def L(yhatvar,y):
return Loss(yhatvar,y) * 128/2
# +
class Model(torch.nn.Module):
def __init__(self, avg=.9998):
super(Model, self).__init__()
# Create filter windows
wsin, wcos = create_filtersv2(n_fft,k, low=50, high=6000,
mode="stft", freq_scale='log')
self.wsin = torch.Tensor(wsin).to(device)
self.wcos = torch.Tensor(wcos).to(device)
# Creating Layers
# self.linear = torch.nn.Linear(regions*k, k,bias=False)
# self.linear_output = torch.nn.Linear(k,m, bias=False)
# wscale = 10e-5
# torch.nn.init.normal_(self.linear.weight, std=1e-4) # initialize
# torch.nn.init.normal_(self.linear_output.weight, std=1e-4)
# torch.nn.init.zeros_(self.linear.weight)
# torch.nn.init.zeros_(self.linear_output.weight)
self.k_out = 128
self.Layer2 = torch.nn.Conv2d(1,self.k_out,
kernel_size=(128,1),stride=(1,1))
self.Layer3 = torch.nn.Conv2d(128,256,
kernel_size=(1,25),stride=(1,1))
self.Linear = torch.nn.Linear(self.k_out*385*regions, m)
self.activation = torch.nn.ReLU()
self.avg = avg
#Create a container for weight average
self.averages = copy.deepcopy(list(parm.to(device).data for parm in self.parameters()))
def forward(self,x):
zx = conv1d(x[:,None,:], self.wsin, stride=stride).pow(2) \
+ conv1d(x[:,None,:], self.wcos, stride=stride).pow(2)
zx = torch.log(zx + 10e-8) # Log Magnitude Spectrogram
z2 = self.Layer2(zx.unsqueeze(1)) # Make channel as 1 (N,C,H,W)
y = self.Linear(self.activation(z2.view(x.data.size()[0],self.k_out*385*regions)))
return y
def average_iterates(self):
for parm, pavg in zip(self.parameters(), self.averages):
pavg.mul_(self.avg).add_(1.-self.avg, parm.data) # 0.9W_avg + 0.1W_this_ite
@contextmanager
def averages(model):
orig_parms = copy.deepcopy(list(parm.data for parm in model.parameters()))
for parm, pavg in zip(model.parameters(), model.averages):
parm.data.copy_(pavg)
yield
for parm, orig in zip(model.parameters(), orig_parms):
parm.data.copy_(orig)
# -
# # Averaged Weights
model = Model()
model.to(device)
# +
loss_history_train = []
avgp_history_train = []
loss_history_test = []
avgp_history_test = []
avg = .9998
optimizer = torch.optim.SGD(model.parameters(), lr=lr, momentum=.95)
# optimizer = SWA(base_opt, swa_start=0, swa_freq=1, swa_lr=0.000001)
try:
with train_set, test_set:
total_i = len(train_loader)
print("epoch\ttrain loss\ttest loss\ttrain avg\ttest avg\ttime\tutime")
for e in range(epochs):
yground = torch.FloatTensor(batch_size*len(train_loader), m) # what not do this together with loss
yhat = torch.FloatTensor(batch_size*len(train_loader), m)
avgp, loss_e = 0.,0
t = time()
for i, (x,y) in enumerate(train_loader):
print(f"Training {i}/{total_i} batches", end = '\r')
optimizer.zero_grad()
# print(model.Layer2.weight[0][0][0])
# making x and y into pytorch dealable format
x = x.to(device,non_blocking=True)
y = y.to(device,non_blocking=True)
yhatvar = model(x)
loss = L(yhatvar,y)
loss.backward()
loss_e += loss.item() #getting the number
yground[i*batch_size:(i+1)*batch_size] = y.data
yhat[i*batch_size:(i+1)*batch_size] = yhatvar.data
optimizer.step()
                model.average_iterates() # Averaging the weights for validation
plt.figure(figsize=(16,8))
plt.subplot(211)
sns.countplot(x= "payment_plan_days" ,data = data, color="teal")
plt.ylabel('Count', fontsize=12)
plt.xlabel('Payment_Plan_Days', fontsize=12)
plt.xticks(rotation='vertical')
plt.title("Frequency of Payment_Plan_Days Count", fontsize=12)
plt.show()
payment_plan_days_count = Counter(data['payment_plan_days']).most_common()
print("Payment_plan_days Count " +str(payment_plan_days_count))
plt.figure(figsize=(16,8))
plt.subplot(211)
sns.countplot(x= "plan_list_price" ,data = data, color="red")
plt.ylabel('Count', fontsize=12)
plt.xlabel('Plan_list_price', fontsize=12)
plt.xticks(rotation='vertical')
plt.title("Frequency of Plan_list_price ", fontsize=12)
plt.show()
plan_list_price_count = Counter(data['plan_list_price']).most_common()
print("plan_list_price Count " +str(plan_list_price_count))
plt.figure(figsize=(16,8))
plt.subplot(211)
sns.countplot(x= "actual_amount_paid" ,data = data, color="blue")
plt.ylabel('Count', fontsize=12)
plt.xlabel('Actual_amount_paid', fontsize=12)
plt.xticks(rotation='vertical')
plt.title("Frequency of Actual_amount_paid ", fontsize=12)
plt.show()
actual_amount_paid_count = Counter(data['actual_amount_paid']).most_common()
print("actual_amount_paid Count " +str(actual_amount_paid_count))
plt.figure(figsize=(16,8))
plt.subplot(211)
sns.countplot(x= "registered_via" ,data = data, color="coral")
plt.ylabel('Count', fontsize=12)
plt.xlabel('Registered_via', fontsize=12)
plt.xticks(rotation='vertical')
plt.title("Frequency of Registered_via Count", fontsize=12)
plt.show()
registered_via_count = Counter(data['registered_via']).most_common()
print("Registered_Via Count " +str(registered_via_count))
plt.figure(figsize=(16,8))
plt.subplot(211)
sns.countplot(x= "city" ,data = data, color="burlywood")
plt.ylabel('Count', fontsize=12)
plt.xlabel('city', fontsize=12)
plt.xticks(rotation='vertical')
plt.title("Frequency of city Count", fontsize=12)
plt.show()
city_count = Counter(data['city']).most_common()
print("city Count " +str(city_count))
plt.figure(figsize=(16,8))
plt.subplot(211)
sns.countplot(x= "gender" ,data = data, color="salmon")
plt.ylabel('Count', fontsize=12)
plt.xlabel('gender', fontsize=12)
plt.xticks(rotation='horizontal')
plt.title("Frequency of gender Count", fontsize=12)
plt.show()
gender_count = Counter(data['gender']).most_common()
print("gender Count " +str(gender_count))
#registration_init_time yearly trend
data['registration_init_time_year'] = pd.DatetimeIndex(data['registration_init_time']).year
data['registration_init_time_year'] = data.registration_init_time_year.apply(lambda x: int(x) if pd.notnull(x) else "NAN" )
year_count=data['registration_init_time_year'].value_counts()
#print(year_count)
plt.figure(figsize=(12,12))
plt.subplot(311)
year_order = data['registration_init_time_year'].unique()
year_order=sorted(year_order, key=lambda x: str(x))
year_order = sorted(year_order, key=lambda x: float(x))
sns.barplot(year_count.index, year_count.values,order=year_order)
plt.ylabel('Count', fontsize=12)
plt.xlabel('Year', fontsize=12)
plt.xticks(rotation='vertical')
plt.title("Yearly Trend of registration_init_time", fontsize=12)
plt.show()
year_count_2 = Counter(data['registration_init_time_year']).most_common()
print("Yearly Count " +str(year_count_2))
#registration_init_time monthly trend
data['registration_init_time_month'] = pd.DatetimeIndex(data['registration_init_time']).month
data['registration_init_time_month'] = data.registration_init_time_month.apply(lambda x: int(x) if pd.notnull(x) else "NAN" )
month_count=data['registration_init_time_month'].value_counts()
plt.figure(figsize=(12,12))
plt.subplot(312)
month_order = data['registration_init_time_month'].unique()
month_order = sorted(month_order, key=lambda x: str(x))
month_order = sorted(month_order, key=lambda x: float(x))
sns.barplot(month_count.index, month_count.values,order=month_order)
plt.ylabel('Count', fontsize=12)
plt.xlabel('Month', fontsize=12)
plt.xticks(rotation='vertical')
plt.title("Monthly Trend of registration_init_time", fontsize=12)
plt.show()
month_count_2 = Counter(data['registration_init_time_month']).most_common()
print("Monthly Count " +str(month_count_2))
#registration_init_time day wise trend
data['registration_init_time_weekday'] = pd.DatetimeIndex(data['registration_init_time']).weekday_name
data['registration_init_time_weekday'] = data.registration_init_time_weekday.apply(lambda x: str(x) if pd.notnull(x) else "NAN" )
day_count=data['registration_init_time_weekday'].value_counts()
plt.figure(figsize=(12,12))
plt.subplot(313)
day_order = ['Monday','Tuesday','Wednesday','Thursday','Friday','Saturday','Sunday','NAN']
sns.barplot(day_count.index, day_count.values,order=day_order)
plt.ylabel('Count', fontsize=12)
plt.xlabel('Day', fontsize=12)
plt.xticks(rotation='vertical')
plt.title("Day-wise Trend of registration_init_time", fontsize=12)
plt.show()
day_count_2 = Counter(data['registration_init_time_weekday']).most_common()
print("Day-wise Count " +str(day_count_2))
#registration_init_time yearly trend
data['membership_expire_date_year'] = pd.DatetimeIndex(data['membership_expire_date']).year
data['membership_expire_date_year'] = data.membership_expire_date_year.apply(lambda x: int(x) if pd.notnull(x) else "NAN" )
year_count=data['membership_expire_date_year'].value_counts()
#print(year_count)
plt.figure(figsize=(12,12))
plt.subplot(311)
year_order = data['membership_expire_date_year'].unique()
year_order=sorted(year_order, key=lambda x: str(x))
year_order = sorted(year_order, key=lambda x: float(x))
sns.barplot(year_count.index, year_count.values,order=year_order)
plt.ylabel('Count', fontsize=12)
plt.xlabel('Year', fontsize=12)
plt.xticks(rotation='vertical')
plt.title("Yearly Trend of Membership_Expire_Date", fontsize=12)
plt.show()
year_count_2 = Counter(data['membership_expire_date_year']).most_common()
print("Yearly Count " +str(year_count_2))
#membership_expire_date monthly trend
data['membership_expire_date_month'] = pd.DatetimeIndex(data['membership_expire_date']).month
data['membership_expire_date_month'] = data.membership_expire_date_month.apply(lambda x: int(x) if pd.notnull(x) else "NAN" )
month_count=data['membership_expire_date_month'].value_counts()
plt.figure(figsize=(12,12))
plt.subplot(312)
month_order = data['membership_expire_date_month'].unique()
month_order = sorted(month_order, key=lambda x: str(x))
month_order = sorted(month_order, key=lambda x: float(x))
sns.barplot(month_count.index, month_count.values,order=month_order)
plt.ylabel('Count', fontsize=12)
plt.xlabel('Month', fontsize=12)
plt.xticks(rotation='vertical')
plt.title("Monthly Trend of Membership_Expire_Date", fontsize=12)
plt.show()
month_count_2 = Counter(data['membership_expire_date_month']).most_common()
print("Monthly Count " +str(month_count_2))
# #### Bar plot of "is_churn" and "membership_expire_date_month" variables
membership_expire_date_crosstab=pd.crosstab(data['membership_expire_date_month'],data['is_churn'])
membership_expire_date_crosstab.plot(kind='bar',stacked= True, grid=False)
membership_expire_date_crosstab
#transaction_date monthly trend
data['transaction_date_month'] = pd.DatetimeIndex(data['transaction_date']).month
data['transaction_date_month'] = data.transaction_date_month.apply(lambda x: int(x) if pd.notnull(x) else "NAN" )
month_count=data['transaction_date_month'].value_counts()
plt.figure(figsize=(12,12))
plt.subplot(312)
month_order = data['transaction_date_month'].unique()
month_order = sorted(month_order, key=lambda x: str(x))
month_order = sorted(month_order, key=lambda x: float(x))
sns.barplot(month_count.index, month_count.values,order=month_order)
plt.ylabel('Count', fontsize=12)
plt.xlabel('Month', fontsize=12)
plt.xticks(rotation='vertical')
plt.title("Monthly Trend of Transaction_Date", fontsize=12)
plt.show()
month_count_2 = Counter(data['transaction_date_month']).most_common()
print("Monthly Count " +str(month_count_2))
# ## Observations:-
# - Those who put auto_renewal for their payments are less likely to cancel the subscription.
# - Also if auto_renewal is on, more people like to renew their subcriptions.
# - About 92.5% people put their payments on Auto_renewal.
# - No effect of gender or registered_via on churn rate.
# - In the last 4-5 years, there has been a larger increase in the subscription rate.
# - A large share of the observations are from city 1.
# - About 59% of people registered via method 7 and used payment method id 41 to make payments.
# - More new registrations at the start and end of the year
# - About 93% people enrolled in 30 days payment plan.
# - About 55% people opted $149 price plan and paid actual amount.
# - On weekends there are more new registrations for the subscription.
# - Membership expires mainly in starting months of the year till march and seems like more churn rate in month of February.
# - There are more transactions in the month of January and February may be due to new subscription plans are available.
# # Inferential Statistics Part
# #### Correlation graph between various variables
plt.figure(figsize=(14,12))
sns.heatmap(data.corr(), linewidths=.1, cmap="YlGnBu", annot=True)
plt.yticks(rotation=0)
# - We can see from the graph above that there is a strong correlation between the payment_plan_days, actual_amount_paid, and plan_list_price features.
# - The relationship between membership_expire_date_month and transaction_date_month means that most subscriptions expire, and most new transactions are made, in the starting months of the year.
# - Also is_churn and is_auto_renewal are negatively correlated to each other.
# ## Inferential Statistics
#
# - We want to check whether there is any relationship between
#   - is_auto_renew and is_churn
#   - is_cancel and is_churn.
# - We will be using the Chi-Squared test to check the association between these variables.
# # First, let's take a look at the assumptions:
#
# - Our data is reasonably random and is a good representation of the population
# - The variables under study are categorical.
# - The expected number of sample observations at each level of the variables is more than 5.
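# As a quick check of the third assumption, the expected cell counts returned by `chi2_contingency` can be inspected directly; the small helper below is a sketch, meant to be run once the contingency table `f_obs` has been built further down.
from scipy import stats
import numpy as np
def check_expected_counts(observed):
    """Return (all_above_5, expected) from the chi-squared expected cell counts."""
    expected = stats.chi2_contingency(observed)[3]
    return bool(np.all(expected > 5)), expected
# usage (after f_obs is defined): check_expected_counts(f_obs)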
# #### The Null and Alternate Hypotheses:-
# - Recall that we are interested in knowing if there is a relationship between 'is_churn' and 'is_auto_renew'.
# - In order to do so, we will use the Chi-squared test and set our significance level to 0.05. But first, let's state our null hypothesis and the alternative hypothesis.
#
# - H0:There is no statistically significant relationship between 'is_churn' and 'is_auto_renew'
# - Ha:There is a statistically significant relationship between 'is_churn' and 'is_auto_renew'
# #### Constructing the Contingency Table
# - The next step is to format the data into a frequency count table.
# - This is called a Contingency Table, we can accomplish this by using the pd.crosstab() function in pandas.
contingency_table = pd.crosstab(
data['is_churn'],
data['is_auto_renew'],
margins = True)
contingency_table
f_obs = np.array([contingency_table.iloc[0][0:2].values,
contingency_table.iloc[1][0:2].values])
print(f_obs)
from scipy import stats
stats.chi2_contingency(f_obs)[0:3]
# - Chisq_test_statistic = 632301.26, P-value ~ 0, degree_of_freedom = 1
# #### Conclusions:-
#
# - With a p-value < 0.05 , we can reject the null hypothesis. There is definitely some sort of statistically significant relationship between 'is_churn' and the 'is_auto_renew' column.
# - We don't know what this relationship is, but we do know that these two variables are not independent of each other.
# #### Relationship between 'is_churn' and 'is_cancel' Variables
#
# The Null and Alternate Hypotheses:-
# - Recall that we are interested in knowing if there is a relationship between 'is_churn' and 'is_cancel'. In order to do so, we will use the Chi-squared test and set our significance level to 0.05. But first, let's state our null hypothesis and the alternative hypothesis.
#
# - H0:There is no statistically significant relationship between 'is_churn' and 'is_cancel'
# - Ha:There is a statistically significant relationship between 'is_churn' and 'is_cancel'
contingency_table1 = pd.crosstab(
data['is_churn'],
data['is_cancel'],
margins = True)
contingency_table1
f_obs1 = np.array([contingency_table1.iloc[0][0:2].values,
contingency_table1.iloc[1][0:2].values])
print(f_obs1)
from scipy import stats
stats.chi2_contingency(f_obs1)[0:3]
# - Chisq_test_statistic = 56664.17, P-value ~ 0, degree_of_freedom = 1
# #### Conclusions:-
#
# - With a p-value < 0.05 , we can reject the null hypothesis. There is definitely some sort of statistically significant relationship between 'is_churn' and the 'is_cancel' column.
# - We don't know what this relationship is, but we do know that these two variables are not independent of each other.
# ## Logistic Regression
| 21,141 |
/random_decision_forest.ipynb
|
7baec00872ec7fb7c51a3831afecdddeefcf963c
|
[] |
no_license
|
himani-de/ml_randomdecisionforest
|
https://github.com/himani-de/ml_randomdecisionforest
| 0 | 0 | null | null | null | null |
Jupyter Notebook
| false | false |
.py
| 18,817 |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.15.2
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import numpy as np
import pandas as pd
from sklearn.datasets import load_iris # imported data set available in sklearn
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier
np.random.seed(0)
iris = load_iris() # load the data set
df = pd.DataFrame(iris.data, columns = iris.feature_names) # creating data frame with features
df.head() # to get first 5 rows
df['Species']= pd.Categorical.from_codes(iris.target,iris.target_names)
df.head()
df['is_train']= np.random.uniform(0,1,len(df)) <=0.75 # creating train and test data where 75% of the data is used for training
# 25% for testing
df.head()
# create train and test datafraame
train, test = df[df['is_train']==True],df[df['is_train']==False] # rows with is_train==True go to train and is_train==False go to test
print('Number of observation in the training data:',len(train))
print('Number of observation in the test data:',len(test))
features = df.columns[:4]
features
y = pd.factorize(train['Species'])[0]
y
# creating random forest classifier
clf = RandomForestClassifier(n_jobs=2, random_state =0)
clf.fit(train[features],y)
clf.predict(test[features])
clf.predict_proba(test[features])[0:10] # predict probability
# mapping names of plants against probablity
pred = iris.target_names[clf.predict(test[features])]
pred[0:25]
# check species of first five observations
test['Species'].head()
#creating confusion matrix, i.e., cross-tabulating test['Species'] against pred
pd.crosstab(test['Species'],pred, rownames=['Actual species'], colnames=['Predicted species'])
| 1,835 |
/notes-jupyter/notes-python3-matplot.ipynb
|
9a1b805168384361eba89665a84d591ac619a554
|
[] |
no_license
|
ZX1209/JupyterNoteBook-files
|
https://github.com/ZX1209/JupyterNoteBook-files
| 0 | 0 | null | null | null | null |
Jupyter Notebook
| false | false |
.py
| 47,472 |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.15.2
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import matplotlib.pyplot as plt
#the first list gives the x coordinates; the second argument is the list of y coordinates
#the two lists must have the same length
#the third argument is the style string; see the reference below
plt.plot([1, 2, 3, 4], [1, 4, 9, 16],'b+')
#set the x and y axis labels
plt.ylabel('y label')
plt.xlabel('x label')
#show
plt.show()
# + active=""
# By default, each line is assigned a different style specified by a ‘style cycle’. To change this behavior, you can edit the axes.prop_cycle rcParam.
#
# The following format string characters are accepted to control the line style or marker:
#
# character description
# '-' solid line style
# '--' dashed line style
# '-.' dash-dot line style
# ':' dotted line style
# '.' point marker
# ',' pixel marker
# 'o' circle marker
# 'v' triangle_down marker
# '^' triangle_up marker
# '<' triangle_left marker
# '>' triangle_right marker
# '1' tri_down marker
# '2' tri_up marker
# '3' tri_left marker
# '4' tri_right marker
# 's' square marker
# 'p' pentagon marker
# '*' star marker
# 'h' hexagon1 marker
# 'H' hexagon2 marker
# '+' plus marker
# 'x' x marker
# 'D' diamond marker
# 'd' thin_diamond marker
# '|' vline marker
# '_' hline marker
# The following color abbreviations are supported:
#
# character color
# ‘b’ blue
# ‘g’ green
# ‘r’ red
# ‘c’ cyan
# ‘m’ magenta
# ‘y’ yellow
# ‘k’ black
# ‘w’ white
# -
import numpy as np
t = np.arange(0.0,5.0,0.2)
#the arguments are grouped in threes: x, y, style
plt.plot(t,t,'r--',t,t**2,'bs',t,t**3,'g^')
plt.show()
# +
import random
daddl2=[1,4,67,7]
for i in range(10):
    plt.plot(daddl2);
    daddl2.append(random.randrange(30))
plt.show()
# +
import random
daddl = [1,2]
plt.close()
plt.ion()
for i in range(10):
plt.clf()
plt.plot(daddl);
daddl.append(random.randrange(30))
plt.pause(1)
plt.ioff()
# +
# -*- coding: utf-8 -*-
"""
Created on Sat Mar 25 23:28:29 2017
@author: wyl
"""
import matplotlib.pyplot as plt
from matplotlib.patches import Circle
import numpy as np
import math
plt.close()  # clf() clears the figure, cla() clears the axes, close() closes the window
fig=plt.figure()
ax=fig.add_subplot(1,1,1)
ax.axis("equal") #设置图像显示的时候XY轴比例
plt.grid(True) #添加网格
plt.ion() #interactive mode on
IniObsX=0000
IniObsY=4000
IniObsAngle=135
IniObsSpeed=10*math.sqrt(2) # metres per second
print('Starting simulation')
try:
for t in range(180):
        # trajectory of the obstacle ship
obsX=IniObsX+IniObsSpeed*math.sin(IniObsAngle/180*math.pi)*t
obsY=IniObsY+IniObsSpeed*math.cos(IniObsAngle/180*math.pi)*t
        ax.scatter(obsX,obsY,c='b',marker='.') # scatter plot of the ship's position
        #ax.lines.pop(1)  # remove the plotted trajectory
        # figure below: distance between the two ships
plt.pause(0.001)
except Exception as err:
print(err)
| 2,744 |
/public/2018/07/15/keras教程-01-神经网络图解和手动计算/keras教程-01-神经网络图解和手动计算.ipynb
|
8a232394a6473c19fa66fcf0a4c99fe695dbf0c0
|
[
"MIT"
] |
permissive
|
xxxspy/mlln.cn
|
https://github.com/xxxspy/mlln.cn
| 2 | 0 | null | null | null | null |
Jupyter Notebook
| false | false |
.py
| 3,605 |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.15.2
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import numpy as np
from numpy.linalg import inv as inv
from numpy.linalg import qr as qr
from numpy.linalg import svd as svd
def FRSVT(Z, tau, k, J):
"""Fast Randomized Singular Value Thresholding (FRSVT)."""
m, n = Z.shape
R = np.random.randn(n, k)
Q, r = qr(np.matmul(Z, R)) # m-by-k
for j in range(J):
Q, r = qr(np.matmul(Z, np.matmul(Z.T, Q)))
U, Sigma, V = svd(np.matmul(Q.T, Z), full_matrices = 0)
s = Sigma - tau
pos = np.where(s < 0)
s[pos] = 0
return np.matmul(np.matmul(np.matmul(Q, U), np.diag(s)), V)
# -
def SVT(X, tau):
U, Sigma, V = svd(X, full_matrices = 0)
s = Sigma - tau
pos = np.where(s < 0)
s[pos] = 0
return np.matmul(np.matmul(U, np.diag(s)), V)
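# Added note (not in the original notebook): both functions apply the singular
# value thresholding (soft-shrinkage) operator
#     $\mathcal{D}_{\tau}(Z) = U \, \mathrm{diag}\big(\max(\sigma_i - \tau, 0)\big) \, V^{\top}$,
# where $Z = U \, \mathrm{diag}(\sigma_i) \, V^{\top}$ is the SVD of $Z$.
# SVT computes the full SVD of the m-by-n matrix, while FRSVT first builds an
# orthonormal basis Q for an approximate rank-k range of Z from a random
# projection (refined by J QR/power iterations) and then thresholds the much
# smaller k-by-n matrix Q^T Z, which is cheaper whenever k << min(m, n).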
# +
Z = np.random.rand(5000, 3000)
tau = 1
k = 20
J = 10
import time
start = time.time()
out1 = FRSVT(Z, tau, k, J)
end = time.time()
print('Running time of FRSVT: %d seconds.'%(end - start))
print()
start = time.time()
out2 = SVT(Z, tau)
end = time.time()
print('Running time of SVT: %d seconds.'%(end - start))
print()
print(np.sum(np.abs(out1 - out2)))
# +
# Z = np.random.rand(5000, 3000)
tau = 1
k = 200
J = 10
import time
start = time.time()
out1 = FRSVT(Z, tau, k, J)
end = time.time()
print('Running time of FRSVT: %d seconds.'%(end - start))
print()
start = time.time()
out2 = SVT(Z, tau)
end = time.time()
print('Running time of SVT: %d seconds.'%(end - start))
print()
print(np.sum(np.abs(out1 - out2)))
# +
# Z = np.random.rand(5000, 3000)
tau = 1
k = 20
J = 100
import time
start = time.time()
out1 = FRSVT(Z, tau, k, J)
end = time.time()
print('Running time of FRSVT: %d seconds.'%(end - start))
print()
start = time.time()
out2 = SVT(Z, tau)
end = time.time()
print('Running time of SVT: %d seconds.'%(end - start))
print()
print(np.sum(np.abs(out1 - out2)))
# +
# Z = np.random.rand(5000, 3000)
tau = 1
k = 20
J = 3
import time
start = time.time()
out1 = FRSVT(Z, tau, k, J)
end = time.time()
print('Running time of FRSVT: %d seconds.'%(end - start))
print()
start = time.time()
out2 = SVT(Z, tau)
end = time.time()
print('Running time of SVT: %d seconds.'%(end - start))
print()
print(np.sum(np.abs(out1 - out2)))
# +
# Z = np.random.rand(5000, 3000)
tau = 1
k = 500
J = 3
import time
start = time.time()
out1 = FRSVT(Z, tau, k, J)
end = time.time()
print('Running time of FRSVT: %d seconds.'%(end - start))
print()
start = time.time()
out2 = SVT(Z, tau)
end = time.time()
print('Running time of SVT: %d seconds.'%(end - start))
print()
print(np.sum(np.abs(out1 - out2)))
# +
# Z = np.random.rand(5000, 3000)
tau = 1
k = 1000
J = 3
import time
start = time.time()
out1 = FRSVT(Z, tau, k, J)
end = time.time()
print('Running time of FRSVT: %d seconds.'%(end - start))
print()
start = time.time()
out2 = SVT(Z, tau)
end = time.time()
print('Running time of SVT: %d seconds.'%(end - start))
print()
print(np.sum(np.abs(out1 - out2)))
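# A small addition (not in the original notebook): a relative error is often
# easier to interpret than the absolute element-wise sum printed above.
rel_err = np.linalg.norm(out1 - out2) / np.linalg.norm(out2)
print('Relative Frobenius error of FRSVT vs. SVT: %.2e' % rel_err)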
| 3,190 |
/recursion/Return-Subsets.ipynb
|
24b5e22661e2d870afd6414b5ca8c1d3aa1ed047
|
[] |
no_license
|
ksprashu/exercises-udacity_nd_ds_algo
|
https://github.com/ksprashu/exercises-udacity_nd_ds_algo
| 1 | 0 | null | null | null | null |
Jupyter Notebook
| false | false |
.py
| 6,521 |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.15.2
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + hideCode=false hidePrompt=false
import xmltodict
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.cross_validation import train_test_split
from sklearn.metrics import accuracy_score
from sklearn.preprocessing import OneHotEncoder, scale
import os, zipfile
import enum
import pandas as pd
from collections import Counter
import matplotlib.pyplot as plt
from bokeh.charts import Bar, Histogram, Scatter
from bokeh.io import output_notebook, show
from bokeh.layouts import row
import threading
import json
import sys
from importlib import reload
from collections import defaultdict
sys.path.append('/Users/jason.katz/Desktop/data-science/ad_hoc/Opta')
# %load_ext autotime
# -
import shots_object
reload(shots_object)
from shots_object import Shot
# + hideCode=false hidePrompt=false
output_notebook()
# + code_folding=[0, 38] hideCode=false hidePrompt=false
def find_file_path(folder_path):
zip_file_name = os.listdir(folder_path)[-1]
if zip_file_name.endswith(".zip"): # check for ".zip" extension
zip_file_path = folder_path + '/' + zip_file_name # get full path of files
zip_ref = zipfile.ZipFile(zip_file_path, 'r') # create zipfile object
zip_ref.extractall(folder_path) # extract file to dir
zip_ref.close() # close file
os.remove(zip_file_path) # delete zipped file
folder_items = os.listdir(folder_path)
for file_name in folder_items:
if 'f24' in file_name:
return folder_path + '/' + file_name
elif 'Opta' in file_name or 'opta_import' == file_name:
sub_folder_path = folder_path + '/' + file_name
sub_folder_items = os.listdir(sub_folder_path)
for file_name2 in sub_folder_items:
if 'f24' in file_name2:
return sub_folder_path + '/' + file_name2
def create_static_instances(file_path, instances):
with open(file_path) as fd:
items = []
data = xmltodict.parse(fd.read())
game = data['Games']['Game']['@id']
home_team_id = data['Games']['Game']['@home_team_id']
away_team_id = data['Games']['Game']['@away_team_id']
home_team_name = data['Games']['Game']['@home_team_name']
away_team_name = data['Games']['Game']['@away_team_name']
events = data['Games']['Game']['Event']
for idx, item in enumerate(events):
error = False
            if item['@type_id'] in ['30', '32', '34']:
                # these event types carry no meaningful pitch location; centre them
                events[idx]['@x'] = '50.0'
                events[idx]['@y'] = '50.0'
            elif item['@type_id'] in ['17', '18', '19', '20', '21', '22', '23', '24', '25', '35', '40']:
                # these event types inherit the location of the previous event,
                # mirrored when the previous event belongs to the other team
                if item['@team_id'] == events[idx-1]['@team_id']:
                    events[idx]['@x'] = events[idx-1]['@x']
                    events[idx]['@y'] = events[idx-1]['@y']
                else:
                    events[idx]['@x'] = str(abs(100-float(events[idx-1]['@x'])))
                    events[idx]['@y'] = str(abs(100-float(events[idx-1]['@y'])))
events[idx]['@index'] = idx
events[idx]['@time'] = int(events[idx]['@min']) * 60 + int(events[idx]['@sec'])
if item['@type_id'] in ['13', '14', '15', '16'] and all(x not in [qualifier['@qualifier_id'] for qualifier in item['Q']] for x in ['9','28']):
item['@game_id'] = game
item['@home_team_id'] = home_team_id
item['@away_team_id'] = away_team_id
item['@home_team_name'] = home_team_name
item['@away_team_name'] = away_team_name
                # Approach speed relative to each of the previous seven events;
                # the x coordinate of the earlier event is mirrored when it belongs
                # to the opposing team, and a bad or zero time delta flags an error.
                for n in range(1, 8):
                    prev = events[idx - n]
                    if item['@team_id'] == prev['@team_id']:
                        try:
                            item['@speed_{}'.format(n)] = abs(float(prev['@x']) - float(item['@x'])) / (item['@time'] - prev['@time'])
                        except:
                            error = True
                    else:
                        try:
                            item['@speed_{}'.format(n)] = abs(abs(100-float(prev['@x'])) - float(item['@x'])) / (item['@time'] - prev['@time'])
                        except:
                            error = True
if not error:
items.append(item)
instances.extend(items)
def process_folder(folder_path, instances):
file_path = find_file_path(folder_path)
create_static_instances(file_path, instances)
# + code_folding=[0] hideCode=false hidePrompt=false
class ImportOpta(object):
def __init__(self, path, run=True):
self.path = path
self.file_paths = []
self.instances = []
if run:
print("Beginning upload of games")
self.create_instances_fast()
print("Finished uploading games")
def create_instances_fast(self):
threads = []
counter = 0
for folder_name in os.listdir(self.path)[1:]: # loop through items in dir
counter += 1
folder_path = self.path + '/' + folder_name
args = (folder_path, self.instances)
thread = threading.Thread(target=process_folder, args=args)
thread.start()
threads.append(thread)
print("Uploaded game {} of {}".format(counter, len(os.listdir(self.path)[1:])))
for thread in threads:
thread.join()
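# A hypothetical usage sketch (the directory path is illustrative, not from the
# original notebook): ImportOpta walks every game folder in parallel and collects
# the raw shot events, which can then be wrapped in the Shot helper class.
# +
importer = ImportOpta('/path/to/opta/games')  # hypothetical data directory
shots = [Shot(item) for item in importer.instances]
print('Parsed {} shots'.format(len(shots)))
# -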
# + code_folding=[0, 7, 26, 44, 53, 60, 65, 70, 75, 80, 85, 90, 95, 100, 106, 112, 117, 123, 129, 135, 140] hideCode=false hidePrompt=false
class BodyPart(enum.Enum):
head = 0
left_foot = 1
right_foot = 2
other = 3
none = 4
class ShotPitchLocation(enum.Enum):
small_box_center = 0
box_center = 1
out_of_box_center = 2
center_35_plus = 3
small_box_right = 4
small_box_left = 5
box_deep_right = 6
box_right = 7
box_left = 8
box_deep_left = 9
out_of_box_deep_right = 10
out_of_box_right = 11
out_of_box_left = 12
out_of_box_deep_left = 13
right_35_plus = 14
left_35_plus = 15
none = 16
class ShotGoalLocation(enum.Enum):
left = 0
high = 1
right = 2
low_left = 3
high_left = 4
low_center = 5
high_center = 6
low_right = 7
high_right = 8
blocked = 9
close_left = 10
close_right = 11
close_high = 12
close_left_and_high = 13
close_right_and_high = 14
none = 15
class PatternOfPlay(enum.Enum):
regular_play = 0
fast_break = 1
set_piece = 2
from_corner = 3
from_kick = 4
throw_in = 5
none = 6
class ShotResult(enum.Enum):
miss = 0
post = 1
saved = 2
goal = 3
none = 4
class Assisted(enum.Enum):
yes = 1
no = 0
none = 2
class IntentionalAssist(enum.Enum):
yes = 1
no = 0
none = 2
class Strong(enum.Enum):
yes = 1
no = 0
none = 2
class Swerved(enum.Enum):
yes = 1
no = 0
none = 2
class Deflection(enum.Enum):
yes = 1
no = 0
none = 2
class BigChance(enum.Enum):
yes = 1
no = 0
none = 2
class Weak(enum.Enum):
yes = 1
no = 0
none = 2
class IndividualPlay(enum.Enum):
yes = 1
no = 0
none = 2
class RightFoot(enum.Enum):
yes = 1
no = 0
none = 2
class LeftFoot(enum.Enum):
yes = 1
no = 0
none = 2
class OtherBodyPart(enum.Enum):
yes = 1
no = 0
none = 2
class Header(enum.Enum):
yes = 1
no = 0
none = 2
class FastBreak(enum.Enum):
yes = 1
no = 0
none = 2
class SetPiece(enum.Enum):
yes = 1
no = 0
none = 2
class FromKick(enum.Enum):
yes = 1
no = 0
none = 2
class Made(enum.Enum):
yes = 1
no = 0
class AlwaysTrue(enum.Enum):
yes = 1
no = 0
class WhichTeam(enum.Enum):
home = 1
away = 0
# + code_folding=[] hideCode=false hidePrompt=false
class Shot(object):
def __init__(self, item):
self.item = item
self.pitch_length = 105.0
self.pitch_width = 68.0
@property
def always_true(self):
return AlwaysTrue.yes
@property
def game(self):
return self.item['@game_id']
@property
def home_team_id(self):
return self.item['@home_team_id']
@property
def home_team_name(self):
return self.item['@home_team_name']
@property
def away_team_id(self):
return self.item['@away_team_id']
@property
def away_team_name(self):
return self.item['@away_team_name']
@property
def x_raw(self):
return float(self.item['@x'])
@property
def y_raw(self):
return float(self.item['@y'])
@property
def x(se
| 12,288 |