file_id (stringlengths 5-9) | content (stringlengths 100-5.25M) | local_path (stringlengths 66-70) | kaggle_dataset_name (stringlengths 3-50, ⌀) | kaggle_dataset_owner (stringlengths 3-20, ⌀) | kversion (stringlengths 497-763, ⌀) | kversion_datasetsources (stringlengths 71-5.46k, ⌀) | dataset_versions (stringlengths 338-235k, ⌀) | datasets (stringlengths 334-371, ⌀) | users (stringlengths 111-264, ⌀) | script (stringlengths 100-5.25M) | df_info (stringlengths 0-4.87M) | has_data_info (bool, 2 classes) | nb_filenames (int64 0-370) | retreived_data_description (stringlengths 0-4.44M) | script_nb_tokens (int64 25-663k) | upvotes (int64 0-1.65k) | tokens_description (int64 25-663k) | tokens_script (int64 25-663k) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
129545025
|
<jupyter_start><jupyter_text>Cirrhosis Prediction Dataset
### Similar Datasets
- Hepatitis C Dataset: [LINK](https://www.kaggle.com/fedesoriano/hepatitis-c-dataset)
- Body Fat Prediction Dataset: [LINK](https://www.kaggle.com/fedesoriano/body-fat-prediction-dataset)
- Stroke Prediction Dataset: [LINK](https://www.kaggle.com/fedesoriano/stroke-prediction-dataset)
- Wind Speed Prediction Dataset: [LINK](https://www.kaggle.com/datasets/fedesoriano/wind-speed-prediction-dataset)
- Spanish Wine Quality Dataset: [LINK](https://www.kaggle.com/datasets/fedesoriano/spanish-wine-quality-dataset)
### Context
Cirrhosis is a late stage of scarring (fibrosis) of the liver caused by many forms of liver diseases and conditions, such as hepatitis and chronic alcoholism. The following data contains the information collected from the Mayo Clinic trial in primary biliary cirrhosis (PBC) of the liver conducted between 1974 and 1984. A description of the clinical background for the trial and the covariates recorded here is in Chapter 0, especially Section 0.2 of Fleming and Harrington, Counting
Processes and Survival Analysis, Wiley, 1991. A more extended discussion can be found in Dickson, et al., Hepatology 10:1-7 (1989) and in Markus, et al., N Eng J of Med 320:1709-13 (1989).
A total of 424 PBC patients, referred to Mayo Clinic during that ten-year interval, met eligibility criteria for the randomized placebo-controlled trial of the drug D-penicillamine. The first 312 cases in the dataset participated in the randomized trial and contain largely complete data. The additional 112 cases did not participate in the clinical trial but consented to have basic measurements recorded and to be followed for survival. Six of those cases were lost to follow-up shortly after diagnosis, so the data here are on an additional 106 cases as well as the 312 randomized participants.
### Attribute Information
1) ID: unique identifier
2) N\_Days: number of days between registration and the earlier of death, transplantation, or study analysis time in July 1986
3) Status: status of the patient C (censored), CL (censored due to liver tx), or D (death)
4) Drug: type of drug D-penicillamine or placebo
5) Age: age in [days]
6) Sex: M (male) or F (female)
7) Ascites: presence of ascites N (No) or Y (Yes)
8) Hepatomegaly: presence of hepatomegaly N (No) or Y (Yes)
9) Spiders: presence of spiders N (No) or Y (Yes)
10) Edema: presence of edema N (no edema and no diuretic therapy for edema), S (edema present without diuretics, or edema resolved by diuretics), or Y (edema despite diuretic therapy)
11) Bilirubin: serum bilirubin in [mg/dl]
12) Cholesterol: serum cholesterol in [mg/dl]
13) Albumin: albumin in [gm/dl]
14) Copper: urine copper in [ug/day]
15) Alk\_Phos: alkaline phosphatase in [U/liter]
16) SGOT: SGOT in [U/ml]
17) Triglycerides: triglycerides in [mg/dl]
18) Platelets: platelets per cubic [ml/1000]
19) Prothrombin: prothrombin time in seconds [s]
20) Stage: histologic stage of disease (1, 2, 3, or 4)
Kaggle dataset identifier: cirrhosis-prediction-dataset
<jupyter_code>import pandas as pd
df = pd.read_csv('cirrhosis-prediction-dataset/cirrhosis.csv')
df.info()
<jupyter_output><class 'pandas.core.frame.DataFrame'>
RangeIndex: 418 entries, 0 to 417
Data columns (total 20 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 ID 418 non-null int64
1 N_Days 418 non-null int64
2 Status 418 non-null object
3 Drug 312 non-null object
4 Age 418 non-null int64
5 Sex 418 non-null object
6 Ascites 312 non-null object
7 Hepatomegaly 312 non-null object
8 Spiders 312 non-null object
9 Edema 418 non-null object
10 Bilirubin 418 non-null float64
11 Cholesterol 284 non-null float64
12 Albumin 418 non-null float64
13 Copper 310 non-null float64
14 Alk_Phos 312 non-null float64
15 SGOT 312 non-null float64
16 Tryglicerides 282 non-null float64
17 Platelets 407 non-null float64
18 Prothrombin 416 non-null float64
19 Stage 412 non-null float64
dtypes: float64(10), int64(3), object(7)
memory usage: 65.4+ KB
<jupyter_text>Examples:
{
"ID": 1,
"N_Days": 400,
"Status": "D",
"Drug": "D-penicillamine",
"Age": 21464,
"Sex": "F",
"Ascites": "Y",
"Hepatomegaly": "Y",
"Spiders": "Y",
"Edema": "Y",
"Bilirubin": 14.5,
"Cholesterol": 261,
"Albumin": 2.6,
"Copper": 156,
"Alk_Phos": 1718.0,
"SGOT": 137.95,
"Tryglicerides": 172,
"Platelets": 190,
"Prothrombin": 12.2,
"Stage": 4
}
{
"ID": 2,
"N_Days": 4500,
"Status": "C",
"Drug": "D-penicillamine",
"Age": 20617,
"Sex": "F",
"Ascites": "N",
"Hepatomegaly": "Y",
"Spiders": "Y",
"Edema": "N",
"Bilirubin": 1.1,
"Cholesterol": 302,
"Albumin": 4.14,
"Copper": 54,
"Alk_Phos": 7394.8,
"SGOT": 113.52,
"Tryglicerides": 88,
"Platelets": 221,
"Prothrombin": 10.6,
"Stage": 3
}
{
"ID": 3,
"N_Days": 1012,
"Status": "D",
"Drug": "D-penicillamine",
"Age": 25594,
"Sex": "M",
"Ascites": "N",
"Hepatomegaly": "N",
"Spiders": "N",
"Edema": "S",
"Bilirubin": 1.4,
"Cholesterol": 176,
"Albumin": 3.48,
"Copper": 210,
"Alk_Phos": 516.0,
"SGOT": 96.1,
"Tryglicerides": 55,
"Platelets": 151,
"Prothrombin": 12.0,
"Stage": 4
}
{
"ID": 4,
"N_Days": 1925,
"Status": "D",
"Drug": "D-penicillamine",
"Age": 19994,
"Sex": "F",
"Ascites": "N",
"Hepatomegaly": "Y",
"Spiders": "Y",
"Edema": "S",
"Bilirubin": 1.8,
"Cholesterol": 244,
"Albumin": 2.54,
"Copper": 64,
"Alk_Phos": 6121.8,
"SGOT": 60.63,
"Tryglicerides": 92,
"Platelets": 183,
"Prothrombin": 10.3,
"Stage": 4
}
<jupyter_script># # Cirrhosis Prediction
# **SCENARIO**
# Cirrhosis is a chronic liver disease that occurs when healthy liver tissue is replaced by scar tissue, leading to a progressive loss of liver function. The scar tissue that forms in the liver can block blood flow through the liver and prevent it from working properly. Cirrhosis can have a number of causes, including chronic hepatitis B or C, alcohol abuse, nonalcoholic fatty liver disease, and autoimmune disorders. Symptoms of cirrhosis can include fatigue, jaundice, itching, bruising easily, and abdominal swelling.
# There is no cure for cirrhosis, but treatment can help manage the symptoms and slow the progression of the disease. Diagnosis involves procedures such as blood tests, medical history analysis, physical examination, and imaging studies. The late stages of cirrhosis are easy to diagnose, but by then the liver is severely damaged and can no longer function properly.
# Early diagnosis is necessary to save the patient's liver. A liver biopsy can identify cirrhosis at an early stage, but it is invasive and time-consuming. So there is a need for an intelligent system that can identify cirrhosis at earlier stages with the help of different biomarkers.
# **PROBLEM STATEMENT**
# In this project our goal is to identify liver cirrhosis at different stages. The problem is formulated as a classification problem to identify cirrhosis at four different stages (1 to 4) using different biomarkers.
# **DATASET**
# The dataset used contains the information collected from the Mayo Clinic trial in primary biliary cirrhosis (PBC) of the liver conducted between 1974 and 1984.
# A total of 424 PBC patients, referred to Mayo Clinic during that ten-year interval, met eligibility criteria for the randomized placebo-controlled trial of the drug D-penicillamine. The first 312 cases in the dataset participated in the randomized trial and contain largely complete data. The additional 112 cases did not participate in the clinical trial but consented to have basic measurements recorded and to be followed for survival. Six of those cases were lost to follow-up shortly after diagnosis, so the data here are on an additional 106 cases as well as the 312 randomized participants.
# **Attribute Information**
# 1. ID: unique identifier
# 2. N_Days: number of days between registration and the earlier of death, transplantation, or study analysis time in July 1986
# 3. Status: status of the patient C (censored), CL (censored due to liver tx), or D (death)
# 4. Drug: type of drug D-penicillamine or placebo
# 5. Age: age in [days]
# 6. Sex: M (male) or F (female)
# 7. Ascites: presence of ascites N (No) or Y (Yes)
# 8. Hepatomegaly: presence of hepatomegaly N (No) or Y (Yes)
# 9. Spiders: presence of spiders N (No) or Y (Yes)
# 10. Edema: presence of edema N (no edema and no diuretic therapy for edema), S (edema present without diuretics, or edema resolved by diuretics), or Y (edema despite diuretic therapy)
# 11. Bilirubin: serum bilirubin in [mg/dl]
# 12. Cholesterol: serum cholesterol in [mg/dl]
# 13. Albumin: albumin in [gm/dl]
# 14. Copper: urine copper in [ug/day]
# 15. Alk_Phos: alkaline phosphatase in [U/liter]
# 16. SGOT: SGOT in [U/ml]
# 17. Triglycerides: triglycerides in [mg/dl]
# 18. Platelets: platelets per cubic [ml/1000]
# 19. Prothrombin: prothrombin time in seconds [s]
# 20. Stage: histologic stage of disease (1, 2, 3, or 4)
# import necessary libraries
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
from imblearn.over_sampling import SMOTE
from sklearn.preprocessing import StandardScaler, MinMaxScaler
from sklearn.model_selection import train_test_split
# ## Learn the data
cir = pd.read_csv("../input/cirrhosis-prediction-dataset/cirrhosis.csv")
cir.head()
# After observing the data using the embedding projector, we found that the data for stage 1 (healthy liver) is very noisy, so we will stick with only stages 2, 3 and 4
cir = cir[cir["Stage"] != 1]
# drop irrelevant columns
cir.drop(["ID"], axis=1, inplace=True)
# convert age to years from days
cir["Age"] = cir["Age"] / 365
# look at the possible values in categorical columns
for col in cir.columns:
if cir[col].dtype == "O":
print(col, ": ", cir[col].unique())
# short glance of numerical data
cir.describe().T.style.background_gradient(cmap="BuGn")
# The numerical attributes have varying ranges. We will perform feature scaling later, which is required for models that do distance-based calculations.
# ## Work on missing values
# First lets check number of null values in each column.
a = cir.isna().sum()
sns.barplot(y=a.index, x=a.values, palette="flare")
plt.title("Missing value count")
plt.xlabel("Columns")
plt.ylabel("Columns")
# We have too many null values. Looking at the number of rows, it is not feasible to drop all rows containing null values.
cir[cir["Stage"].isna()]
# Rows with a missing target value also have many other missing values, so let's drop the rows with a missing target value.
# drop rows with missing target(stage) value
cir.dropna(axis=0, subset=["Stage"], inplace=True)
cir["Stage"] = cir["Stage"].astype(int)
# checking data distributions for numerical features
# age doesn't have missing values, used here just to balance subplot
missing_nums = [
["Cholesterol", "Copper", "Alk_Phos", "SGOT"],
["Tryglicerides", "Platelets", "Prothrombin", "Age"],
]
fig, axs = plt.subplots(2, 4, figsize=(10, 5))
plt.figure(figsize=(12, 8))
for i in range(2):
for j in range(4):
sns.kdeplot(cir, x=missing_nums[i][j], ax=axs[i, j])
axs[i, j].label_outer()
axs[i, j].set_xlabel("", fontsize=15)
axs[i, j].set_title(missing_nums[i][j])
plt.show()
# All the numerical features with missing values follow a roughly Gaussian distribution with some degree of skewness, so let's use the median to impute the missing values in those columns.
# Also, some outliers can be seen; to reduce the skewness and the impact of outliers we will use a log transformation (natural log, this being a medical dataset)
# impute numerical values with median
numerical_columns = cir.select_dtypes(include=(["int64", "float64"])).columns
for c in numerical_columns:
if c != "Stage":
cir[c] = np.log(cir[c]) # log transformation to remove outliers
cir[c].fillna(cir[c].median(), inplace=True)
categorical_columns = cir.select_dtypes(include=("object")).columns
for c in categorical_columns:
cir[c].fillna(cir[c].mode().values[0], inplace=True)
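# As a quick optional sanity check, confirm that no missing values remain after imputation
print("Remaining missing values:", cir.isna().sum().sum())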
# # Data Visualization
sns.countplot(x="Stage", hue="Sex", data=cir)
plt.title("Sex wise cirrhosis stages")
# It's clear that, in this dataset, females are more prone to liver cirrhosis at every stage
# One of the most common causes of liver cirrhosis is chronic alcohol consumption, and women may be more susceptible to alcohol-related liver damage than men. This is because women tend to have lower levels of an enzyme called alcohol dehydrogenase, which is involved in metabolizing alcohol. As a result, women may experience more severe liver damage from the same amount of alcohol consumption as men.
# But since the proportion of cases in each stage is almost the same for both males and females, this feature might not be the best one to predict cirrhosis stage.
sns.countplot(x="Stage", hue="Drug", data=cir)
# # Column Encoding
# We have some categorical columns with string values. Many ML algorithms cannot work with such values, so let's use appropriate encoding for them. Normally we would use two types of encoding, as below.
# One hot Encoding : Drugs,Sex
# Label Encoding : Status,Ascites, Hepatomegaly, Spiders, Edema
# But, even in the column "Sex", some ordinal relation can be seen, so it is also encoded using a label encoder.
# Also, the drug is label encoded based on its strength.
# A placebo is a substance or treatment that has no therapeutic effect. Placebo drugs are often used in medical research to help determine the effectiveness of a new treatment by comparing it to the placebo.
# Whereas D-penicillamine is used to balance excess body parameters like copper.
cir["Sex"] = cir["Sex"].replace({"M": 0, "F": 1})
cir["Ascites"] = cir["Ascites"].replace({"N": 0, "Y": 1})
cir["Drug"] = cir["Drug"].replace({"D-penicillamine": 1, "Placebo": 0})
cir["Hepatomegaly"] = cir["Hepatomegaly"].replace({"N": 0, "Y": 1})
cir["Spiders"] = cir["Spiders"].replace({"N": 0, "Y": 1})
cir["Edema"] = cir["Edema"].replace({"N": 0, "Y": 1, "S": -1})
cir["Status"] = cir["Status"].replace({"C": 0, "CL": 1, "D": -1})
# cir['Stage'] = cir['Stage'].replace({2:0,3:1,4:2})
# cir['Stage'] = cir['Stage'].replace({1:0,2:1,3:2,4:3})
# ## Dataset balancing
# First, we will check number of samples in our dataset for each stage of liver cirrhosis.
def plot_target_count(data):
"""Function to plot number of samples in each class"""
plt.figure(figsize=(4, 4))
counts = data.value_counts()
plt.bar(x=counts.index, height=counts, color="orange")
plt.xticks(rotation=90)
plt.title("Target Counts")
plt.xlabel("Stages")
plt.ylabel("Counts")
plt.xticks([2, 3, 4], ["Stage 2", "Stage 3", "Stage 4"], rotation=0)
plt.show()
plot_target_count(cir["Stage"])
# The dataset is imbalanced, with fewer rows for the earlier stages of cirrhosis. We will perform oversampling using SMOTE (Synthetic Minority Oversampling Technique).
# Before performing oversampling we will split the existing dataset into train and test sets, using an 80-20 split.
# separate features and target
X = cir.drop(["Status", "Stage"], axis=1)
y = cir["Stage"]
X_train, X_test, y_train, y_test = train_test_split(
X, y, test_size=0.2, random_state=23
)
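# Note: this split is not stratified by Stage; an optional stratified variant (not the one used below) would be:
# X_train, X_test, y_train, y_test = train_test_split(
#     X, y, test_size=0.2, random_state=23, stratify=y
# )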
# Using Smote for upsampling
sm = SMOTE(k_neighbors=3)
X_train, y_train = sm.fit_resample(X_train, y_train)
X_train.shape, y_train.shape
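# Quick optional check that SMOTE balanced the classes
print(y_train.value_counts())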
# # Feature Scaling
# As we saw before, the numerical columns have varying ranges, so we will perform feature scaling on them.
# We will experiment with two kinds of scalers: StandardScaler and MinMaxScaler.
# Using standard Scaler
scaler = StandardScaler()
scaler.fit(X_train)
X_train = scaler.transform(X_train)
X_test = scaler.transform(X_test)
# # Using MinMax Scaler
# scaler = MinMaxScaler()
# scaler.fit(X_train)
# X_train = scaler.transform(X_train)
# X_test = scaler.transform(X_test)
# ## Feature Selection
# Our dataset has a comparatively large number of features considering the number of records. So, in order to
# select the best features, we will use a statistical method called the ANOVA F-test.
from sklearn.feature_selection import SelectKBest
from sklearn.feature_selection import f_classif
best = SelectKBest(score_func=f_classif, k=12)
best.fit(X_train, y_train)
best_cols = best.get_support(indices=True)
best_cols
best_train = X_train[:, best_cols]
best_test = X_test[:, best_cols]
scores = best.scores_
pvalues = best.pvalues_
cols = X.columns
for idx in range(X.shape[1]):
print(cols[idx], scores[idx])
sns.barplot(y=cols, x=scores, palette="flare")
plt.title("Feature Importance with ANOVA TEST")
plt.xlabel("F-Score")
plt.ylabel("Features")
# ## Setup MLFlow
# !pip install mlflow
# !databricks configure --host https://community.cloud.databricks.com/
# %env ML_FLOW_EXP=<place your Databricks experiment URI string here>
# import os
# import mlflow
# mlflow.set_tracking_uri("databricks")
# mlflow.set_experiment(os.environ['ML_FLOW_EXP'])
# function to track experiment results with mlflow
def mlflow_track(model, exp_name, train_scores, test_scores):
# set name of experiment
params = model.get_params() #
with mlflow.start_run(run_name=exp_name):
mlflow.log_metric("recall_train", train_scores[0])
mlflow.log_metric("precision_train", train_scores[1])
mlflow.log_metric("f1_train", train_scores[2])
mlflow.log_metric("accuracy_train", train_scores[3])
mlflow.log_metric("recall_test", test_scores[0])
mlflow.log_metric("precision_test", test_scores[1])
mlflow.log_metric("f1_test", test_scores[2])
mlflow.log_metric("accuracy_test", test_scores[3])
mlflow.log_params(params)
from sklearn.metrics import (
f1_score,
accuracy_score,
confusion_matrix,
recall_score,
precision_score,
classification_report,
)
def evaluate_model(y_train, y_pred_train, y_test, y_pred_test):
print("-" * 30, "FOR TRAIN SET", "-" * 30)
recall_train = recall_score(y_train, y_pred_train, average="macro")
precision_train = precision_score(y_train, y_pred_train, average="macro")
f1_train = f1_score(y_train, y_pred_train, average="macro")
accuracy_train = accuracy_score(y_train, y_pred_train)
rep_train = classification_report(y_train, y_pred_train)
print(rep_train)
print("-" * 30, "FOR TEST SET", "-" * 30)
recall_test = recall_score(y_test, y_pred_test, average="macro")
precision_test = precision_score(y_test, y_pred_test, average="macro")
f1_test = f1_score(y_test, y_pred_test, average="macro")
accuracy_test = accuracy_score(y_test, y_pred_test)
rep_test = classification_report(y_test, y_pred_test)
print(rep_test)
return (recall_train, precision_train, f1_train, accuracy_train), (
recall_test,
precision_test,
f1_test,
accuracy_test,
)
from sklearn.tree import DecisionTreeClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import (
RandomForestClassifier,
AdaBoostClassifier,
BaggingClassifier,
)
from xgboost import XGBClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
log_reg = LogisticRegression(max_iter=500, random_state=28)
log_reg.fit(best_train, y_train)
log_pred = log_reg.predict(best_test)
log_pred_train = log_reg.predict(best_train)
train_scores, test_scores = evaluate_model(y_train, log_pred_train, y_test, log_pred)
# mlflow_track(log_reg,"logistic_reg-500iter",train_scores,test_scores)
dt = DecisionTreeClassifier(criterion="log_loss", max_depth=50, random_state=20)
dt.fit(best_train, y_train)
dt_pred = dt.predict(best_test)
dt_pred_train = dt.predict(best_train)
train_scores, test_scores = evaluate_model(y_train, dt_pred_train, y_test, dt_pred)
# mlflow_track(log_reg,"dtree",train_scores,test_scores)
rf = RandomForestClassifier(
n_estimators=120,
criterion="log_loss",
max_depth=10,
min_samples_leaf=4,
random_state=20,
)
rf.fit(X_train, y_train)
rf_pred = rf.predict(X_test)
rf_pred_train = rf.predict(X_train)
train_scores, test_scores = evaluate_model(y_train, rf_pred_train, y_test, rf_pred)
# mlflow_track(rf,"rf-log_loss",train_scores,test_scores)
rf = RandomForestClassifier(
n_estimators=25,
criterion="log_loss",
max_depth=25,
min_samples_leaf=4,
random_state=20,
)
# rf = RandomForestClassifier()
rf.fit(X_train, y_train)
rf_pred = rf.predict(X_test)
rf_pred_train = rf.predict(X_train)
train_scores, test_scores = evaluate_model(y_train, rf_pred_train, y_test, rf_pred)
# mlflow_track(rf,"rf2",train_scores,test_scores)
bag = BaggingClassifier(rf, bootstrap_features=True, random_state=22)
bag.fit(X_train, y_train)
bag_pred = bag.predict(X_test)
bag_pred_train = bag.predict(X_train)
train_scores, test_scores = evaluate_model(y_train, bag_pred_train, y_test, bag_pred)
# mlflow_track(bag,"bagging-rf",train_scores,test_scores)
bag_fs = BaggingClassifier(rf, bootstrap_features=True, random_state=22)
bag_fs.fit(best_train, y_train)
bag_fs_pred = bag_fs.predict(best_test)
bag_fs_pred_train = bag_fs.predict(best_train)
train_scores, test_scores = evaluate_model(
y_train, bag_fs_pred_train, y_test, bag_fs_pred
)
# mlflow_track(bag,"bagging-rf featSel",train_scores,test_scores)
ada = AdaBoostClassifier(n_estimators=50, random_state=20)
ada.fit(X_train, y_train)
ada_pred = ada.predict(X_test)
ada_pred_train = ada.predict(X_train)
train_scores, test_scores = evaluate_model(y_train, ada_pred_train, y_test, ada_pred)
# mlflow_track(ada,"ada boost1",train_scores,test_scores)
ada = AdaBoostClassifier(random_state=28)
ada.fit(best_train, y_train)
ada_pred = ada.predict(best_test)
ada_pred_train = ada.predict(best_train)
train_scores, test_scores = evaluate_model(y_train, ada_pred_train, y_test, ada_pred)
# mlflow_track(ada,"ada boost2- featSel",train_scores,test_scores)
svm = SVC(kernel="rbf")
svm.fit(best_train, y_train)
svm_pred = svm.predict(best_test)
svm_pred_train = svm.predict(best_train)
train_scores, test_scores = evaluate_model(y_train, svm_pred_train, y_test, svm_pred)
# mlflow_track(svm,"svm-feat sel",train_scores,test_scores)
|
/fsx/loubna/kaggle_data/kaggle-code-data/data/0129/545/129545025.ipynb
|
cirrhosis-prediction-dataset
|
fedesoriano
|
[{"Id": 129545025, "ScriptId": 38519800, "ParentScriptVersionId": NaN, "ScriptLanguageId": 9, "AuthorUserId": 4911991, "CreationDate": "05/14/2023 17:37:15", "VersionNumber": 2.0, "Title": "Liver Cirrhosis Stage Prediction", "EvaluationDate": "05/14/2023", "IsChange": true, "TotalLines": 392.0, "LinesInsertedFromPrevious": 4.0, "LinesChangedFromPrevious": 0.0, "LinesUnchangedFromPrevious": 388.0, "LinesInsertedFromFork": NaN, "LinesDeletedFromFork": NaN, "LinesChangedFromFork": NaN, "LinesUnchangedFromFork": NaN, "TotalVotes": 0}]
|
[{"Id": 185715053, "KernelVersionId": 129545025, "SourceDatasetVersionId": 2492225}]
|
[{"Id": 2492225, "DatasetId": 1508604, "DatasourceVersionId": 2534803, "CreatorUserId": 6402661, "LicenseName": "Data files \u00a9 Original Authors", "CreationDate": "08/02/2021 15:36:59", "VersionNumber": 2.0, "Title": "Cirrhosis Prediction Dataset", "Slug": "cirrhosis-prediction-dataset", "Subtitle": "18 clinical features for predicting liver cirrhosis stage", "Description": "### Similar Datasets\n\n- Hepatitis C Dataset: [LINK](https://www.kaggle.com/fedesoriano/hepatitis-c-dataset)\n- Body Fat Prediction Dataset: [LINK](https://www.kaggle.com/fedesoriano/body-fat-prediction-dataset)\n- Stroke Prediction Dataset: [LINK](https://www.kaggle.com/fedesoriano/stroke-prediction-dataset)\n- Wind Speed Prediction Dataset: [LINK](https://www.kaggle.com/datasets/fedesoriano/wind-speed-prediction-dataset)\n- Spanish Wine Quality Dataset: [LINK](https://www.kaggle.com/datasets/fedesoriano/spanish-wine-quality-dataset)\n\n\n### Context\n\nCirrhosis is a late stage of scarring (fibrosis) of the liver caused by many forms of liver diseases and conditions, such as hepatitis and chronic alcoholism. The following data contains the information collected from the Mayo Clinic trial in primary biliary cirrhosis (PBC) of the liver conducted between 1974 and 1984. A description of the clinical background for the trial and the covariates recorded here is in Chapter 0, especially Section 0.2 of Fleming and Harrington, Counting\nProcesses and Survival Analysis, Wiley, 1991. A more extended discussion can be found in Dickson, et al., Hepatology 10:1-7 (1989) and in Markus, et al., N Eng J of Med 320:1709-13 (1989). \n\nA total of 424 PBC patients, referred to Mayo Clinic during that ten-year interval, met eligibility criteria for the randomized placebo-controlled trial of the drug D-penicillamine. The first 312 cases in the dataset participated in the randomized trial and contain largely complete data. The additional 112 cases did not participate in the clinical trial but consented to have basic measurements recorded and to be followed for survival. Six of those cases were lost to follow-up shortly after diagnosis, so the data here are on an additional 106 cases as well as the 312 randomized participants.\n\n\n### Attribute Information\n\n1) ID: unique identifier\n2) N\\_Days: number of days between registration and the earlier of death, transplantation, or study analysis time in July 1986\n3) Status: status of the patient C (censored), CL (censored due to liver tx), or D (death)\n4) Drug: type of drug D-penicillamine or placebo\n5) Age: age in [days]\n6) Sex: M (male) or F (female)\n7) Ascites: presence of ascites N (No) or Y (Yes)\n8) Hepatomegaly: presence of hepatomegaly N (No) or Y (Yes)\n9) Spiders: presence of spiders N (No) or Y (Yes)\n10) Edema: presence of edema N (no edema and no diuretic therapy for edema), S (edema present without diuretics, or edema resolved by diuretics), or Y (edema despite diuretic therapy)\n11) Bilirubin: serum bilirubin in [mg/dl]\n12) Cholesterol: serum cholesterol in [mg/dl]\n13) Albumin: albumin in [gm/dl]\n14) Copper: urine copper in [ug/day]\n15) Alk\\_Phos: alkaline phosphatase in [U/liter]\n16) SGOT: SGOT in [U/ml]\n17) Triglycerides: triglicerides in [mg/dl]\n18) Platelets: platelets per cubic [ml/1000]\n19) Prothrombin: prothrombin time in seconds [s]\n20) Stage: histologic stage of disease (1, 2, 3, or 4)\n\n### Acknowledgements\n\nThe dataset can be found in appendix D of:\n> Fleming, T.R. and Harrington, D.P. (1991) Counting Processes and Survival Analysis. 
Wiley Series in Probability and Mathematical Statistics: Applied Probability and Statistics, John Wiley and Sons Inc., New York.\n\nIf you want to cite this data:\n> fedesoriano. (August 2021). Cirrhosis Prediction Dataset. Retrieved [Date Retrieved] from https://www.kaggle.com/fedesoriano/cirrhosis-prediction-dataset.", "VersionNotes": "v2", "TotalCompressedBytes": 0.0, "TotalUncompressedBytes": 0.0}]
|
[{"Id": 1508604, "CreatorUserId": 6402661, "OwnerUserId": 6402661.0, "OwnerOrganizationId": NaN, "CurrentDatasetVersionId": 2492225.0, "CurrentDatasourceVersionId": 2534803.0, "ForumId": 1528351, "Type": 2, "CreationDate": "08/02/2021 14:47:06", "LastActivityDate": "08/02/2021", "TotalViews": 96634, "TotalDownloads": 8794, "TotalVotes": 127, "TotalKernels": 24}]
|
[{"Id": 6402661, "UserName": "fedesoriano", "DisplayName": "fedesoriano", "RegisterDate": "12/18/2020", "PerformanceTier": 4}]
|
|
[{"cirrhosis-prediction-dataset/cirrhosis.csv": {"column_names": "[\"ID\", \"N_Days\", \"Status\", \"Drug\", \"Age\", \"Sex\", \"Ascites\", \"Hepatomegaly\", \"Spiders\", \"Edema\", \"Bilirubin\", \"Cholesterol\", \"Albumin\", \"Copper\", \"Alk_Phos\", \"SGOT\", \"Tryglicerides\", \"Platelets\", \"Prothrombin\", \"Stage\"]", "column_data_types": "{\"ID\": \"int64\", \"N_Days\": \"int64\", \"Status\": \"object\", \"Drug\": \"object\", \"Age\": \"int64\", \"Sex\": \"object\", \"Ascites\": \"object\", \"Hepatomegaly\": \"object\", \"Spiders\": \"object\", \"Edema\": \"object\", \"Bilirubin\": \"float64\", \"Cholesterol\": \"float64\", \"Albumin\": \"float64\", \"Copper\": \"float64\", \"Alk_Phos\": \"float64\", \"SGOT\": \"float64\", \"Tryglicerides\": \"float64\", \"Platelets\": \"float64\", \"Prothrombin\": \"float64\", \"Stage\": \"float64\"}", "info": "<class 'pandas.core.frame.DataFrame'>\nRangeIndex: 418 entries, 0 to 417\nData columns (total 20 columns):\n # Column Non-Null Count Dtype \n--- ------ -------------- ----- \n 0 ID 418 non-null int64 \n 1 N_Days 418 non-null int64 \n 2 Status 418 non-null object \n 3 Drug 312 non-null object \n 4 Age 418 non-null int64 \n 5 Sex 418 non-null object \n 6 Ascites 312 non-null object \n 7 Hepatomegaly 312 non-null object \n 8 Spiders 312 non-null object \n 9 Edema 418 non-null object \n 10 Bilirubin 418 non-null float64\n 11 Cholesterol 284 non-null float64\n 12 Albumin 418 non-null float64\n 13 Copper 310 non-null float64\n 14 Alk_Phos 312 non-null float64\n 15 SGOT 312 non-null float64\n 16 Tryglicerides 282 non-null float64\n 17 Platelets 407 non-null float64\n 18 Prothrombin 416 non-null float64\n 19 Stage 412 non-null float64\ndtypes: float64(10), int64(3), object(7)\nmemory usage: 65.4+ KB\n", "summary": "{\"ID\": {\"count\": 418.0, \"mean\": 209.5, \"std\": 120.81045760473994, \"min\": 1.0, \"25%\": 105.25, \"50%\": 209.5, \"75%\": 313.75, \"max\": 418.0}, \"N_Days\": {\"count\": 418.0, \"mean\": 1917.7822966507176, \"std\": 1104.6729923907321, \"min\": 41.0, \"25%\": 1092.75, \"50%\": 1730.0, \"75%\": 2613.5, \"max\": 4795.0}, \"Age\": {\"count\": 418.0, \"mean\": 18533.351674641148, \"std\": 3815.8450545514697, \"min\": 9598.0, \"25%\": 15644.5, \"50%\": 18628.0, \"75%\": 21272.5, \"max\": 28650.0}, \"Bilirubin\": {\"count\": 418.0, \"mean\": 3.2208133971291866, \"std\": 4.407506384141372, \"min\": 0.3, \"25%\": 0.8, \"50%\": 1.4, \"75%\": 3.4, \"max\": 28.0}, \"Cholesterol\": {\"count\": 284.0, \"mean\": 369.51056338028167, \"std\": 231.944545037874, \"min\": 120.0, \"25%\": 249.5, \"50%\": 309.5, \"75%\": 400.0, \"max\": 1775.0}, \"Albumin\": {\"count\": 418.0, \"mean\": 3.4974401913875592, \"std\": 0.4249716057796193, \"min\": 1.96, \"25%\": 3.2425, \"50%\": 3.53, \"75%\": 3.77, \"max\": 4.64}, \"Copper\": {\"count\": 310.0, \"mean\": 97.64838709677419, \"std\": 85.61391990897141, \"min\": 4.0, \"25%\": 41.25, \"50%\": 73.0, \"75%\": 123.0, \"max\": 588.0}, \"Alk_Phos\": {\"count\": 312.0, \"mean\": 1982.6557692307692, \"std\": 2140.388824451761, \"min\": 289.0, \"25%\": 871.5, \"50%\": 1259.0, \"75%\": 1980.0, \"max\": 13862.4}, \"SGOT\": {\"count\": 312.0, \"mean\": 122.55634615384616, \"std\": 56.699524863313016, \"min\": 26.35, \"25%\": 80.6, \"50%\": 114.7, \"75%\": 151.9, \"max\": 457.25}, \"Tryglicerides\": {\"count\": 282.0, \"mean\": 124.70212765957447, \"std\": 65.14863866583947, \"min\": 33.0, \"25%\": 84.25, \"50%\": 108.0, \"75%\": 151.0, \"max\": 598.0}, \"Platelets\": {\"count\": 407.0, \"mean\": 257.02457002457004, 
\"std\": 98.32558454996843, \"min\": 62.0, \"25%\": 188.5, \"50%\": 251.0, \"75%\": 318.0, \"max\": 721.0}, \"Prothrombin\": {\"count\": 416.0, \"mean\": 10.731730769230769, \"std\": 1.0220003464104215, \"min\": 9.0, \"25%\": 10.0, \"50%\": 10.6, \"75%\": 11.1, \"max\": 18.0}, \"Stage\": {\"count\": 412.0, \"mean\": 3.0242718446601944, \"std\": 0.8820420919404809, \"min\": 1.0, \"25%\": 2.0, \"50%\": 3.0, \"75%\": 4.0, \"max\": 4.0}}", "examples": "{\"ID\":{\"0\":1,\"1\":2,\"2\":3,\"3\":4},\"N_Days\":{\"0\":400,\"1\":4500,\"2\":1012,\"3\":1925},\"Status\":{\"0\":\"D\",\"1\":\"C\",\"2\":\"D\",\"3\":\"D\"},\"Drug\":{\"0\":\"D-penicillamine\",\"1\":\"D-penicillamine\",\"2\":\"D-penicillamine\",\"3\":\"D-penicillamine\"},\"Age\":{\"0\":21464,\"1\":20617,\"2\":25594,\"3\":19994},\"Sex\":{\"0\":\"F\",\"1\":\"F\",\"2\":\"M\",\"3\":\"F\"},\"Ascites\":{\"0\":\"Y\",\"1\":\"N\",\"2\":\"N\",\"3\":\"N\"},\"Hepatomegaly\":{\"0\":\"Y\",\"1\":\"Y\",\"2\":\"N\",\"3\":\"Y\"},\"Spiders\":{\"0\":\"Y\",\"1\":\"Y\",\"2\":\"N\",\"3\":\"Y\"},\"Edema\":{\"0\":\"Y\",\"1\":\"N\",\"2\":\"S\",\"3\":\"S\"},\"Bilirubin\":{\"0\":14.5,\"1\":1.1,\"2\":1.4,\"3\":1.8},\"Cholesterol\":{\"0\":261.0,\"1\":302.0,\"2\":176.0,\"3\":244.0},\"Albumin\":{\"0\":2.6,\"1\":4.14,\"2\":3.48,\"3\":2.54},\"Copper\":{\"0\":156.0,\"1\":54.0,\"2\":210.0,\"3\":64.0},\"Alk_Phos\":{\"0\":1718.0,\"1\":7394.8,\"2\":516.0,\"3\":6121.8},\"SGOT\":{\"0\":137.95,\"1\":113.52,\"2\":96.1,\"3\":60.63},\"Tryglicerides\":{\"0\":172.0,\"1\":88.0,\"2\":55.0,\"3\":92.0},\"Platelets\":{\"0\":190.0,\"1\":221.0,\"2\":151.0,\"3\":183.0},\"Prothrombin\":{\"0\":12.2,\"1\":10.6,\"2\":12.0,\"3\":10.3},\"Stage\":{\"0\":4.0,\"1\":3.0,\"2\":4.0,\"3\":4.0}}"}}]
| true | 1 |
<start_data_description><data_path>cirrhosis-prediction-dataset/cirrhosis.csv:
<column_names>
['ID', 'N_Days', 'Status', 'Drug', 'Age', 'Sex', 'Ascites', 'Hepatomegaly', 'Spiders', 'Edema', 'Bilirubin', 'Cholesterol', 'Albumin', 'Copper', 'Alk_Phos', 'SGOT', 'Tryglicerides', 'Platelets', 'Prothrombin', 'Stage']
<column_types>
{'ID': 'int64', 'N_Days': 'int64', 'Status': 'object', 'Drug': 'object', 'Age': 'int64', 'Sex': 'object', 'Ascites': 'object', 'Hepatomegaly': 'object', 'Spiders': 'object', 'Edema': 'object', 'Bilirubin': 'float64', 'Cholesterol': 'float64', 'Albumin': 'float64', 'Copper': 'float64', 'Alk_Phos': 'float64', 'SGOT': 'float64', 'Tryglicerides': 'float64', 'Platelets': 'float64', 'Prothrombin': 'float64', 'Stage': 'float64'}
<dataframe_Summary>
{'ID': {'count': 418.0, 'mean': 209.5, 'std': 120.81045760473994, 'min': 1.0, '25%': 105.25, '50%': 209.5, '75%': 313.75, 'max': 418.0}, 'N_Days': {'count': 418.0, 'mean': 1917.7822966507176, 'std': 1104.6729923907321, 'min': 41.0, '25%': 1092.75, '50%': 1730.0, '75%': 2613.5, 'max': 4795.0}, 'Age': {'count': 418.0, 'mean': 18533.351674641148, 'std': 3815.8450545514697, 'min': 9598.0, '25%': 15644.5, '50%': 18628.0, '75%': 21272.5, 'max': 28650.0}, 'Bilirubin': {'count': 418.0, 'mean': 3.2208133971291866, 'std': 4.407506384141372, 'min': 0.3, '25%': 0.8, '50%': 1.4, '75%': 3.4, 'max': 28.0}, 'Cholesterol': {'count': 284.0, 'mean': 369.51056338028167, 'std': 231.944545037874, 'min': 120.0, '25%': 249.5, '50%': 309.5, '75%': 400.0, 'max': 1775.0}, 'Albumin': {'count': 418.0, 'mean': 3.4974401913875592, 'std': 0.4249716057796193, 'min': 1.96, '25%': 3.2425, '50%': 3.53, '75%': 3.77, 'max': 4.64}, 'Copper': {'count': 310.0, 'mean': 97.64838709677419, 'std': 85.61391990897141, 'min': 4.0, '25%': 41.25, '50%': 73.0, '75%': 123.0, 'max': 588.0}, 'Alk_Phos': {'count': 312.0, 'mean': 1982.6557692307692, 'std': 2140.388824451761, 'min': 289.0, '25%': 871.5, '50%': 1259.0, '75%': 1980.0, 'max': 13862.4}, 'SGOT': {'count': 312.0, 'mean': 122.55634615384616, 'std': 56.699524863313016, 'min': 26.35, '25%': 80.6, '50%': 114.7, '75%': 151.9, 'max': 457.25}, 'Tryglicerides': {'count': 282.0, 'mean': 124.70212765957447, 'std': 65.14863866583947, 'min': 33.0, '25%': 84.25, '50%': 108.0, '75%': 151.0, 'max': 598.0}, 'Platelets': {'count': 407.0, 'mean': 257.02457002457004, 'std': 98.32558454996843, 'min': 62.0, '25%': 188.5, '50%': 251.0, '75%': 318.0, 'max': 721.0}, 'Prothrombin': {'count': 416.0, 'mean': 10.731730769230769, 'std': 1.0220003464104215, 'min': 9.0, '25%': 10.0, '50%': 10.6, '75%': 11.1, 'max': 18.0}, 'Stage': {'count': 412.0, 'mean': 3.0242718446601944, 'std': 0.8820420919404809, 'min': 1.0, '25%': 2.0, '50%': 3.0, '75%': 4.0, 'max': 4.0}}
<dataframe_info>
RangeIndex: 418 entries, 0 to 417
Data columns (total 20 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 ID 418 non-null int64
1 N_Days 418 non-null int64
2 Status 418 non-null object
3 Drug 312 non-null object
4 Age 418 non-null int64
5 Sex 418 non-null object
6 Ascites 312 non-null object
7 Hepatomegaly 312 non-null object
8 Spiders 312 non-null object
9 Edema 418 non-null object
10 Bilirubin 418 non-null float64
11 Cholesterol 284 non-null float64
12 Albumin 418 non-null float64
13 Copper 310 non-null float64
14 Alk_Phos 312 non-null float64
15 SGOT 312 non-null float64
16 Tryglicerides 282 non-null float64
17 Platelets 407 non-null float64
18 Prothrombin 416 non-null float64
19 Stage 412 non-null float64
dtypes: float64(10), int64(3), object(7)
memory usage: 65.4+ KB
<some_examples>
{'ID': {'0': 1, '1': 2, '2': 3, '3': 4}, 'N_Days': {'0': 400, '1': 4500, '2': 1012, '3': 1925}, 'Status': {'0': 'D', '1': 'C', '2': 'D', '3': 'D'}, 'Drug': {'0': 'D-penicillamine', '1': 'D-penicillamine', '2': 'D-penicillamine', '3': 'D-penicillamine'}, 'Age': {'0': 21464, '1': 20617, '2': 25594, '3': 19994}, 'Sex': {'0': 'F', '1': 'F', '2': 'M', '3': 'F'}, 'Ascites': {'0': 'Y', '1': 'N', '2': 'N', '3': 'N'}, 'Hepatomegaly': {'0': 'Y', '1': 'Y', '2': 'N', '3': 'Y'}, 'Spiders': {'0': 'Y', '1': 'Y', '2': 'N', '3': 'Y'}, 'Edema': {'0': 'Y', '1': 'N', '2': 'S', '3': 'S'}, 'Bilirubin': {'0': 14.5, '1': 1.1, '2': 1.4, '3': 1.8}, 'Cholesterol': {'0': 261.0, '1': 302.0, '2': 176.0, '3': 244.0}, 'Albumin': {'0': 2.6, '1': 4.14, '2': 3.48, '3': 2.54}, 'Copper': {'0': 156.0, '1': 54.0, '2': 210.0, '3': 64.0}, 'Alk_Phos': {'0': 1718.0, '1': 7394.8, '2': 516.0, '3': 6121.8}, 'SGOT': {'0': 137.95, '1': 113.52, '2': 96.1, '3': 60.63}, 'Tryglicerides': {'0': 172.0, '1': 88.0, '2': 55.0, '3': 92.0}, 'Platelets': {'0': 190.0, '1': 221.0, '2': 151.0, '3': 183.0}, 'Prothrombin': {'0': 12.2, '1': 10.6, '2': 12.0, '3': 10.3}, 'Stage': {'0': 4.0, '1': 3.0, '2': 4.0, '3': 4.0}}
<end_description>
| 5,318 | 0 | 7,640 | 5,318 |
129532785
|
<jupyter_start><jupyter_text>UFO Sightings
# Context
This dataset contains over 80,000 reports of UFO sightings over the last century.
# Content
There are two versions of this dataset: scrubbed and complete. The complete data includes entries where the location of the sighting was not found or blank (0.8146%) or the time is erroneous or blank (8.0237%). Since the reports date back to the 20th century, some older data might be obscured. Data contains city, state, time, description, and duration of each sighting.
# Inspiration
* What areas of the country are most likely to have UFO sightings?
* Are there any trends in UFO sightings over time? Do they tend to be clustered or seasonal?
* Do clusters of UFO sightings correlate with landmarks, such as airports or government research centers?
* What are the most common UFO descriptions?
# Acknowledgement
This dataset was scraped, geolocated, and time standardized from NUFORC data by Sigmond Axel [here](https://github.com/planetsig/ufo-reports).
Kaggle dataset identifier: ufo-sightings
<jupyter_script># ## **Description**
# #### This project analyzed the occurrence, location and prevalence of UFO sightings in the United States over the last century. For further analysis we will also see if there are any correlations between UFO sightings and US general elections.
# ## Summary of the question(s) sought and the answers
# 1. How does term number affect the number of UFO sightings? Is this also dependent on the political party? ANSWER: Term length does not appear to affect the number of UFO sightings.
# 2. Are UFO sightings increasing over time? ANSWER: Yes, they are increasing over time.
# 3. If UFO sightings do increase over time, how is that correlated with the turnout rate for elections? ANSWER: UFO sightings are increasing over time and there is a correlation with an increase in total votes cast in each election.
# 4. In states that vote Republican are there more or fewer UFO sightings? ANSWER: In Republican voting states there are fewer UFO sightings; Democratic voting states have an average of 3 sightings for every 2 there are in a Republican voting state.
# ## Application of this knowledge:
# Our project can be used to understand the trends in UFO sightings over the last century such as the frequency of sightings, how the language used to describe them has changed, and when sightings are most likely to occur. On top of this, it shows how UFO sightings have correlated with other world events, like elections and their turnouts during the last half century.
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
from wordcloud import WordCloud, STOPWORDS
import warnings
warnings.filterwarnings("ignore")
# Input data files are available in the read-only "../input/" directory
# For example, running this (by clicking run or pressing Shift+Enter) will list all files under the input directory
import os
for dirname, _, filenames in os.walk("/kaggle/input"):
for filename in filenames:
print(os.path.join(dirname, filename))
# You can write up to 20GB to the current directory (/kaggle/working/) that gets preserved as output when you create a version using "Save & Run All"
# You can also write temporary files to /kaggle/temp/, but they won't be saved outside of the current session
df = pd.read_csv(
"/kaggle/input/ufo-sightings/complete.csv",
low_memory=False,
warn_bad_lines=False,
error_bad_lines=False,
)
df.head()
df.describe()
df.info()
# Date
df["datetime"] = df["datetime"].apply(lambda x: x.replace("24:00", "00:00"))
df["datetime"] = pd.to_datetime(df["datetime"])
# Convert the seconds column to a numeric datatype
df["duration (seconds)"] = pd.to_numeric(df["duration (seconds)"], errors="coerce")
# create a separate year column
df["year"] = df["datetime"].dt.year
df.shape
# Number of null values per column
df.isnull().sum()
def missing(df):
missing_number = df.isnull().sum().sort_values(ascending=False)
missing_percent = (df.isnull().sum() / df.isnull().count()).sort_values(
ascending=False
) * 100
missing_percent = round(missing_percent, 2)
missing_values = pd.concat(
[missing_number, missing_percent],
axis=1,
keys=["Missing_Number", "Missing_Percent"],
)
return missing_values
(df.isnull().sum() / df.isnull().count()).sort_values(ascending=False) * 100
missing(df)
# ## Shape of UFO - 1
# Shape of UFO - 1
appear = pd.DataFrame(df["shape"].value_counts().head(4)).reset_index()
ax = sns.barplot(x="index", y="shape", data=appear).set(
title="How most extraterrestrial species loves to design space vehicles:"
)
# ## Shape of UFO - 2
# Shape of UFO - 2
appear = pd.DataFrame(df["shape"].value_counts()).reset_index()
appear = appear[(appear["shape"] >= 250) & (appear["shape"] <= 1120)]
ax = sns.barplot(x="index", y="shape", data=appear).set(
title="Also patented shape of Extraterrestrial species vehicle:"
)
# ## UFO - Most Visited Places on Earth
#
# UFO - Most Visited Place
plt.subplots(figsize=(18, 8))
expl = (0.1, 0.05, 0.2, 0.4, 0.8)
colors = ["cornflowerblue", "lightcoral", "lightgreen", "gold", "black"]
labels = ["United States", "Canada", "United Kingdom", "Australia", "Germany"]
df["country"].value_counts().plot(
kind="pie",
fontsize=12,
colors=colors,
explode=expl,
figsize=(8, 8),
autopct="%.1f%%",
pctdistance=1.15,
labels=None,
)
plt.legend(labels=labels, loc="upper right")
plt.title("UFO Sightings by Country", size=24)
plt.xticks(rotation=45, fontsize=15)
plt.show()
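# Note (added): the legend labels above are hardcoded in an assumed order, so if the
# ordering of df["country"].value_counts() ever changes, the legend will no longer match
# the wedges. Deriving the labels from value_counts().index (mapping the country codes
# to full names) would be more robust.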
# ## UFO - Most Visited States in the US
#
usa_stats = df["country"] == "us"
usdf = df[usa_stats]
state_stats = usdf.state.value_counts()
state_index = state_stats.index
state_values = state_stats.values
plt.figure(figsize=(15, 8))
plt.title("UFO Sightings by US State", fontsize=24)
plt.xlabel("State", fontsize=14)
plt.ylabel("Number of reports", fontsize=14)
plt.xticks(rotation=60, size=12)
state_plot = sns.barplot(x=state_index[:60], y=state_values[:60], palette="RdBu_r")
# ## UFO - The years with the most sightings
#
# UFO - The years with the most sightings
yr_plot = df["year"].value_counts().head(10).sort_index()
plt.plot(yr_plot.index, yr_plot.values, marker="o", color="b")
plt.title(
    "The 10 years with the most reported UFO sightings (according to available data)"
)
plt.xlabel("Year")
plt.ylabel("Count")
# ## UFO - Average sighting duration in the last 10 years of the data
#
# UFO - Average sighting duration in the last 10 years of the data
time_stay = df[["year", "duration (seconds)"]][df["duration (seconds)"] >= 20]
time_stay_sec = pd.DataFrame(
time_stay.groupby("year")["duration (seconds)"].mean().tail(10)
)
time_stay_min = time_stay_sec["duration (seconds)"] / 60
plt.plot(time_stay_min)
plt.ylabel("In minutes")
plt.xlabel("year")
plt.title(
    "Average reported sighting duration (in minutes) by year, for the last 10 years in the data"
)
# ## Let's see how UFO reports have changed in the last 70 years
years_data = df["year"].value_counts()
years_index = years_data.index
years_values = years_data.values
plt.figure(figsize=(15, 8))
plt.xticks(rotation=60)
plt.title("UFO Sightings since 1943", fontsize=18)
plt.xlabel("Year", fontsize=14)
plt.ylabel("Number of reports", fontsize=14)
years_plot = sns.barplot(x=years_index[:70], y=years_values[:70], palette="RdPu_r")
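# A quick numeric check of the upward trend claimed in question 2 of the summary above.
# This cell is not part of the original analysis: it fits a simple linear trend to the
# yearly report counts with numpy (imported here because the notebook does not use it elsewhere).
import numpy as np
yearly_counts = df["year"].value_counts().sort_index()
slope, intercept = np.polyfit(yearly_counts.index.values, yearly_counts.values, deg=1)
print(f"Linear-fit slope: roughly {slope:.1f} additional reports per year on average")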
# # Can you guess in which month the most sightings have been reported?
#
order = [
"January",
"February",
"March",
"April",
"May",
"June",
"July",
"August",
"September",
"October",
"November",
"December",
]
df["month"] = df["datetime"].dt.month_name() # turns month numbers into month names
month_data = df["month"].value_counts()
month_index = month_data.index
month_values = month_data.values
plt.figure(figsize=(15, 8))
plt.title("UFO Sightings by Month", fontsize=18)
plt.xlabel("Month", fontsize=14)
plt.ylabel("Number of reports", fontsize=14)
month_plot = sns.barplot(
x=month_index[:60], y=month_values[:60], palette="Blues", order=order
)
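# The application note above also mentions when sightings are most likely to occur.
# As a small extra sketch (not in the original notebook), we can look at the reported
# hour of day, reusing the datetime column parsed earlier.
df["hour"] = df["datetime"].dt.hour
hour_counts = df["hour"].value_counts().sort_index()
plt.figure(figsize=(15, 5))
plt.title("UFO Sightings by Hour of Day", fontsize=18)
plt.xlabel("Hour", fontsize=14)
plt.ylabel("Number of reports", fontsize=14)
hour_plot = sns.barplot(x=hour_counts.index, y=hour_counts.values, palette="Greens")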
# ## I wonder what people say the most when they see a UFO
cmt = [item for item in df.comments.dropna()]
cmt = " ".join(cmt)
plt.figure(figsize=(18, 12))
wordcloud = WordCloud(
background_color="whitesmoke", width=2000, height=1000, stopwords=None
).generate(cmt)
plt.imshow(wordcloud, interpolation="nearest", aspect="auto")
plt.axis("off")
plt.savefig("wordcloud.png")
plt.title("Comment Wordcloud", size=40)
plt.show()
|
/fsx/loubna/kaggle_data/kaggle-code-data/data/0129/532/129532785.ipynb
|
ufo-sightings
| null |
[{"Id": 129532785, "ScriptId": 38511783, "ParentScriptVersionId": NaN, "ScriptLanguageId": 9, "AuthorUserId": 12220567, "CreationDate": "05/14/2023 15:35:32", "VersionNumber": 1.0, "Title": "UFO Sightings", "EvaluationDate": "05/14/2023", "IsChange": true, "TotalLines": 187.0, "LinesInsertedFromPrevious": 187.0, "LinesChangedFromPrevious": 0.0, "LinesUnchangedFromPrevious": 0.0, "LinesInsertedFromFork": NaN, "LinesDeletedFromFork": NaN, "LinesChangedFromFork": NaN, "LinesUnchangedFromFork": NaN, "TotalVotes": 14}]
|
[{"Id": 185688057, "KernelVersionId": 129532785, "SourceDatasetVersionId": 793053}]
|
[{"Id": 793053, "DatasetId": 388, "DatasourceVersionId": 814842, "CreatorUserId": 998023, "LicenseName": "Unknown", "CreationDate": "11/13/2019 19:45:57", "VersionNumber": 2.0, "Title": "UFO Sightings", "Slug": "ufo-sightings", "Subtitle": "Reports of unidentified flying object reports in the last century", "Description": "# Context\n\nThis dataset contains over 80,000 reports of UFO sightings over the last century. \n\n# Content\n\nThere are two versions of this dataset: scrubbed and complete. The complete data includes entries where the location of the sighting was not found or blank (0.8146%) or have an erroneous or blank time (8.0237%). Since the reports date back to the 20th century, some older data might be obscured. Data contains city, state, time, description, and duration of each sighting.\n\n# Inspiration\n\n* What areas of the country are most likely to have UFO sightings?\n* Are there any trends in UFO sightings over time? Do they tend to be clustered or seasonal?\n* Do clusters of UFO sightings correlate with landmarks, such as airports or government research centers?\n* What are the most common UFO descriptions? \n\n# Acknowledgement\n\nThis dataset was scraped, geolocated, and time standardized from NUFORC data by Sigmond Axel [here](https://github.com/planetsig/ufo-reports).", "VersionNotes": "Fix data", "TotalCompressedBytes": 0.0, "TotalUncompressedBytes": 0.0}]
|
[{"Id": 388, "CreatorUserId": 270995, "OwnerUserId": NaN, "OwnerOrganizationId": 222.0, "CurrentDatasetVersionId": 793053.0, "CurrentDatasourceVersionId": 814842.0, "ForumId": 1968, "Type": 2, "CreationDate": "11/17/2016 03:50:44", "LastActivityDate": "02/06/2018", "TotalViews": 248610, "TotalDownloads": 35131, "TotalVotes": 620, "TotalKernels": 194}]
| null |
| false | 0 | 2,340 | 14 | 2,627 | 2,340 |
||
129532110
|
<jupyter_start><jupyter_text>Breast Cancer Wisconsin (Diagnostic) Data Set
Features are computed from a digitized image of a fine needle aspirate (FNA) of a breast mass. They describe characteristics of the cell nuclei present in the image.
n the 3-dimensional space is that described in: [K. P. Bennett and O. L. Mangasarian: "Robust Linear Programming Discrimination of Two Linearly Inseparable Sets", Optimization Methods and Software 1, 1992, 23-34].
This database is also available through the UW CS ftp server:
ftp ftp.cs.wisc.edu
cd math-prog/cpo-dataset/machine-learn/WDBC/
Also can be found on UCI Machine Learning Repository: https://archive.ics.uci.edu/ml/datasets/Breast+Cancer+Wisconsin+%28Diagnostic%29
Attribute Information:
1) ID number
2) Diagnosis (M = malignant, B = benign)
3-32)
Ten real-valued features are computed for each cell nucleus:
a) radius (mean of distances from center to points on the perimeter)
b) texture (standard deviation of gray-scale values)
c) perimeter
d) area
e) smoothness (local variation in radius lengths)
f) compactness (perimeter^2 / area - 1.0)
g) concavity (severity of concave portions of the contour)
h) concave points (number of concave portions of the contour)
i) symmetry
j) fractal dimension ("coastline approximation" - 1)
The mean, standard error and "worst" or largest (mean of the three
largest values) of these features were computed for each image,
resulting in 30 features. For instance, field 3 is Mean Radius, field
13 is Radius SE, field 23 is Worst Radius.
All feature values are recoded with four significant digits.
Missing attribute values: none
Class distribution: 357 benign, 212 malignant
Kaggle dataset identifier: breast-cancer-wisconsin-data
<jupyter_script>import numpy as np # linear algebra
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)
# Input data files are available in the read-only "../input/" directory
# For example, running this (by clicking run or pressing Shift+Enter) will list all files under the input directory
import os
for dirname, _, filenames in os.walk("/kaggle/input"):
for filename in filenames:
print(os.path.join(dirname, filename))
# You can write up to 20GB to the current directory (/kaggle/working/) that gets preserved as output when you create a version using "Save & Run All"
# You can also write temporary files to /kaggle/temp/, but they won't be saved outside of the current session
# # Part 1: EDA
# Libraries
import matplotlib.pyplot as plt
import seaborn as sns
import pandas as pd
import numpy as np
# No warnings (seaborn)
import warnings
warnings.filterwarnings("ignore")
# ## 1. Dataset
data = pd.read_csv("/kaggle/input/breast-cancer-wisconsin-data/data.csv")
data.head()
data.columns
data.columns[1:12]
# We will consider only the 'mean' features
data = data[data.columns[1:12]]
data.head()
data.info()
# ## 2. Data processing
# Null elements
data.isnull().sum()
data.isnull().any()
# Encoding of the target:
data["diagnosis"].replace(to_replace=["B", "M"], value=[0, 1], inplace=True)
# # 3. Exploratory Data Analysis
# ## 3.1. Countplot
#
# Malignant vs. benign
print("Number of malignant samples: ", len(data[data["diagnosis"] == 1]))
print("Number of benign samples: ", len(data[data["diagnosis"] == 0]))
print(
"Malignant percentage: ", len(data[data["diagnosis"] == 1]) / len(data) * 100, "%"
)
print("Benign percentage: ", len(data[data["diagnosis"] == 0]) / len(data) * 100, "%")
sns.countplot(x="diagnosis", data=data)
plt.title("Benign vs. Malignant")
plt.ylabel(None)
plt.show()
features = data[list(data.columns)[1:12]]
features.columns
# Equivalently (faster)
features = data.drop("diagnosis", axis=1)
features.columns
# ## 3.2. Histplot
def histplot_continuous(x, data, hue):
sns.histplot(
data=data,
x=x,
hue=hue,
kde=True,
bins=15,
palette="tab10",
multiple="stack",
line_kws={"lw": 5},
)
plt.figure(figsize=(18, 6))
n_row = 2
n_col = 5
hue = "diagnosis"
for i in range(len(features.columns)):
plt.subplot(n_row, n_col, i + 1)
histplot_continuous(x=list(features.columns)[i], data=data, hue=hue)
plt.title(list(features.columns)[i])
plt.ylabel(None)
plt.xlabel(None)
plt.subplots_adjust(wspace=0.3, hspace=0.3)
# ## 3.3. Violin plot, Boxplot, Swarmplot (std features)
# The data needs to be reshaped to long format first:
feat_std = (features - features.mean()) / (features.std())
data_std = pd.concat([data["diagnosis"], feat_std], axis=1)
data_std = pd.melt(
data_std, id_vars="diagnosis", var_name="features", value_name="value"
)
data_std
# Violinplot
plt.figure(figsize=(16, 5))
sns.violinplot(
x="features", y="value", hue="diagnosis", data=data_std, split=True, inner="quart"
)
plt.xticks(rotation=45)
plt.xlabel(None)
# Box plot
plt.figure(figsize=(16, 5))
sns.boxplot(data=data_std, x="features", y="value", hue="diagnosis")
plt.xticks(rotation=45)
plt.xlabel(None)
# The boxplot seems to show a clear separation between the values associated with 'benign' vs. 'malignant' for most features. This was also seen in the violinplot, but it is particularly highlighted here.
# Swarmplot
plt.figure(figsize=(16, 5))
sns.swarmplot(x="features", y="value", hue="diagnosis", data=data_std)
plt.xticks(rotation=45)
plt.xlabel(None)
# ## 3.4. Correlations
# Correlation matrix
correlation_matrix = features.corr().round(2)
plt.figure(figsize=(8, 5))
sns.heatmap(data=correlation_matrix, annot=True, cmap="coolwarm", vmin=-1, vmax=1)
# The correlation coefficient ranges from -1 to 1:
# * If the value is close to +1, there is a strong positive correlation between the two variables
# * If the value is close to -1, the variables have a strong negative correlation
# An important point in selecting features for any model is to check for **multicollinearity**.
# Several of these features are highly correlated with each other, so we should probably avoid using those pairs together.
sns.clustermap(
correlation_matrix, annot=True, vmin=-1, vmax=1, cmap="coolwarm", figsize=(8, 8)
)
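# As a complementary, numeric check of the multicollinearity mentioned above (not in the
# original notebook), we can compute variance inflation factors. This sketch assumes
# statsmodels is available in the environment; a VIF much larger than 10 is usually
# taken as a sign of strong collinearity.
from statsmodels.stats.outliers_influence import variance_inflation_factor
from statsmodels.tools.tools import add_constant
X_vif = add_constant(features)
vif = pd.Series(
    [variance_inflation_factor(X_vif.values, i) for i in range(X_vif.shape[1])],
    index=X_vif.columns,
)
print(vif.drop("const").sort_values(ascending=False))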
# Correlation of the features with the target
plt.figure(figsize=(13, 5))
features.corrwith(data["diagnosis"]).plot(
kind="bar",
grid=True,
color="cornflowerblue",
title='Correlation of the features with "diagnosis"',
)
plt.xticks(rotation=45)
# **Insights:**
# * fractal_dimension_mean is the least correlated with the target variable (as also seen previously)
# * all other features have a significant correlation with the target variable
# Note: since the features are strongly correlated with each other, this plot is a good guide for deciding which ones to keep and which ones to drop.
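# Building on the two notes above, here is a small sketch (not in the original notebook)
# of one way to prune redundant features: for every pair with an absolute correlation
# above an illustrative threshold of 0.9, keep the feature that is more correlated with
# the target and mark the other one for dropping.
corr_abs = features.corr().abs()
target_corr = features.corrwith(data["diagnosis"]).abs()
to_drop = set()
for i, col_i in enumerate(corr_abs.columns):
    for col_j in corr_abs.columns[i + 1 :]:
        if corr_abs.loc[col_i, col_j] > 0.9:
            # drop whichever feature of the pair is less informative about the target
            to_drop.add(col_i if target_corr[col_i] < target_corr[col_j] else col_j)
print("Candidate features to drop:", sorted(to_drop))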
# ## 3.5. Pairplot
sns.pairplot(data=data, hue="diagnosis", palette="tab10", corner=True)
# # Part 2: PCA, t-SNE
from sklearn.decomposition import PCA
from sklearn.manifold import TSNE
import matplotlib.pyplot as plt
import seaborn as sns
import pandas as pd
import numpy as np
# No warnings (seaborn)
import warnings
warnings.filterwarnings("ignore")
# ## 1. PCA
# ### 1.1. PCA fit
# Reuse the standardized features (feat_std) computed in Part 1
feat_std
# Define and fit the PCA
pca = PCA(n_components=len(features.columns))
pca.fit(feat_std)
# ### 1.2. PCA weights
# Principal component vectors
# Every row is a component vector (each element is the weight associated with a particular feature)
pca_components = pd.DataFrame(data=pca.components_, columns=features.columns)
pca_components
# View as 'columns = PC vectors':
pca_components.T
# Let's visualize the weights
plt.figure(figsize=(18, 10))
n_row = 2
n_col = 5
for i in range(len(features.columns)):
plt.subplot(n_row, n_col, i + 1)
sns.barplot(data=pca_components[i : (i + 1)], color="cornflowerblue")
plt.title(f"PC{i+1}")
plt.xticks(rotation=90)
plt.subplots_adjust(hspace=0.8)
# ### 1.3. Explained variance
# PCA explained variance ratio
explained_var = pca.explained_variance_ratio_
indeces = np.arange(1, len(explained_var) + 1)
plt.figure(figsize=(10, 5))
sns.barplot(x=indeces, y=explained_var, color="cornflowerblue")
plt.title("Percentage of explained variance for each feature")
plt.ylabel("Proportion of Variance Explained")
plt.xlabel("Principal Component")
# Cumulative explained variance
plt.figure(figsize=(10, 5))
sns.barplot(x=indeces, y=explained_var.cumsum(), color="cornflowerblue")
plt.title("Cumulative percentage of explained variance for each feature")
plt.ylabel("Proportion of Variance Explained")
plt.xlabel("Principal Component")
# A good percentage is around 80%
# How many PCs should we consider?
plt.figure(figsize=(10, 5))
plt.title("Cumulative percentage of explained variance for each feature")
ax = sns.barplot(x=indeces, y=explained_var.cumsum(), color="cornflowerblue")
ax.bar_label(ax.containers[0])
plt.plot(indeces - 1, explained_var.cumsum(), "bo-", label="Cumulative sum")
plt.axhline(
    0.8,
    color="red",
    linestyle="--",
    label="80% explained-variance target",
)
plt.ylabel("Proportion of Variance Explained")
plt.xlabel("Principal Component")
plt.legend(loc="lower right")
# ### 1.4. Transformed dataset
# Transformed dataset (feat_std)
pca_data = pca.fit_transform(feat_std)
pca_data = pd.DataFrame(
data=pca_data,
columns=[
"PC 1",
"PC 2",
"PC 3",
"PC 4",
"PC 5",
"PC 6",
"PC 7",
"PC 8",
"PC 9",
"PC 10",
],
)
pca_data.head()
# Plot 2 PCs
plt.figure(figsize=(10, 6))
sns.scatterplot(
x="PC 1",
y="PC 2",
hue="diagnosis",
data=pd.concat([pca_data, data["diagnosis"]], axis=1),
)
plt.title("Original dataset represented with 2 PCs")
# ## 4. t-SNE
# ### 4.1. t-SNE fit
# Define and fit t-SNE directly in 2 dimensions
tsne = TSNE(n_components=2)
tsne = tsne.fit_transform(feat_std)
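# Note (added): t-SNE is stochastic, so the embedding below changes between runs unless a
# random_state is passed to TSNE; the cell above does not fix one.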
# ### 4.2. Transformed dataset
#
# Transformed dataset (feat_std)
tsne_data = pd.DataFrame(data=tsne, columns=["tSNE 1", "tSNE 2"])
tsne_data.head()
# Plot 2 t-SNEs
plt.figure(figsize=(10, 6))
sns.scatterplot(
x="tSNE 1",
y="tSNE 2",
hue="diagnosis",
data=pd.concat([tsne_data, data["diagnosis"]], axis=1),
)
plt.title("Original dataset represented with 2 t-SNEs")
# ## 5. PCA vs. t-SNE
# ### 5.1. Dataset 2-D visualization
# Let's plot PCA and t-SNE
plt.figure(figsize=(15, 4))
# PC
plt.subplot(1, 2, 1)
sns.scatterplot(
x="PC 1",
y="PC 2",
hue="diagnosis",
data=pd.concat([pca_data, data["diagnosis"]], axis=1),
)
plt.title("2D PCA")
# t-SNE
plt.subplot(1, 2, 2)
sns.scatterplot(
x="tSNE 1",
y="tSNE 2",
hue="diagnosis",
data=pd.concat([tsne_data, data["diagnosis"]], axis=1),
)
plt.title("2D t-SNE")
# ### 5.2. (Extra) Logistic regression comparison
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
coefficients = {}
### PCA
###----------------------------------------------------------------------------------------------------------------------------
# Train-test split
X_train, x_test, y_train, y_test = train_test_split(
pca_data, data["diagnosis"], test_size=1 / 3, shuffle=True, random_state=42
)
# Model fitting
model = LogisticRegression(
penalty="l2", C=100, max_iter=300, solver="liblinear", random_state=42
)
model.fit(X_train, y_train)
# # Coefficients
coefficients["pca"] = (
model,
np.append(model.coef_[0], model.intercept_[0]),
) # w(coef).x + b(intercept)
### t-SNE
###----------------------------------------------------------------------------------------------------------------------------
# Train-test split
x_train, x_test, y_train, y_test = train_test_split(
tsne_data, data["diagnosis"], test_size=1 / 3, shuffle=True, random_state=42
)
# Model fitting
model = LogisticRegression(
penalty="l2", C=100, max_iter=300, solver="liblinear", random_state=42
)
model.fit(x_train, y_train)
# Coefficients
coefficients["tsne"] = (model, np.append(model.coef_[0], model.intercept_[0]))
## Plot logistic regression boundary for PCs data
###--------------------------------------------------------------------------
# Add the 'diagnosis' column to 'pca_data'
pca_data["diagnosis"] = data["diagnosis"]
# Coefficients
b = coefficients["pca"][1][2]
w1, w2 = coefficients["pca"][1][0:2]
# Calculate the intercept and gradient of the decision boundary
c = -b / w2
m = -w1 / w2
# Plot limits
xmin, xmax = np.min(pca_data["PC 1"]) - 2, np.max(pca_data["PC 1"]) + 2
ymin, ymax = np.min(pca_data["PC 2"]) - 2, np.max(pca_data["PC 2"]) + 2
xd = np.array([xmin, xmax])
yd = m * xd + c
c
# Plot
plt.figure(figsize=(10, 6))
plt.plot(xd, yd, "k", lw=1, ls="--")
plt.xlim([xmin, xmax])
plt.ylim([ymin, ymax])
sns.scatterplot(x="PC 1", y="PC 2", hue="diagnosis", data=pca_data)
plt.fill_between(xd, yd, ymin, color="tab:orange", alpha=0.2)
plt.fill_between(xd, yd, ymax, color="tab:blue", alpha=0.2)
plt.title("Logistic regression with PCs")
## Plot logistic regression boundary for t-SNE data
###--------------------------------------------------------------------------
# Add the 'diagnosis' column to 'tsne_data'
tsne_data["diagnosis"] = data["diagnosis"]
# Coefficients
b = coefficients["tsne"][1][2]
w1, w2 = coefficients["tsne"][1][0:2]
# Calculate the intercept and gradient of the decision boundary
c = -b / w2
m = -w1 / w2
# Plot limits
xmin, xmax = np.min(tsne_data["tSNE 1"]) - 2, np.max(tsne_data["tSNE 1"]) + 2
ymin, ymax = np.min(tsne_data["tSNE 2"]) - 2, np.max(tsne_data["tSNE 2"]) + 2
xd = np.array([xmin, xmax])
yd = m * xd + c
# Plot
plt.figure(figsize=(10, 6))
plt.plot(xd, yd, "k", lw=1, ls="--")
plt.xlim([xmin, xmax])
plt.ylim([ymin, ymax])
sns.scatterplot(x="tSNE 1", y="tSNE 2", hue="diagnosis", data=tsne_data)
plt.fill_between(
xd,
yd,
ymin,
color="tab:blue",
alpha=0.2,
)
plt.fill_between(xd, yd, ymax, color="tab:orange", alpha=0.2)
plt.title("Logistic regression with tSNE")
# # Part 3: Classification
# Evaluation Procedures
from sklearn.model_selection import train_test_split
from sklearn.model_selection import cross_val_score
from sklearn.model_selection import StratifiedKFold
# Classification methods
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.ensemble import AdaBoostClassifier
from xgboost import XGBClassifier
# Evaluation Metrics
from sklearn.metrics import precision_score
from sklearn.metrics import recall_score
from sklearn.metrics import accuracy_score
from sklearn.metrics import roc_curve
from sklearn.metrics import roc_auc_score
from sklearn.metrics import f1_score
random_state = 42
# Data and target definition
x = data.drop("diagnosis", axis=1)
y = data["diagnosis"]
# Split the data in train and test
x_train, x_test, y_train, y_test = train_test_split(
x, y, test_size=1 / 3, shuffle=True, random_state=random_state
)
# For the ensemble models let's use:
n_estimators = 50
# Models definition
models = {
"Ridge Unregularized": LogisticRegression(
penalty="l2", C=1e5, max_iter=300, solver="liblinear", random_state=random_state
),
"Ridge": LogisticRegression(
penalty="l2", C=100, max_iter=300, solver="liblinear", random_state=random_state
),
"Lasso": LogisticRegression(
penalty="l1", C=100, max_iter=300, solver="liblinear", random_state=random_state
),
"kNN_5": KNeighborsClassifier(n_neighbors=5),
"kNN_100": KNeighborsClassifier(n_neighbors=100),
"Decision Tree": DecisionTreeClassifier(max_depth=None, random_state=random_state),
"Random Forest": RandomForestClassifier(
n_estimators=n_estimators, max_depth=3, random_state=random_state
),
"Ada Boost": AdaBoostClassifier(
DecisionTreeClassifier(max_depth=3),
n_estimators=n_estimators,
random_state=random_state,
),
"XGB": XGBClassifier(eval_metric="mlogloss", random_state=random_state),
}
# ### 3.2. Fitting
#
# For every model let's collect:
# * roc_result : information used to plot the ROC curves afterwards
# * accuracy_mean : cross-validation result (mean accuracy)
# * accuracy_std : cross-validation result (accuracy standard deviation)
# * precision
# * recall
# * f1
# * auc
crossvalidation = StratifiedKFold(n_splits=10, shuffle=True, random_state=random_state)
roc_results = {}
model = []
accuracy_mean = []
accuracy_std = []
accuracy_test = []
precision = []
recall = []
f1 = []
auc = []
for model_name in models:
# Model to evaluate
current_model = models[model_name]
# Crossvalidation evaluation
cv_score = cross_val_score(current_model, x, y, cv=crossvalidation)
# Accuracy mean and standard deviation
accuracy_mean.append(np.average(cv_score))
accuracy_std.append(np.std(cv_score))
# Model fit
current_model.fit(x_train, y_train)
# Deterministic prediction (threshold=0.5) and probabilistic prediction
y_pred = current_model.predict(x_test)
y_pred_prob = current_model.predict_proba(x_test)
# ROC, AUC
fpr, tpr, thresholds = roc_curve(
y_true=y_test, y_score=y_pred_prob[:, 1], pos_label=1
)
roc_auc = roc_auc_score(y_true=y_test, y_score=y_pred_prob[:, 1])
auc.append(roc_auc)
# Store the information to plot the ROC curves afterwards
roc_results[model_name] = (fpr, tpr, thresholds, roc_auc)
# Other evaluation metrics
accuracy_test.append(accuracy_score(y_test, y_pred))
precision.append(precision_score(y_test, y_pred))
recall.append(recall_score(y_test, y_pred))
f1.append(f1_score(y_test, y_pred))
model.append(model_name)
print("Done with", model_name)
# ### 3.3. Results
# Let's summarize the results
results = {}
results["Model"] = model
results["Accuracy_CV_mean"] = accuracy_mean
results["Accuracy_CV_std"] = accuracy_std
results["Accuracy_test"] = accuracy_test
results["Precision"] = precision
results["Recall"] = recall
results["F1"] = f1
results = pd.DataFrame(data=results)
results
# Best results highlighted
results.set_index("Model").style.highlight_max(color="lightgreen", axis=0)
# Let's plot it
plt.figure(figsize=(18, 3))
for i in range(0, 4):
plt.subplot(1, 4, i + 1)
colors = [
"grey" if (x < max(results.iloc[:, 3 + i])) else "red"
for x in results.iloc[:, 3 + i]
]
sns.barplot(x=results["Model"], y=results.iloc[:, 3 + i], palette=colors)
plt.title(results.columns[3 + i])
plt.xticks(rotation=90)
plt.ylabel(None)
plt.xlabel(None)
# Let's plot it sorted
plt.figure(figsize=(18, 3))
for i in range(0, 4):
plt.subplot(1, 4, i + 1)
names = results["Model"]
values = results.iloc[:, 3 + i]
df = pd.DataFrame(data=pd.concat([names, values], axis=1))
df = df.sort_values(by=df.columns[1], ascending=False)
colors = ["grey" if (x < max(df.iloc[:, 1])) else "red" for x in df.iloc[:, 1]]
sns.barplot(x=df.iloc[:, 0], y=df.iloc[:, 1], palette=colors)
plt.title(results.columns[3 + i])
plt.xticks(rotation=90)
plt.ylabel(None)
plt.xlabel(None)
# ### 3.4. ROC
#
# ROC_results
# For every method there are : fpr, tpr, thresholds, roc_auc
pd.DataFrame(data=roc_results)
# For convenience, let's transpose them (to sort them)
roc_info = pd.DataFrame(data=roc_results).T
roc_info.columns = ["fpr", "tpr", "tresholds", "roc_auc"]
# Let's sort them based on AUC (to simplify the plot)
roc_info = roc_info.sort_values(by=["roc_auc"], ascending=True)
# Plot
plt.figure(figsize=(10, 8))
for i in range(0, len(models)):
    plt.plot(
        roc_info.iloc[i, 0],
        roc_info.iloc[i, 1],
        label="AUC = %.4f, %s" % (roc_info.iloc[i, 3], roc_info.index[i]),
    )
plt.title("Receiver Operating Characteristic (ROC)")
plt.legend(loc="lower right")
plt.plot([0, 1], [0, 1], "r--")
plt.ylabel("True Positive Rate (TPR)")
plt.xlabel("False Positive Rate (FPR)")
|
/fsx/loubna/kaggle_data/kaggle-code-data/data/0129/532/129532110.ipynb
|
breast-cancer-wisconsin-data
| null |
[{"Id": 129532110, "ScriptId": 38506005, "ParentScriptVersionId": NaN, "ScriptLanguageId": 9, "AuthorUserId": 11652162, "CreationDate": "05/14/2023 15:29:30", "VersionNumber": 1.0, "Title": "Breast-Cancer \ud83d\udd25", "EvaluationDate": "05/14/2023", "IsChange": true, "TotalLines": 580.0, "LinesInsertedFromPrevious": 580.0, "LinesChangedFromPrevious": 0.0, "LinesUnchangedFromPrevious": 0.0, "LinesInsertedFromFork": NaN, "LinesDeletedFromFork": NaN, "LinesChangedFromFork": NaN, "LinesUnchangedFromFork": NaN, "TotalVotes": 0}]
|
[{"Id": 185686290, "KernelVersionId": 129532110, "SourceDatasetVersionId": 408}]
|
[{"Id": 408, "DatasetId": 180, "DatasourceVersionId": 408, "CreatorUserId": 711301, "LicenseName": "CC BY-NC-SA 4.0", "CreationDate": "09/25/2016 10:49:04", "VersionNumber": 2.0, "Title": "Breast Cancer Wisconsin (Diagnostic) Data Set", "Slug": "breast-cancer-wisconsin-data", "Subtitle": "Predict whether the cancer is benign or malignant", "Description": "Features are computed from a digitized image of a fine needle aspirate (FNA) of a breast mass. They describe characteristics of the cell nuclei present in the image. \nn the 3-dimensional space is that described in: [K. P. Bennett and O. L. Mangasarian: \"Robust Linear Programming Discrimination of Two Linearly Inseparable Sets\", Optimization Methods and Software 1, 1992, 23-34]. \n\nThis database is also available through the UW CS ftp server: \nftp ftp.cs.wisc.edu \ncd math-prog/cpo-dataset/machine-learn/WDBC/\n\nAlso can be found on UCI Machine Learning Repository: https://archive.ics.uci.edu/ml/datasets/Breast+Cancer+Wisconsin+%28Diagnostic%29\n\nAttribute Information:\n\n1) ID number \n2) Diagnosis (M = malignant, B = benign) \n3-32) \n\nTen real-valued features are computed for each cell nucleus: \n\na) radius (mean of distances from center to points on the perimeter) \nb) texture (standard deviation of gray-scale values) \nc) perimeter \nd) area \ne) smoothness (local variation in radius lengths) \nf) compactness (perimeter^2 / area - 1.0) \ng) concavity (severity of concave portions of the contour) \nh) concave points (number of concave portions of the contour) \ni) symmetry \nj) fractal dimension (\"coastline approximation\" - 1)\n\nThe mean, standard error and \"worst\" or largest (mean of the three\nlargest values) of these features were computed for each image,\nresulting in 30 features. For instance, field 3 is Mean Radius, field\n13 is Radius SE, field 23 is Worst Radius.\n\nAll feature values are recoded with four significant digits.\n\nMissing attribute values: none\n\nClass distribution: 357 benign, 212 malignant", "VersionNotes": "This updated dataset has column names added", "TotalCompressedBytes": 125204.0, "TotalUncompressedBytes": 125204.0}]
|
[{"Id": 180, "CreatorUserId": 711301, "OwnerUserId": NaN, "OwnerOrganizationId": 7.0, "CurrentDatasetVersionId": 408.0, "CurrentDatasourceVersionId": 408.0, "ForumId": 1547, "Type": 2, "CreationDate": "09/19/2016 20:27:05", "LastActivityDate": "02/06/2018", "TotalViews": 1744898, "TotalDownloads": 301790, "TotalVotes": 3191, "TotalKernels": 2628}]
| null |
| false | 0 | 6,035 | 0 | 6,561 | 6,035 |
||
129532557
|
# import numpy as np # linear algebra
# import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)
# Input data files are available in the read-only "../input/" directory
# For example, running this (by clicking run or pressing Shift+Enter) will list all files under the input directory
# import os
# for dirname, _, filenames in os.walk('/kaggle/input'):
# for filename in filenames:
# print(os.path.join(dirname, filename))
# You can write up to 20GB to the current directory (/kaggle/working/) that gets preserved as output when you create a version using "Save & Run All"
# You can also write temporary files to /kaggle/temp/, but they won't be saved outside of the current session
import cv2
import numpy as np
import matplotlib.pyplot as plt
import os
from tqdm.notebook import tqdm
from random import shuffle
import torch
from torch import nn
import math
from glob import glob
import sys
import shutil
BGR_classes = {
"A": [255, 0, 0],
"B": [128, 0, 0],
"C": [128, 128, 128],
"D": [0, 0, 255],
"E": [0, 0, 0],
}
a = 0
b = 0
c = 0
d = 0
e = 0
er = 0
tot = 0
root = "/kaggle/input"
MASK_NAMES = sorted(glob(root + "/cell-data/masks/*"))
# print(MASK_NAMES)
for idx in range(len(MASK_NAMES)):
# print("___")
mask_path = MASK_NAMES[idx]
mask = cv2.imread(mask_path)
for i in range(mask.shape[0]):
for j in range(mask.shape[1]):
tot += 1
if (mask[i][j] == BGR_classes["A"]).all():
a += 1
elif (mask[i][j] == BGR_classes["B"]).all():
b += 1
elif (mask[i][j] == BGR_classes["C"]).all():
c += 1
elif (mask[i][j] == BGR_classes["D"]).all():
d += 1
elif (mask[i][j] == BGR_classes["E"]).all():
e += 1
else:
print("ERROR", i, j, mask[i][j])
er += 1
print(tot, a, b, c, d, e, er)
print(tot / tot, a / tot, b / tot, c / tot, d / tot, e / tot, er / tot)
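# The per-pixel Python loop above is correct but very slow. An equivalent vectorized
# count with numpy (a sketch reusing the same BGR_classes; pixels matching none of the
# classes are simply not counted here) looks like this:
counts = {name: 0 for name in BGR_classes}
total = 0
for mask_path in MASK_NAMES:
    mask = cv2.imread(mask_path)
    total += mask.shape[0] * mask.shape[1]
    for name, bgr in BGR_classes.items():
        # boolean HxW map of pixels equal to this class colour, summed to a count
        counts[name] += int((mask == np.array(bgr)).all(axis=-1).sum())
print(total, counts)
print({name: n / total for name, n in counts.items()})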
|
/fsx/loubna/kaggle_data/kaggle-code-data/data/0129/532/129532557.ipynb
| null | null |
[{"Id": 129532557, "ScriptId": 36006189, "ParentScriptVersionId": NaN, "ScriptLanguageId": 9, "AuthorUserId": 4755951, "CreationDate": "05/14/2023 15:33:17", "VersionNumber": 1.0, "Title": "mask_devision", "EvaluationDate": "05/14/2023", "IsChange": true, "TotalLines": 74.0, "LinesInsertedFromPrevious": 74.0, "LinesChangedFromPrevious": 0.0, "LinesUnchangedFromPrevious": 0.0, "LinesInsertedFromFork": NaN, "LinesDeletedFromFork": NaN, "LinesChangedFromFork": NaN, "LinesUnchangedFromFork": NaN, "TotalVotes": 0}]
| null | null | null | null |
| false | 0 | 656 | 0 | 656 | 656 |
||
129532987
|
<jupyter_start><jupyter_text>Body performance Data
## dataset
This is data that confirmed the grade of performance with age and some exercise performance data.
## columns
**data shape : (13393, 12)**
- age : 20 ~64
- gender : F,M
- height_cm : (If you want to convert to feet, divide by 30.48)
- weight_kg
- body fat_%
- diastolic : diastolic blood pressure (min)
- systolic : systolic blood pressure (min)
- gripForce
- sit and bend forward_cm
- sit-ups counts
- broad jump_cm
- class : A,B,C,D ( A: best) / stratified
### Source
[link](https://www.bigdata-culture.kr/bigdata/user/data_market/detail.do?id=ace0aea7-5eee-48b9-b616-637365d665c1) (Korea Sports Promotion Foundation)
Some post-processing and filtering has done from the raw data.
Kaggle dataset identifier: body-performance-data
<jupyter_script>import numpy as np # linear algebra
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)
import matplotlib.pyplot as plt
import seaborn as sns
# Input data files are available in the read-only "../input/" directory
# For example, running this (by clicking run or pressing Shift+Enter) will list all files under the input directory
import os
for dirname, _, filenames in os.walk("/kaggle/input"):
for filename in filenames:
print(os.path.join(dirname, filename))
# You can write up to 20GB to the current directory (/kaggle/working/) that gets preserved as output when you create a version using "Save & Run All"
# You can also write temporary files to /kaggle/temp/, but they won't be saved outside of the current session
df = pd.read_csv("/kaggle/input/body-performance-data/bodyPerformance.csv")
df.head(100)
df.describe(include="all")
df.info()
print(df.columns)
print(df.index)
# DIMENSION
print("Dimension of Original data : ", np.ndim(df))
print("Shape of Original data : ", np.shape(df))
print("Type : ", type(df))
print("DataTypes --->\n", df.dtypes)
plt.figure(figsize=(15, 10))
sns.heatmap(df.corr(), annot=True, center=0)
plt.show()
sns.pairplot(df)
df.hist(bins=30, figsize=(15, 10))
DF = df.copy()
DF.head(10)
DF.isnull().sum()
DF[DF.duplicated()]
DF.drop_duplicates(inplace=True)
# # Changing the categorical to numerical values
DF["class"].replace(["A", "B", "C", "D"], [0, 1, 2, 3], inplace=True)
DF["gender"].replace(["M", "F"], [0, 1], inplace=True)
DF.head(10)
plt.figure(figsize=(15, 10))
sns.heatmap(DF.corr(), annot=True, center=0)  # correlations on the encoded copy (class/gender now numeric)
plt.show()
from sklearn.preprocessing import StandardScaler
def normalize(X):
print("Mean and Standard Deviation Before")
print(X.mean(axis=0), X.std(axis=0))
sc = StandardScaler()
XScaled = sc.fit_transform(X)
print("Mean and Standard Deviation After")
print(XScaled.mean(axis=0).round(4), XScaled.std(axis=0))
return XScaled
sns.set(rc={"figure.figsize": (21, 10)})
sns.boxplot(data=DF.iloc[:, :])
features = [
"age",
"gender",
"height_cm",
"weight_kg",
"body fat_%",
"diastolic",
"systolic",
"gripForce",
"sit and bend forward_cm",
"sit-ups counts",
"broad jump_cm",
]
X = DF[features].values
Y = DF["class"].values
from sklearn.metrics import confusion_matrix
from mlxtend.plotting import plot_confusion_matrix
from sklearn.linear_model import LogisticRegression
from sklearn import metrics
from sklearn.model_selection import StratifiedKFold
print("*************Normalization/Standardization*************")
X = normalize(X)  # use the standardized features in the models below
from sklearn.neighbors import KNeighborsClassifier
neigh = KNeighborsClassifier()
from sklearn.tree import DecisionTreeClassifier
treemodel = DecisionTreeClassifier(
criterion="entropy",
)
from sklearn.naive_bayes import GaussianNB
gaus = GaussianNB()
LRModel = LogisticRegression()
# function for k-nearest neighbours
def KNN(cla):
skf = StratifiedKFold(n_splits=5, shuffle=True, random_state=2)
acc = []
y_ori = np.array([], dtype=int)
y_pre = np.array([], dtype=int)
net_mat = np.zeros((4, 4))
for train_index, test_index in skf.split(X, Y):
X_train = X[train_index]
X_test = X[test_index]
Y_train = Y[train_index]
Y_test = Y[test_index]
# LRModel = LogisticRegression() # change
cla.fit(X_train, Y_train) # change
Y_testPred = cla.predict(X_test) # change
y_ori = np.hstack((y_ori, Y_test))
y_pre = np.hstack((y_pre, Y_testPred))
testAccuracy = metrics.accuracy_score(Y_test, Y_testPred)
print("Test Accuracy", testAccuracy * 100)
acc.append(testAccuracy)
matrix1 = confusion_matrix(Y_test, Y_testPred)
        # accumulate the per-fold confusion matrices
net_mat = net_mat + matrix1
plot_confusion_matrix(
matrix1,
class_names=["A", "B", "C", "D"],
show_normed=True,
colorbar=True,
show_absolute=True,
figsize=(4, 4),
)
plt.show()
return net_mat, y_ori, y_pre, acc
# function for decision tree
def DTree(cla):
skf = StratifiedKFold(n_splits=5, shuffle=True, random_state=2)
acc = []
y_ori = np.array([], dtype=int)
y_pre = np.array([], dtype=int)
net_mat = np.zeros((4, 4))
for train_index, test_index in skf.split(X, Y):
X_train = X[train_index]
X_test = X[test_index]
Y_train = Y[train_index]
Y_test = Y[test_index]
cla.fit(X_train, Y_train)
Y_testPred = cla.predict(X_test)
y_ori = np.hstack((y_ori, Y_test))
y_pre = np.hstack((y_pre, Y_testPred))
testAccuracy = metrics.accuracy_score(Y_test, Y_testPred)
print("Test Accuracy", testAccuracy * 100)
acc.append(testAccuracy)
matrix1 = confusion_matrix(Y_test, Y_testPred)
        # accumulate the per-fold confusion matrices
net_mat = net_mat + matrix1
plot_confusion_matrix(
matrix1,
class_names=["A", "B", "C", "D"],
show_normed=True,
colorbar=True,
show_absolute=True,
figsize=(4, 4),
)
plt.show()
return net_mat, y_ori, y_pre, acc
# function for naive Bayes classifier
def NB(cla):
skf = StratifiedKFold(n_splits=5, shuffle=True, random_state=2)
acc = []
y_ori = np.array([], dtype=int)
y_pre = np.array([], dtype=int)
net_mat = np.zeros((4, 4))
for train_index, test_index in skf.split(X, Y):
X_train = X[train_index]
X_test = X[test_index]
Y_train = Y[train_index]
Y_test = Y[test_index]
#
cla.fit(X_train, Y_train)
Y_testPred = cla.predict(X_test)
y_ori = np.hstack((y_ori, Y_test))
y_pre = np.hstack((y_pre, Y_testPred))
testAccuracy = metrics.accuracy_score(Y_test, Y_testPred)
print("Test Accuracy", testAccuracy * 100)
acc.append(testAccuracy)
matrix1 = confusion_matrix(Y_test, Y_testPred)
        # accumulate the per-fold confusion matrices
net_mat = net_mat + matrix1
plot_confusion_matrix(
matrix1,
class_names=["A", "B", "C", "D"],
show_normed=True,
colorbar=True,
show_absolute=True,
figsize=(4, 4),
)
plt.show()
return net_mat, y_ori, y_pre, acc
# function for logistic regression
def LR(cla):
skf = StratifiedKFold(n_splits=5, shuffle=True, random_state=2)
acc = []
y_ori = np.array([], dtype=int)
y_pre = np.array([], dtype=int)
net_mat = np.zeros((4, 4))
for train_index, test_index in skf.split(X, Y):
X_train = X[train_index]
X_test = X[test_index]
Y_train = Y[train_index]
Y_test = Y[test_index]
cla.fit(X_train, Y_train)
Y_testPred = cla.predict(X_test)
y_ori = np.hstack((y_ori, Y_test))
y_pre = np.hstack((y_pre, Y_testPred))
testAccuracy = metrics.accuracy_score(Y_test, Y_testPred)
print("Test Accuracy", testAccuracy * 100)
acc.append(testAccuracy)
matrix1 = confusion_matrix(Y_test, Y_testPred)
net_mat = net_mat + matrix1
plot_confusion_matrix(
matrix1,
class_names=["A", "B", "C", "D"],
show_normed=True,
colorbar=True,
show_absolute=True,
figsize=(4, 4),
)
plt.show()
return net_mat, y_ori, y_pre, acc
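# The four helper functions above are essentially identical; a minimal generic sketch
# (not called below) of the same stratified 5-fold evaluation for any classifier,
# reusing X, Y and the imports already defined above:
def evaluate_cv(cla, n_splits=5):
    skf = StratifiedKFold(n_splits=n_splits, shuffle=True, random_state=2)
    net_mat = np.zeros((4, 4))
    acc = []
    for train_index, test_index in skf.split(X, Y):
        cla.fit(X[train_index], Y[train_index])
        y_pred = cla.predict(X[test_index])
        acc.append(metrics.accuracy_score(Y[test_index], y_pred))
        net_mat += confusion_matrix(Y[test_index], y_pred)
    return net_mat, acc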
from warnings import simplefilter
simplefilter(action="ignore", category=FutureWarning)
print("application of logistics regression ")
LR_matrix, LR_ori, LR_pre, LR_acc = LR(LRModel)
print("application of Decision Tree classificaiton ")
DTree_matrix, DTree_ori, DTree_pre, DTree_acc = DTree(treemodel)
print("application of Naive Bays classfication ")
NB_matrix, NB_ori, NB_pre, NB_acc = NB(gaus)
print("application of K nearest neigbhour classification ")
KNN_matrix, KNN_ori, KNN_pre, KNN_acc = KNN(neigh)
# calculating the net confusion matrix for each classification
from sklearn.metrics import classification_report
Dic_accuracy = {}
def avegcon(classifier, matrix, ori, pre, ac):
print("Report for the " + classifier + " classifier")
accc = sum(ac) / len(ac)
print("average accuracy ", accc)
plot_confusion_matrix(
matrix,
show_normed=True,
colorbar=True,
class_names=["A", "B", "C", "D"],
show_absolute=True,
figsize=(4, 4),
)
print("\n")
report = classification_report(
ori, pre, target_names=["A", "B", "C", "D"], output_dict=True
)
report_Df = pd.DataFrame(report)
print(report_Df)
plt.figure()
sns.heatmap(report_Df.T, annot=True)
return {classifier: accc}
cc1 = avegcon("logistic_regression", LR_matrix, LR_ori, LR_pre, LR_acc)
Dic_accuracy.update(cc1)
cc2 = avegcon("Decisoin_Tree_classifier", DTree_matrix, DTree_ori, DTree_pre, DTree_acc)
Dic_accuracy.update(cc2)
cc3 = avegcon("Naive_bays_classifier", NB_matrix, NB_ori, NB_pre, NB_acc)
Dic_accuracy.update(cc3)
cc4 = avegcon("K_Nearest_neighbour", KNN_matrix, KNN_ori, KNN_pre, KNN_acc)
Dic_accuracy.update(cc4)
Dic_accuracy
accuracy_df = pd.DataFrame(Dic_accuracy.items(), columns=["Classifier", "Accuracy"])
sns.barplot(x="Classifier", y="Accuracy", data=accuracy_df)
accuracy_df
# function for the MLP (artificial neural network) classifier; hidden_layer_sizes sets the network architecture
from sklearn.neural_network import MLPClassifier
AN = MLPClassifier(
hidden_layer_sizes=(34, 22),
activation="logistic",
solver="sgd",
max_iter=1000,
)
def ANN(cla):
skf = StratifiedKFold(n_splits=5, shuffle=True, random_state=2)
acc = []
y_ori = np.array([], dtype=int)
y_pre = np.array([], dtype=int)
net_mat = np.zeros((4, 4))
for train_index, test_index in skf.split(X, Y):
X_train = X[train_index]
X_test = X[test_index]
Y_train = Y[train_index]
Y_test = Y[test_index]
cla.fit(X_train, Y_train)
Y_testPred = cla.predict(X_test)
y_ori = np.hstack((y_ori, Y_test))
y_pre = np.hstack((y_pre, Y_testPred))
testAccuracy = metrics.accuracy_score(Y_test, Y_testPred)
print("Test Accuracy", testAccuracy * 100)
acc.append(testAccuracy)
matrix1 = confusion_matrix(Y_test, Y_testPred)
        # accumulate the per-fold confusion matrices
net_mat = net_mat + matrix1
plot_confusion_matrix(
matrix1,
class_names=["A", "B", "C", "D"],
show_normed=True,
colorbar=True,
show_absolute=True,
figsize=(4, 4),
)
plt.show()
return net_mat, y_ori, y_pre, acc
net_mata, y_oria, y_prea, acca = ANN(AN)
cc5 = avegcon("ANN", net_mata, y_oria, y_prea, acca)
print(cc5)
|
/fsx/loubna/kaggle_data/kaggle-code-data/data/0129/532/129532987.ipynb
|
body-performance-data
|
kukuroo3
|
[{"Id": 129532987, "ScriptId": 38499599, "ParentScriptVersionId": NaN, "ScriptLanguageId": 9, "AuthorUserId": 10079230, "CreationDate": "05/14/2023 15:37:36", "VersionNumber": 1.0, "Title": "notebook713f4b540f", "EvaluationDate": "05/14/2023", "IsChange": true, "TotalLines": 347.0, "LinesInsertedFromPrevious": 347.0, "LinesChangedFromPrevious": 0.0, "LinesUnchangedFromPrevious": 0.0, "LinesInsertedFromFork": NaN, "LinesDeletedFromFork": NaN, "LinesChangedFromFork": NaN, "LinesUnchangedFromFork": NaN, "TotalVotes": 0}]
|
[{"Id": 185688587, "KernelVersionId": 129532987, "SourceDatasetVersionId": 3878811}]
|
[{"Id": 3878811, "DatasetId": 1732554, "DatasourceVersionId": 3933830, "CreatorUserId": 8392179, "LicenseName": "CC0: Public Domain", "CreationDate": "06/29/2022 09:42:21", "VersionNumber": 15.0, "Title": "Body performance Data", "Slug": "body-performance-data", "Subtitle": "multi class classification", "Description": "## dataset\n\nThis is data that confirmed the grade of performance with age and some exercise performance data.\n\n\n## columns\n\n**data shape : (13393, 12)**\n\n- age : 20 ~64 \n- gender : F,M\n- height_cm : (If you want to convert to feet, divide by 30.48)\n- weight_kg \n- body fat_%\n- diastolic : diastolic blood pressure (min)\n- systolic : systolic blood pressure (min)\n- gripForce\n- sit and bend forward_cm\n- sit-ups counts\n- broad jump_cm\n- class : A,B,C,D ( A: best) / stratified\n\n### Source\n[link](https://www.bigdata-culture.kr/bigdata/user/data_market/detail.do?id=ace0aea7-5eee-48b9-b616-637365d665c1) (Korea Sports Promotion Foundation)\nSome post-processing and filtering has done from the raw data.", "VersionNotes": "Data Update 2022/06/29", "TotalCompressedBytes": 0.0, "TotalUncompressedBytes": 0.0}]
|
[{"Id": 1732554, "CreatorUserId": 8392179, "OwnerUserId": 8392179.0, "OwnerOrganizationId": NaN, "CurrentDatasetVersionId": 3878811.0, "CurrentDatasourceVersionId": 3933830.0, "ForumId": 1754498, "Type": 2, "CreationDate": "11/20/2021 09:28:32", "LastActivityDate": "11/20/2021", "TotalViews": 69909, "TotalDownloads": 10372, "TotalVotes": 200, "TotalKernels": 54}]
|
[{"Id": 8392179, "UserName": "kukuroo3", "DisplayName": "kukuroo3", "RegisterDate": "09/20/2021", "PerformanceTier": 3}]
|
| false | 1 | 3,529 | 0 | 3,807 | 3,529 |
||
129968273
|
# # USGS Real-Time Water Data Downloader
# Use this code to download data from the US Geological Survey's [Instantaneous Water Services API](https://waterdata.usgs.gov/nwis/rt).
# The USGS tracks the water flow at over 10,000 points along the rivers and waterways of the United States. You can see a live map on the [National Water Dashboard](https://dashboard.waterdata.usgs.gov/app/nwd/en/?aoi=default).
# **You can use the code below for a quick way to get this data.**
# Check out [our tutorial on Medium.com](https://medium.com/@protobioengineering/how-to-get-real-time-river-data-from-the-usgs-api-25264da3b362) for more info.
# ## Steps
# * [Find the ID](https://dashboard.waterdata.usgs.gov/app/nwd/en/?aoi=default) of the monitoring location/river you want to know about
# * Plug the ID into the Water Services API's [URL building tool](https://waterservices.usgs.gov/rest/IV-Test-Tool.html)
# * (Optional) Specify the dates you want to get data for in the URL building tool. Leave blank if you just want the latest data point.
# * (Optional) Specify the format you want the data in (JSON, XML, or USGS RDB)
# * Use code (Python, R, curl, etc.) to download the data
# ## Quickstart
# If you know the ID of the [monitoring location](https://dashboard.waterdata.usgs.gov/app/nwd/en/?aoi=default) that you want data from, you can plug it into the Python code below to download the latest data.
# **NOTE: The code below won't run in Kaggle** unless the notebook has internet access enabled, because it needs to call the API over the network. Copy the code into a Python file on your computer if you prefer.
# We pre-built the `water_api_url` below using [USGS's URL building tool](https://waterservices.usgs.gov/rest/IV-Test-Tool.html).
# NOTE: This code won't run on Kaggle unless internet access is enabled for the notebook.
# Copy this code onto your own computer if needed.
import requests
monitoring_location_id = "09504500" # Example ID = Oak Creek Near Cornville, Arizona
water_api_url = f"https://waterservices.usgs.gov/nwis/iv/?format=rdb&sites={monitoring_location_id}¶meterCd=00060,00065&siteStatus=all"
water_data = requests.get(water_api_url)
# ## Choosing the Data Format
# You can choose between **4 data formats** from the Water Services API:
# * JSON
# * USGS RDB Version 1.0
# * USGS RDB (tab-delimited)
# * WaterML 2.0
# The two RDB options are identical. Both are tab-delimited.
# **We recommend using the RDB versions**, since the JSON and WaterML versions are insanely nested and hard to parse. RDB data is in neat rows.
# **You can choose the data type on the URL building tool** or manually put it into the URL in your code, as shown below.
monitoring_location_id = "09504500" # Example ID
json_format = "json"
rdb_1_0_format = "rdb,1.0"
rdb_format = "rdb"
waterml_format = "waterml,2.0"
# Replace {rdb_format} below with your desired format
water_api_url = f"https://waterservices.usgs.gov/nwis/iv/?format={rdb_format}&sites={monitoring_location_id}¶meterCd=00060,00065&siteStatus=all"
# ## Choosing Multiple Locations
# You can get data from multiple monitoring locations. Just separate them with a comma in the URL.
location_1 = "09504500"
location_2 = "09505350"
water_api_url = f"https://waterservices.usgs.gov/nwis/iv/?format=rdb&sites={location_1},{location_2}¶meterCd=00060,00065&siteStatus=all"
# ## Get a List of All Monitoring Locations for Your State
# The URL below can be used to get a list of all monitoring locations (names and IDs) from a single state, as well as the latest gage height and streamflow for each. Plug in the state code (`az`, `ny`, `ca`, etc.) below.
state = "az" # Arizona
water_api_url = f"https://nwis.waterservices.usgs.gov/nwis/iv/?format=rdb&stateCd={state}¶meterCd=00060,00065&siteStatus=all"
|
/fsx/loubna/kaggle_data/kaggle-code-data/data/0129/968/129968273.ipynb
| null | null |
[{"Id": 129968273, "ScriptId": 38629392, "ParentScriptVersionId": NaN, "ScriptLanguageId": 9, "AuthorUserId": 14517655, "CreationDate": "05/17/2023 19:37:08", "VersionNumber": 3.0, "Title": "USGS Real-Time Water Data Downloader (Python)", "EvaluationDate": "05/17/2023", "IsChange": true, "TotalLines": 72.0, "LinesInsertedFromPrevious": 9.0, "LinesChangedFromPrevious": 0.0, "LinesUnchangedFromPrevious": 63.0, "LinesInsertedFromFork": NaN, "LinesDeletedFromFork": NaN, "LinesChangedFromFork": NaN, "LinesUnchangedFromFork": NaN, "TotalVotes": 1}]
| null | null | null | null |
| false | 0 | 1,202 | 1 | 1,202 | 1,202 |
||
129968294
|
<jupyter_start><jupyter_text>South Korean Lottery Numbers
### Background
The South Korean lottery pays out millions of dollars to the winners. To date, there have been over 1000 draws (1 a week). The numbers are drawn by a vacuum sucking up plastic balls with the winning numbers written on them. Many South Korean citizens speculate that this system is rigged (or at least not 100% fair) because many numbers have been chosen unproportionally. Is it possible that choosing certain numbers will improve one's chances of winning?
### Data
<ul>
<li><strong>TIME</strong> - The nth lottery draw</li>
<li><strong>NUM1</strong> - Winning number 1</li>
<li><strong>NUM2</strong> - Winning number 2</li>
<li><strong>NUM3</strong> - Winning number 3</li>
<li><strong>NUM4</strong> - Winning number 4</li>
<li><strong>NUM5</strong> - Winning number 5</li>
<li><strong>NUM6</strong> - Winning number 6</li>
<li><strong>BONUS</strong> - Winning bonus number</li>
</ul>
### Additional Info
Per draw, 6 numbers are chosen + 1 bonus number
Of the 6 primary numbers, if at least 3 are correct the ticket is a winner.
The bonus number will add a bonus if 5 out of 6 primary numbers are correct.
The order of the numbers do not matter. They will always be from least->greatest.
The following needs to be taken into consideration to calculate if your model is making money:
-Each guess of lottery numbers costs about $1 (you can guess an unlimited amount of times)
-If 3 numbers match you win $5 (ie. if you guess 5 times and only one ticket wins, you get your money back)
-if 4 numbers match you win $100
-if 5 numbers match you win $1,000
-if 5 numbers and the bonus number match you win $10,000
-if all 6 numbers are correct you get the jackpot (usually at least $100,000-> $10M)
### Source
https://m.dhlottery.co.kr/
Kaggle dataset identifier: south-korean-lottery-numbers
<jupyter_script>import numpy as np # linear algebra
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)
# Input data files are available in the read-only "../input/" directory
# For example, running this (by clicking run or pressing Shift+Enter) will list all files under the input directory
import os
for dirname, _, filenames in os.walk("/kaggle/input"):
for filename in filenames:
print(os.path.join(dirname, filename))
# You can write up to 20GB to the current directory (/kaggle/working/) that gets preserved as output when you create a version using "Save & Run All"
# You can also write temporary files to /kaggle/temp/, but they won't be saved outside of the current session
df = pd.read_csv("/kaggle/input/south-korean-lottery-numbers/fake_lotto.csv")
df
import seaborn as sns
countplot = sns.countplot(data=df, x="NUM1")
# Draws where the first winning number is at least 20, averaged per draw (TIME)
num1_ge_20 = df[df.NUM1 >= 20]
num1_by_time = num1_ge_20.groupby("TIME", as_index=False)["NUM1"].mean()
num1_by_time = num1_by_time.head()
barplot = sns.barplot(x="TIME", y="NUM1", data=num1_by_time)
# ### Among the first 5 draws where winning number 1 is >= 20, the largest value occurs at draw number 6
avg_num2 = df.groupby("TIME", as_index=False).NUM2.mean().head()
relplot = sns.relplot(x="TIME", y="NUM2", data=avg_num2)
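# A minimal sketch of the payout arithmetic described in the dataset notes
# (each ticket costs $1; 3 matches -> $5, 4 -> $100, 5 -> $1,000, 5 + bonus -> $10,000,
# 6 -> jackpot, taken here as a hypothetical $1,000,000).
def ticket_profit(matches, bonus_matched=False, jackpot=1_000_000):
    payouts = {3: 5, 4: 100, 5: 10_000 if bonus_matched else 1_000, 6: jackpot}
    return payouts.get(matches, 0) - 1  # subtract the $1 ticket price

print(ticket_profit(3), ticket_profit(5, bonus_matched=True), ticket_profit(2))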
|
/fsx/loubna/kaggle_data/kaggle-code-data/data/0129/968/129968294.ipynb
|
south-korean-lottery-numbers
|
calebreigada
|
[{"Id": 129968294, "ScriptId": 38659267, "ParentScriptVersionId": NaN, "ScriptLanguageId": 9, "AuthorUserId": 14989378, "CreationDate": "05/17/2023 19:37:31", "VersionNumber": 2.0, "Title": "Data_Visualization", "EvaluationDate": "05/17/2023", "IsChange": true, "TotalLines": 40.0, "LinesInsertedFromPrevious": 23.0, "LinesChangedFromPrevious": 0.0, "LinesUnchangedFromPrevious": 17.0, "LinesInsertedFromFork": NaN, "LinesDeletedFromFork": NaN, "LinesChangedFromFork": NaN, "LinesUnchangedFromFork": NaN, "TotalVotes": 0}]
|
[{"Id": 186406821, "KernelVersionId": 129968294, "SourceDatasetVersionId": 3233531}]
|
[{"Id": 3233531, "DatasetId": 1946000, "DatasourceVersionId": 3283627, "CreatorUserId": 5720392, "LicenseName": "CC0: Public Domain", "CreationDate": "02/27/2022 05:16:41", "VersionNumber": 2.0, "Title": "South Korean Lottery Numbers", "Slug": "south-korean-lottery-numbers", "Subtitle": "Determining the best numbers to choose in the South Korean Lottery", "Description": "### Background\nThe South Korean lottery pays out millions of dollars to the winners. To date, there have been over 1000 draws (1 a week). The numbers are drawn by a vacuum sucking up plastic balls with the winning numbers written on them. Many South Korean citizens speculate that this system is rigged (or at least not 100% fair) because many numbers have been chosen unproportionally. Is it possible that choosing certain numbers will improve one's chances of winning?\n\n### Data\n<ul>\n<li><strong>TIME</strong> - The nth lottery draw</li>\n<li><strong>NUM1</strong> - Winning number 1</li>\n<li><strong>NUM2</strong> - Winning number 2</li>\n<li><strong>NUM3</strong> - Winning number 3</li>\n<li><strong>NUM4</strong> - Winning number 4</li>\n<li><strong>NUM5</strong> - Winning number 5</li>\n<li><strong>NUM6</strong> - Winning number 6</li>\n<li><strong>BONUS</strong> - Winning bonus number</li>\n</ul>\n\n\n### Additional Info\nPer draw, 6 numbers are chosen + 1 bonus number\nOf the 6 primary numbers, if at least 3 are correct the ticket is a winner.\nThe bonus number will add a bonus if 5 out of 6 primary numbers are correct.\nThe order of the numbers do not matter. They will always be from least->greatest.\n\nThe following needs to be taken into consideration to calculate if your model is making money:\n-Each guess of lottery numbers costs about $1 (you can guess an unlimited amount of times)\n-If 3 numbers match you win $5 (ie. if you guess 5 times and only one ticket wins, you get your money back)\n-if 4 numbers match you win $100\n-if 5 numbers match you win $1,000\n-if 5 numbers and the bonus number match you win $10,000\n-if all 6 numbers are correct you get the jackpot (usually at least $100,000-> $10M)\n\n\n\n### Source\nhttps://m.dhlottery.co.kr/", "VersionNotes": "Data Update 2022/02/27", "TotalCompressedBytes": 0.0, "TotalUncompressedBytes": 0.0}]
|
[{"Id": 1946000, "CreatorUserId": 5720392, "OwnerUserId": 5720392.0, "OwnerOrganizationId": NaN, "CurrentDatasetVersionId": 3233531.0, "CurrentDatasourceVersionId": 3283627.0, "ForumId": 1969854, "Type": 2, "CreationDate": "02/20/2022 10:08:58", "LastActivityDate": "02/20/2022", "TotalViews": 9985, "TotalDownloads": 762, "TotalVotes": 34, "TotalKernels": 3}]
|
[{"Id": 5720392, "UserName": "calebreigada", "DisplayName": "Caleb Reigada", "RegisterDate": "09/04/2020", "PerformanceTier": 3}]
|
| false | 1 | 419 | 0 | 999 | 419 |
||
129968523
|
<jupyter_start><jupyter_text>Hourly Energy Consumption
### PJM Hourly Energy Consumption Data
PJM Interconnection LLC (PJM) is a regional transmission organization (RTO) in the United States. It is part of the Eastern Interconnection grid operating an electric transmission system serving all or parts of Delaware, Illinois, Indiana, Kentucky, Maryland, Michigan, New Jersey, North Carolina, Ohio, Pennsylvania, Tennessee, Virginia, West Virginia, and the District of Columbia.
The hourly power consumption data comes from PJM's website and are in megawatts (MW).
The regions have changed over the years so data may only appear for certain dates per region.
![Energy Plot][1]
## Ideas of what you could do with this dataset:
- Split the last year into a test set- can you build a model to predict energy consumption?
- Find trends in energy consumption around hours of the day, holidays, or long term trends?
- Understand how daily trends change depending of the time of year. Summer trends are very different than winter trends.
![PJM Regions][2]
[1]: https://s15.postimg.cc/8rdtgokpn/download.png
[2]: https://www.theenergytimes.com/sites/theenergytimes.com/files/styles/article_featured_retina/public/pjm-image.jpg.crop_display.jpg?itok=XLQYO4j-
Kaggle dataset identifier: hourly-energy-consumption
<jupyter_script>import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
color_pal = sns.color_palette()
plt.style.use("fivethirtyeight")
from sklearn.metrics import mean_squared_error, mean_absolute_error
df = pd.read_csv("hourly_energy_consumption/PJME_hourly.csv")
# # **1. Data Visualization**
df.head()
df.plot(style=".", figsize=(15, 5), color=color_pal[0], title="PJME Energy Use in MW")
plt.show()
# # **2. Data Preprocessing**
# Since this is a time series problem, we have to parse the 'Datetime' column and use it as the index
df = df.set_index("Datetime")
df.index = pd.to_datetime(df.index)
df.head()
# ## 2.1 Handle Missing Values
df.isna().sum()
# ## 2.2 Handle Outliers
df["PJME_MW"].plot(kind="hist", bins=500)
df.query("PJME_MW < 19_000")["PJME_MW"].plot(
style=".", figsize=(15, 5), color=color_pal[5], title="Outliers"
)
df = df.query("PJME_MW > 19_000").copy()
# # **3. Feature Engineering**
def create_features(df):
"""
Create time series features based on time series index.
"""
df = df.copy()
df["hour"] = df.index.hour
df["dayofweek"] = df.index.dayofweek
df["quarter"] = df.index.quarter
df["month"] = df.index.month
df["year"] = df.index.year
df["dayofyear"] = df.index.dayofyear
df["dayofmonth"] = df.index.day
df["weekofyear"] = df.index.isocalendar().week
df["season"] = df["month"] % 12 // 3 + 1
return df
season_names = {1: "Winter", 2: "Spring", 3: "Summer", 4: "Fall"}
df = create_features(df)
df["season"] = df["season"].map(season_names)
df.head()
# # **4. Exploratory Data Analysis**
fig, ax = plt.subplots(figsize=(10, 8))
sns.boxplot(data=df, x="hour", y="PJME_MW")
ax.set_title("Energy Consumption by Hour")
plt.show()
fig, ax = plt.subplots(figsize=(10, 8))
sns.boxplot(data=df, x="dayofweek", y="PJME_MW")
ax.set_title("Energy Consumption by Day of Week")
plt.show()
fig, ax = plt.subplots(figsize=(10, 8))
sns.boxplot(data=df, x="month", y="PJME_MW", palette="Blues")
ax.set_title("Energy Consumption by Month")
plt.show()
fig, ax = plt.subplots(figsize=(10, 8))
sns.boxplot(data=df, x="year", y="PJME_MW", palette="Blues")
ax.set_title("Energy Consumption by Year")
plt.show()
# Create a boxplot of energy consumption by season
sns.boxplot(x="season", y="PJME_MW", data=df)
# Set the plot title and axis labels
plt.title("Energy Consumption by Season")
plt.xlabel("Season")
plt.ylabel("Energy Consumption")
# Display the plot
plt.show()
df.loc[(df.index > "2010-01-01") & (df.index < "2010-01-12")]["PJME_MW"].plot(
figsize=(15, 5), title="Energy Consumption in one year (2010)"
)
plt.show()
# - Our data has seasonality.
# - The daily peak is around 6 PM and the minimum is around 4 AM
# - Least energy consumption on weekends (Saturday/Sunday)
# - The highest energy consumption in a year occurs either at the end of the year or in the middle of the year
# - No significant trend or change in total energy consumption across the years 2002-2018
# - Highest energy consumption in summer, then winter
# # **5. Modeling Univariate Time Series**
univariate_df = df.copy()
univariate_df.drop(
[
"hour",
"dayofweek",
"quarter",
"month",
"year",
"dayofyear",
"dayofmonth",
"weekofyear",
"season",
],
axis=1,
inplace=True,
)
univariate_df.head()
# ## Resample Data (Downsampling)
# Resample to daily frequency
univariate_df = univariate_df.resample("D").mean()
univariate_df.head()
univariate_df.plot(
style=".", figsize=(15, 5), color=color_pal[0], title="PJME Energy Use in MW"
)
plt.show()
# ## Divide Data into Train / Test
train = univariate_df.loc[univariate_df.index < "2015-01-01"]
test = univariate_df.loc[univariate_df.index >= "2015-01-01"]
fig, ax = plt.subplots(figsize=(15, 5))
train["PJME_MW"].plot(ax=ax, label="Training Set", title="Data Train/Test Split")
test["PJME_MW"].plot(ax=ax, label="Test Set")
ax.axvline("2015-01-01", color="black", ls="--")
ax.legend(["Training Set", "Test Set"])
plt.show()
# # **5.1 ARIMA**
# ## Steps to analyze ARIMA
# - Step 1 - Check stationarity: If a time series has a trend or seasonality component, it must be made stationary before we can use ARIMA to forecast.
# - Step 2 - Difference: If the time series is not stationary, it needs to be stationarized through differencing. Take the first difference, then check for stationarity. Take as many differences as it takes. Make sure you check seasonal differencing as well (a minimal differencing sketch follows this list).
# - Step 3 - Filter out a validation sample: This will be used to validate how accurate our model is. Use a train/test split to achieve this.
# - Step 4 - Select AR and MA terms: Use the ACF and PACF to decide whether to include an AR term(s), MA term(s), or both.
# - Step 5 - Build the model: Build the model and set the number of periods to forecast to N (depends on your needs).
# - Step 6 - Validate the model: Compare the predicted values to the actuals in the validation sample.
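# A minimal illustration of Step 2 (sketch only; the daily series used below turns out
# to be stationary without differencing): take the first difference of the training
# series and re-run the augmented Dickey-Fuller test on it.
from statsmodels.tsa.stattools import adfuller

first_diff = train["PJME_MW"].diff().dropna()
print("ADF p-value after first differencing:", adfuller(first_diff, autolag="AIC")[1])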
from statsmodels.tsa.statespace.sarimax import SARIMAX
from statsmodels.graphics.tsaplots import plot_acf, plot_pacf
from statsmodels.tsa.seasonal import seasonal_decompose
from pmdarima import auto_arima
# ## Decompose time series to check for seasonality and trend
# Decompose time series
decompose = seasonal_decompose(train, model="additive", period=90)
decompose.plot()
plt.show()
# ### Very high seasonality! This is not ideal
# ## Check if the time series is stationary (Dickey-Fuller test)
from statsmodels.tsa.stattools import adfuller
def test_stationarity(timeseries):
# Determing rolling statistics
rolmean = timeseries.rolling(24).mean()
rolstd = timeseries.rolling(24).std()
# Plot rolling statistics:
plt.figure(figsize=(15, 5))
plt.plot(timeseries, color=color_pal[0], label="Original")
plt.plot(rolmean, color=color_pal[1], label="Rolling Mean")
plt.plot(rolstd, color=color_pal[2], label="Rolling Std")
plt.legend(loc="best")
plt.title("Rolling Mean & Standard Deviation")
plt.show()
# Perform Dickey-Fuller test:
print("Results of Dickey-Fuller Test:")
dftest = adfuller(timeseries, autolag="AIC")
dfoutput = pd.Series(
dftest[0:4],
index=[
"Test Statistic",
"p-value",
"#Lags Used",
"Number of Observations Used",
],
)
for key, value in dftest[4].items():
dfoutput["Critical Value (%s)" % key] = value
print(dfoutput)
test_stationarity(train)
# We use a rolling mean and standard deviation over a 24-observation window (24 days on the resampled daily series) and plot these along with the original time series. If the rolling statistics do not change over time, it is an indication that the time series is stationary. We also perform the Dickey-Fuller test and check if the p-value is less than 0.05. If it is, we can reject the null hypothesis that the time series is non-stationary.
# Based on the results of the Dickey-Fuller test, the p-value is less than 0.05, and we can reject the null hypothesis. Therefore, the time series is stationary.
# ## Plot the ACF and PACF (Select AR and MA terms)
# ACF and PACF plots:
fig, (ax1, ax2) = plt.subplots(2, 1, figsize=(12, 8))
# Plot ACF
plot_acf(train, lags=100, ax=ax1)
# Plot PACF
plot_pacf(train, lags=100, ax=ax2)
plt.tight_layout()
plt.show()
# ## Build the Model
from pmdarima.arima import auto_arima
auto_model = auto_arima(
train,
start_p=0,
start_q=0,
max_p=10,
max_q=10,
seasonal=True,
m=7,
d=None,
D=None,
trace=True,
error_action="ignore",
suppress_warnings=True,
stepwise=True,
seasonal_test="ch",
)
print(auto_model.summary())
# Fit the model
model = SARIMAX(
train,
order=(1, 0, 1),
seasonal_order=(2, 1, 1, 7),
enforce_stationarity=False,
enforce_invertibility=False,
)
result = model.fit()
result.summary()
# Predict on the test data
start = len(train)
end = len(train) + len(test) - 1
predictions = result.predict(start, end, typ="levels").rename("Predictions")
# Plot predictions against known values
title = "Daily Energy Consumption"
ylabel = "Energy Consumption (MW)"
xlabel = ""
ax = test["PJME_MW"].plot(legend=True, figsize=(12, 6), title=title)
predictions.plot(legend=True)
ax.autoscale(axis="x", tight=True)
ax.set(xlabel=xlabel, ylabel=ylabel)
plt.show()
# Because our data is showing multiple seasonalities, the (S)ARIMA model is not performing well.
# ## Score RMSE:
arima_rmse = mean_squared_error(test["PJME_MW"], predictions, squared=False)
arima_rmse
comparison = pd.concat([test, predictions], axis=1)
comparison_rounded = comparison.round(2)
comparison_rounded
# # **5.2 Prophet**
from prophet import Prophet
# Format data for prophet model using ds and y
pjme_train_prophet = train.reset_index().rename(
columns={"Datetime": "ds", "PJME_MW": "y"}
)
model = Prophet()
model.fit(pjme_train_prophet)
# Predict on test set with model
pjme_test_prophet = test.reset_index().rename(
columns={"Datetime": "ds", "PJME_MW": "y"}
)
pjme_test_fcst = model.predict(pjme_test_prophet)
pjme_test_fcst.head()
fig, ax = plt.subplots(figsize=(10, 5))
fig = model.plot(pjme_test_fcst, ax=ax)
ax.set_title("Prophet Forecast")
plt.show()
fig = model.plot_components(pjme_test_fcst)
plt.show()
# ## Compare Forecast to Actuals
# Plot the forecast with the actuals
f, ax = plt.subplots(figsize=(15, 5))
ax.scatter(test.index, test["PJME_MW"], color="r")
fig = model.plot(pjme_test_fcst, ax=ax)
ax = pjme_test_fcst.set_index("ds")["yhat"].plot(figsize=(15, 5), lw=0, style=".")
test["PJME_MW"].plot(ax=ax, style=".", lw=1, alpha=0.5, color="black")
plt.legend(["Forecast", "Actual"])
plt.title("Forecast vs Actuals")
plt.show()
fig, ax = plt.subplots(figsize=(10, 6))
ax.scatter(test.index, test["PJME_MW"], color="black")
fig = model.plot(pjme_test_fcst, ax=ax)
ax.set_xlim(pd.to_datetime("2015-01-01"), pd.to_datetime("2015-02-01"))
ax.legend()
ax.set_xlabel("Date")
ax.set_ylabel("Power Consumption (MW)")
plot = plt.suptitle("January 2015 Forecast vs Actuals")
# Plot the forecast with the actuals
f, ax = plt.subplots(figsize=(15, 5))
ax.scatter(test.index, test["PJME_MW"], color="black")
fig = model.plot(pjme_test_fcst, ax=ax)
ax.set_xlim(pd.to_datetime("2015-01-01"), pd.to_datetime("2015-02-01"))
ax.set_ylim(0, 60000)
ax.set_title("First Week of January Forecast vs Actuals")
ax.legend()
ax.set_xlabel("Date")
ax.set_ylabel("Power Consumption (MW)")
plt.show()
# ## Score RMSE
prophet_rmse = mean_squared_error(
y_true=test["PJME_MW"], y_pred=pjme_test_fcst["yhat"], squared=False
)
prophet_rmse
# ## Adding Holidays
# Next we will see if adding holiday indicators will help the accuracy of the model. Prophet comes with a Holiday Effects parameter that can be provided to the model prior to training.
# We will use the built-in pandas USFederalHolidayCalendar to pull the list of holidays
from pandas.tseries.holiday import USFederalHolidayCalendar as calendar
cal = calendar()
train_holidays = cal.holidays(start=train.index.min(), end=train.index.max())
test_holidays = cal.holidays(start=test.index.min(), end=test.index.max())
# Create a dataframe with holiday, ds columns
df["date"] = df.index.date
df["is_holiday"] = df.date.isin([d.date() for d in cal.holidays()])
holiday_df = df.loc[df["is_holiday"]].reset_index().rename(columns={"Datetime": "ds"})
holiday_df["holiday"] = "USFederalHoliday"
holiday_df = holiday_df.drop(["PJME_MW", "date", "is_holiday"], axis=1)
holiday_df.head()
holiday_df["ds"] = pd.to_datetime(holiday_df["ds"])
# Setup and train model with holidays
model_with_holidays = Prophet(holidays=holiday_df)
model_with_holidays.fit(
train.reset_index().rename(columns={"Datetime": "ds", "PJME_MW": "y"})
)
# ## Predict with holiday
# Predict on training set with model
pjme_test_fcst_with_hols = model_with_holidays.predict(
df=test.reset_index().rename(columns={"Datetime": "ds"})
)
# ## Plot Holiday Effect
fig2 = model_with_holidays.plot_components(pjme_test_fcst_with_hols)
# ## Score RMSE with Holidays:
prophet_rmse_holidays = mean_squared_error(
y_true=test["PJME_MW"], y_pred=pjme_test_fcst_with_hols["yhat"], squared=False
)
print(f"RMSE with holidays: {prophet_rmse_holidays}")
print(f"RMSE without holidays: {prophet_rmse}")
holiday_df["date"] = holiday_df["ds"].dt.date
for hol, d in holiday_df.groupby("date"):
holiday_list = d["ds"].tolist()
hols_test = test.query("Datetime in @holiday_list")
if len(hols_test) == 0:
continue
hols_pred = pjme_test_fcst.query("ds in @holiday_list")
hols_pred_holiday_model = pjme_test_fcst_with_hols.query("ds in @holiday_list")
non_hol_error = mean_absolute_error(
y_true=hols_test["PJME_MW"], y_pred=hols_pred["yhat"]
)
hol_model_error = mean_absolute_error(
y_true=hols_test["PJME_MW"], y_pred=hols_pred_holiday_model["yhat"]
)
diff = non_hol_error - hol_model_error
print(
f"Holiday: {hol:%B %d, %Y}: \n MAE (non-holiday model): {non_hol_error:0.1f} \n MAE (Holiday Model): {hol_model_error:0.1f} \n Diff {diff:0.1f}"
)
# ## Predict into the Future
# We can use the built-in make_future_dataframe method to build our future dataframe and make predictions.
# The model was trained on the daily (resampled) series, so we forecast at daily frequency.
future = model_with_holidays.make_future_dataframe(
    periods=365 * 5, freq="D", include_history=False
)
forecast = model_with_holidays.predict(future)
forecast[["ds", "yhat"]].head()
fig = model_with_holidays.plot(forecast)
plt.show()
# # **5.3 LSTM**
# ## Preparing the Data for LSTM
from keras.models import Sequential
from keras.layers import Dense
from keras.layers import LSTM
from keras.layers import Dropout
from sklearn.preprocessing import MinMaxScaler
from tensorflow import keras
from tensorflow.keras import layers
from kerastuner.tuners import RandomSearch
from keras.callbacks import EarlyStopping
# Data preprocessing
scaler = MinMaxScaler(feature_range=(0, 1))
scaled_train = scaler.fit_transform(train)
# Create the training data
X_train = []
y_train = []
for i in range(60, len(train)):
X_train.append(scaled_train[i - 60 : i, 0])
y_train.append(scaled_train[i, 0])
X_train, y_train = np.array(X_train), np.array(y_train)
# Reshape the data
X_train = np.reshape(X_train, (X_train.shape[0], X_train.shape[1], 1))
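# An equivalent sketch of the same 60-step windowing using Keras' built-in utility
# instead of the manual loop above. Assumption: a recent TF/Keras where this helper
# lives under tf.keras.utils (older releases expose it as
# tf.keras.preprocessing.timeseries_dataset_from_array). Not used below.
import tensorflow as tf

window = 60
train_ds = tf.keras.utils.timeseries_dataset_from_array(
    data=scaled_train[:-window],  # each input window: 60 consecutive scaled values
    targets=scaled_train[window:],  # target: the value immediately after each window
    sequence_length=window,
    batch_size=32,
)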
# ## Build the Model (inlcuding Hyperparamter Tuning)
import shutil
def build_model(hp):
model = keras.Sequential()
model.add(
layers.LSTM(
units=hp.Int("units", min_value=32, max_value=128, step=32),
return_sequences=True,
input_shape=(X_train.shape[1], 1),
)
)
model.add(
layers.Dropout(rate=hp.Float("dropout", min_value=0.1, max_value=0.5, step=0.1))
)
model.add(
layers.LSTM(
units=hp.Int("units", min_value=32, max_value=128, step=32),
return_sequences=False,
)
)
model.add(
layers.Dropout(rate=hp.Float("dropout", min_value=0.1, max_value=0.5, step=0.1))
)
model.add(layers.Dense(units=1))
model.compile(optimizer="adam", loss="mean_squared_error")
return model
# Clear the tuner directory
# shutil.rmtree('project/Energy Consumption LSTM')
# Initialize Keras Tuner
tuner = RandomSearch(
build_model,
objective="val_loss",
max_trials=5, # how many model configurations would you like to test?
executions_per_trial=3, # how many trials per variation? (same model could perform differently)
directory="project",
project_name="Energy Consumption LSTM",
)
# Summary of the search space
tuner.search_space_summary()
# Perform hyperparameter search
tuner.search(X_train, y_train, epochs=5, validation_split=0.2)
# Summary of the results
tuner.results_summary()
from keras.callbacks import EarlyStopping
# Select one of the top models found by the tuner (index 3 picks the 4th-ranked configuration)
best_model = tuner.get_best_models(num_models=5)[3]
# Define early stopping
early_stop = keras.callbacks.EarlyStopping(monitor="val_loss", patience=5)
# Fit the model
history = best_model.fit(
X_train, y_train, epochs=50, validation_split=0.2, callbacks=[early_stop]
)
# ## Plot Validation Loss vs Training Loss
# Plot the training loss and validation loss
plt.figure(figsize=(8, 4))
plt.plot(history.history["loss"], label="Training loss")
plt.plot(history.history["val_loss"], label="Validation loss")
plt.title("Training and validation loss")
plt.xlabel("Epochs")
plt.ylabel("Loss")
plt.legend()
plt.show()
# ## Make predictions
# Prepare the test data similarly to the training data
inputs = univariate_df[len(univariate_df) - len(test) - 60 :].values
inputs = inputs.reshape(-1, 1)
inputs = scaler.transform(inputs)
X_test = []
for i in range(60, inputs.shape[0]):
X_test.append(inputs[i - 60 : i, 0])
X_test = np.array(X_test)
X_test = np.reshape(X_test, (X_test.shape[0], X_test.shape[1], 1))
# Make predictions with the best model
predicted_energy_consumption = best_model.predict(X_test)
# Inverse transform to get real values
predicted_energy_consumption = scaler.inverse_transform(predicted_energy_consumption)
# Visualize the results
test_dates = univariate_df.index[len(univariate_df) - len(test) :]
plt.figure(figsize=(8, 4))
plt.plot(test_dates, test.values, color="blue", label="Actual energy consumption")
plt.plot(
test_dates,
predicted_energy_consumption.flatten(),
color="red",
label="Predicted energy consumption",
)
plt.title("Energy consumption prediction")
plt.xlabel("Time")
plt.ylabel("Energy consumption")
plt.legend()
plt.xticks(rotation=45)
plt.show()
# ## Score RMSE:
# Evaluate the Model
lstm_rmse = mean_squared_error(test.values, predicted_energy_consumption, squared=False)
lstm_rmse
# # 4. Comparison of Models
print(f"ARIMA RMSE: {arima_rmse:.2f}")
print(f"Prophet RMSE: {prophet_rmse:.2f}")
print(f"LSTM RMSE: {lstm_rmse:.2f}")
|
/fsx/loubna/kaggle_data/kaggle-code-data/data/0129/968/129968523.ipynb
|
hourly-energy-consumption
|
robikscube
|
[{"Id": 129968523, "ScriptId": 38567535, "ParentScriptVersionId": NaN, "ScriptLanguageId": 9, "AuthorUserId": 9025303, "CreationDate": "05/17/2023 19:40:16", "VersionNumber": 2.0, "Title": "Time Series Forecasting (ARIMA, Prophet, LSTM)", "EvaluationDate": NaN, "IsChange": false, "TotalLines": 600.0, "LinesInsertedFromPrevious": 0.0, "LinesChangedFromPrevious": 0.0, "LinesUnchangedFromPrevious": 600.0, "LinesInsertedFromFork": NaN, "LinesDeletedFromFork": NaN, "LinesChangedFromFork": NaN, "LinesUnchangedFromFork": NaN, "TotalVotes": 0}]
|
[{"Id": 186407130, "KernelVersionId": 129968523, "SourceDatasetVersionId": 87794}]
|
[{"Id": 87794, "DatasetId": 48149, "DatasourceVersionId": 90269, "CreatorUserId": 644036, "LicenseName": "CC0: Public Domain", "CreationDate": "08/30/2018 14:17:03", "VersionNumber": 3.0, "Title": "Hourly Energy Consumption", "Slug": "hourly-energy-consumption", "Subtitle": "Over 10 years of hourly energy consumption data from PJM in Megawatts", "Description": "### PJM Hourly Energy Consumption Data\n\nPJM Interconnection LLC (PJM) is a regional transmission organization (RTO) in the United States. It is part of the Eastern Interconnection grid operating an electric transmission system serving all or parts of Delaware, Illinois, Indiana, Kentucky, Maryland, Michigan, New Jersey, North Carolina, Ohio, Pennsylvania, Tennessee, Virginia, West Virginia, and the District of Columbia.\n\nThe hourly power consumption data comes from PJM's website and are in megawatts (MW).\n\nThe regions have changed over the years so data may only appear for certain dates per region.\n\n![Energy Plot][1]\n\n## Ideas of what you could do with this dataset:\n- Split the last year into a test set- can you build a model to predict energy consumption?\n- Find trends in energy consumption around hours of the day, holidays, or long term trends?\n- Understand how daily trends change depending of the time of year. Summer trends are very different than winter trends.\n\n![PJM Regions][2]\n\n\n [1]: https://s15.postimg.cc/8rdtgokpn/download.png\n [2]: https://www.theenergytimes.com/sites/theenergytimes.com/files/styles/article_featured_retina/public/pjm-image.jpg.crop_display.jpg?itok=XLQYO4j-", "VersionNotes": "Added separate CSV files per region", "TotalCompressedBytes": 46281700.0, "TotalUncompressedBytes": 12578454.0}]
|
[{"Id": 48149, "CreatorUserId": 644036, "OwnerUserId": 644036.0, "OwnerOrganizationId": NaN, "CurrentDatasetVersionId": 87794.0, "CurrentDatasourceVersionId": 90269.0, "ForumId": 56746, "Type": 2, "CreationDate": "08/30/2018 00:51:24", "LastActivityDate": "08/30/2018", "TotalViews": 369514, "TotalDownloads": 55164, "TotalVotes": 872, "TotalKernels": 166}]
|
[{"Id": 644036, "UserName": "robikscube", "DisplayName": "Rob Mulla", "RegisterDate": "06/18/2016", "PerformanceTier": 4}]
|
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
color_pal = sns.color_palette()
plt.style.use("fivethirtyeight")
from sklearn.metrics import mean_squared_error, mean_absolute_error
df = pd.read_csv("hourly_energy_consumption/PJME_hourly.csv")
# # **1. Data Visualization**
df.head()
df.plot(style=".", figsize=(15, 5), color=color_pal[0], title="PJME Energy Use in MW")
plt.show()
# # **2. Data Preprocessing**
# Since this is a TimeSeries problem, we have to parse the 'datetime' column
df = df.set_index("Datetime")
df.index = pd.to_datetime(df.index)
df.head()
# ## 2.1 Handle Missings
df.isna().sum()
# ## 2.2 Handle Outliers
df["PJME_MW"].plot(kind="hist", bins=500)
df.query("PJME_MW < 19_000")["PJME_MW"].plot(
style=".", figsize=(15, 5), color=color_pal[5], title="Outliers"
)
df = df.query("PJME_MW > 19_000").copy()
# # **3. Feature Engineering**
def create_features(df):
"""
Create time series features based on time series index.
"""
df = df.copy()
df["hour"] = df.index.hour
df["dayofweek"] = df.index.dayofweek
df["quarter"] = df.index.quarter
df["month"] = df.index.month
df["year"] = df.index.year
df["dayofyear"] = df.index.dayofyear
df["dayofmonth"] = df.index.day
df["weekofyear"] = df.index.isocalendar().week
df["season"] = df["month"] % 12 // 3 + 1
return df
season_names = {1: "Winter", 2: "Spring", 3: "Summer", 4: "Fall"}
df = create_features(df)
df["season"] = df["season"].map(season_names)
df.head()
# # **4. Exploratory Data Analysis**
fig, ax = plt.subplots(figsize=(10, 8))
sns.boxplot(data=df, x="hour", y="PJME_MW")
ax.set_title("Energy Consumption by Hour")
plt.show()
fig, ax = plt.subplots(figsize=(10, 8))
sns.boxplot(data=df, x="dayofweek", y="PJME_MW")
ax.set_title("Energy Consumption by Day of Week")
plt.show()
fig, ax = plt.subplots(figsize=(10, 8))
sns.boxplot(data=df, x="month", y="PJME_MW", palette="Blues")
ax.set_title("Energy Consumption by Month")
plt.show()
fig, ax = plt.subplots(figsize=(10, 8))
sns.boxplot(data=df, x="year", y="PJME_MW", palette="Blues")
ax.set_title("Energy Consumption by Year")
plt.show()
# Create a boxplot of energy consumption by season
sns.boxplot(x="season", y="PJME_MW", data=df)
# Set the plot title and axis labels
plt.title("Energy Consumption by Season")
plt.xlabel("Season")
plt.ylabel("Energy Consumption")
# Display the plot
plt.show()
df.loc[(df.index > "2010-01-01") & (df.index < "2010-01-12")]["PJME_MW"].plot(
figsize=(15, 5), title="Energy Consumption in one year (2010)"
)
plt.show()
# - Our data has seasonality.
# - The daily peak is around 6 PM and the minimum is at 4 AM
# - Energy consumption is lowest on weekends (Saturday/Sunday)
# - Within a year, the highest energy consumption occurs either at the end of the year or in the middle of the year
# - No significant trend or change in total energy consumption over the years 2002-2018
# - Energy consumption is highest in summer, followed by winter
# # **5. Modeling Univariate Time Series**
univariate_df = df.copy()
univariate_df.drop(
[
"hour",
"dayofweek",
"quarter",
"month",
"year",
"dayofyear",
"dayofmonth",
"weekofyear",
"season",
],
axis=1,
inplace=True,
)
univariate_df.head()
# ## Resample Data (Downsampling)
# Resample to daily frequency
univariate_df = univariate_df.resample("D").mean()
univariate_df.head()
univariate_df.plot(
style=".", figsize=(15, 5), color=color_pal[0], title="PJME Energy Use in MW"
)
plt.show()
# ## Divide Data into Train / Test
train = univariate_df.loc[univariate_df.index < "2015-01-01"]
test = univariate_df.loc[univariate_df.index >= "2015-01-01"]
fig, ax = plt.subplots(figsize=(15, 5))
train["PJME_MW"].plot(ax=ax, label="Training Set", title="Data Train/Test Split")
test["PJME_MW"].plot(ax=ax, label="Test Set")
ax.axvline("2015-01-01", color="black", ls="--")
ax.legend(["Training Set", "Test Set"])
plt.show()
# # **5.1 ARIMA**
# ## Steps to analyze ARIMA
# - Step 1 — Check stationarity: If a time series has a trend or seasonality component, it must be made stationary before we can use ARIMA to forecast.
# - Step 2 — Difference: If the time series is not stationary, it needs to be stationarized through differencing. Take the first difference, then check for stationarity. Take as many differences as it takes. Make sure you check seasonal differencing as well (a minimal sketch of this step is shown right after this list).
# - Step 3 — Filter out a validation sample: This will be used to validate how accurate our model is. Use a train/test/validation split to achieve this.
# - Step 4 — Select AR and MA terms: Use the ACF and PACF to decide whether to include an AR term(s), MA term(s), or both.
# - Step 5 — Build the model: Build the model and set the number of periods to forecast to N (depends on your needs).
# - Step 6 — Validate model: Compare the predicted values to the actuals in the validation sample.
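# A hedged sketch of Step 2 (differencing), not part of the original notebook: take the first
# difference of the daily training series and re-run the ADF test on it. `train` is defined above;
# `adfuller` is also imported again further below in this notebook.
from statsmodels.tsa.stattools import adfuller
train_diff = train["PJME_MW"].diff().dropna()  # first (non-seasonal) difference
adf_stat, p_value = adfuller(train_diff, autolag="AIC")[:2]
print(f"ADF on the differenced series: statistic={adf_stat:.3f}, p-value={p_value:.4f}")
# If p-value < 0.05, the differenced series can be treated as stationary; otherwise difference again
# (including a seasonal difference such as train["PJME_MW"].diff(7)) and re-test.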
from statsmodels.tsa.statespace.sarimax import SARIMAX
from statsmodels.graphics.tsaplots import plot_acf, plot_pacf
from statsmodels.tsa.seasonal import seasonal_decompose
from pmdarima import auto_arima
# ## Decompose time series to check for seasonality and trend
# Decompose time series
decompose = seasonal_decompose(train, model="additive", period=90)
decompose.plot()
plt.show()
# ### Very high seasonality! This is not ideal
# ## Check if the time series is stationary (Dickey-Fuller test)
from statsmodels.tsa.stattools import adfuller
def test_stationarity(timeseries):
# Determing rolling statistics
rolmean = timeseries.rolling(24).mean()
rolstd = timeseries.rolling(24).std()
# Plot rolling statistics:
plt.figure(figsize=(15, 5))
plt.plot(timeseries, color=color_pal[0], label="Original")
plt.plot(rolmean, color=color_pal[1], label="Rolling Mean")
plt.plot(rolstd, color=color_pal[2], label="Rolling Std")
plt.legend(loc="best")
plt.title("Rolling Mean & Standard Deviation")
plt.show()
# Perform Dickey-Fuller test:
print("Results of Dickey-Fuller Test:")
dftest = adfuller(timeseries, autolag="AIC")
dfoutput = pd.Series(
dftest[0:4],
index=[
"Test Statistic",
"p-value",
"#Lags Used",
"Number of Observations Used",
],
)
for key, value in dftest[4].items():
dfoutput["Critical Value (%s)" % key] = value
print(dfoutput)
test_stationarity(train)
# We compute a rolling mean and standard deviation over a 24-observation window (24 days here, since the series was resampled to daily frequency) and plot these along with the original time series. If the rolling statistics do not change over time, it is an indication that the time series is stationary. We also perform the Dickey-Fuller test and check if the p-value is less than 0.05. If it is, we can reject the null hypothesis that the time series is non-stationary.
# Based on the results of the Dickey-Fuller test, the p-value is less than 0.05, and we can reject the null hypothesis. Therefore, the time series is stationary.
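# A small hedged illustration (not in the original notebook) of the decision rule described above,
# reusing `train` and `adfuller` from earlier cells:
p_value = adfuller(train["PJME_MW"], autolag="AIC")[1]
print("ADF p-value:", round(p_value, 4), "->", "stationary (reject H0)" if p_value < 0.05 else "non-stationary (fail to reject H0)")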
# ## Plot the ACF and PACF (Select AR and MA terms)
# ACF and PACF plots:
fig, (ax1, ax2) = plt.subplots(2, 1, figsize=(12, 8))
# Plot ACF
plot_acf(train, lags=100, ax=ax1)
# Plot PACF
plot_pacf(train, lags=100, ax=ax2)
plt.tight_layout()
plt.show()
# ## Build the Model
from pmdarima.arima import auto_arima
auto_model = auto_arima(
train,
start_p=0,
start_q=0,
max_p=10,
max_q=10,
seasonal=True,
m=7,
d=None,
D=None,
trace=True,
error_action="ignore",
suppress_warnings=True,
stepwise=True,
seasonal_test="ch",
)
print(auto_model.summary())
# Fit the model
model = SARIMAX(
train,
order=(1, 0, 1),
seasonal_order=(2, 1, 1, 7),
enforce_stationarity=False,
enforce_invertibility=False,
)
result = model.fit()
result.summary()
# Predict on the test data
start = len(train)
end = len(train) + len(test) - 1
predictions = result.predict(start, end, typ="levels").rename("Predictions")
# Plot predictions against known values
title = "Daily Energy Consumption"
ylabel = "Energy Consumption (MW)"
xlabel = ""
ax = test["PJME_MW"].plot(legend=True, figsize=(12, 6), title=title)
predictions.plot(legend=True)
ax.autoscale(axis="x", tight=True)
ax.set(xlabel=xlabel, ylabel=ylabel)
plt.show()
# Because our data exhibits multiple seasonalities (weekly and yearly cycles in the daily series), the (S)ARIMA model with a single weekly seasonal period does not perform well here; a hedged sketch of one way to address this follows.
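# Hedged sketch only (not part of the original notebook): keep the weekly seasonal order and hand the
# yearly cycle to the model as Fourier-term exogenous regressors. The names `sarimax_fourier`,
# `fourier_result` and `fourier_fcst` are illustrative.
from statsmodels.tsa.deterministic import DeterministicProcess, Fourier
fourier = Fourier(period=365.25, order=3)  # 3 sine/cosine pairs for the annual cycle
dp = DeterministicProcess(index=train.index, constant=True, additional_terms=[fourier], drop=True)
sarimax_fourier = SARIMAX(
    train["PJME_MW"], exog=dp.in_sample(), order=(1, 0, 1), seasonal_order=(1, 1, 1, 7)
)
fourier_result = sarimax_fourier.fit(disp=False)
fourier_fcst = fourier_result.forecast(
    steps=len(test), exog=dp.out_of_sample(steps=len(test), forecast_index=test.index)
)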
# ## Score RMSE:
arima_rmse = mean_squared_error(test["PJME_MW"], predictions, squared=False)
arima_rmse
comparison = pd.concat([test, predictions], axis=1)
comparison_rounded = comparison.round(2)
comparison_rounded
# # **5.2 Prophet**
from prophet import Prophet
# Format data for prophet model using ds and y
pjme_train_prophet = train.reset_index().rename(
columns={"Datetime": "ds", "PJME_MW": "y"}
)
model = Prophet()
model.fit(pjme_train_prophet)
# Predict on test set with model
pjme_test_prophet = test.reset_index().rename(
columns={"Datetime": "ds", "PJME_MW": "y"}
)
pjme_test_fcst = model.predict(pjme_test_prophet)
pjme_test_fcst.head()
fig, ax = plt.subplots(figsize=(10, 5))
fig = model.plot(pjme_test_fcst, ax=ax)
ax.set_title("Prophet Forecast")
plt.show()
fig = model.plot_components(pjme_test_fcst)
plt.show()
# ## Compare Forecast to Actuals
# Plot the forecast with the actuals
f, ax = plt.subplots(figsize=(15, 5))
ax.scatter(test.index, test["PJME_MW"], color="r")
fig = model.plot(pjme_test_fcst, ax=ax)
ax = pjme_test_fcst.set_index("ds")["yhat"].plot(figsize=(15, 5), lw=0, style=".")
test["PJME_MW"].plot(ax=ax, style=".", lw=1, alpha=0.5, color="black")
plt.legend(["Forecast", "Actual"])
plt.title("Forecast vs Actuals")
plt.show()
fig, ax = plt.subplots(figsize=(10, 6))
ax.scatter(test.index, test["PJME_MW"], color="black")
fig = model.plot(pjme_test_fcst, ax=ax)
ax.set_xlim(pd.to_datetime("2015-01-01"), pd.to_datetime("2015-02-01"))
ax.legend()
ax.set_xlabel("Date")
ax.set_ylabel("Power Consumption (MW)")
plot = plt.suptitle("January 2015 Forecast vs Actuals")
# Plot the forecast with the actuals
f, ax = plt.subplots(figsize=(15, 5))
ax.scatter(test.index, test["PJME_MW"], color="black")
fig = model.plot(pjme_test_fcst, ax=ax)
ax.set_xlim(pd.to_datetime("2015-01-01"), pd.to_datetime("2015-02-01"))
ax.set_ylim(0, 60000)
ax.set_title("First Week of January Forecast vs Actuals")
ax.legend()
ax.set_xlabel("Date")
ax.set_ylabel("Power Consumption (MW)")
plt.show()
# ## Score RMSE
prophet_rmse = mean_squared_error(
y_true=test["PJME_MW"], y_pred=pjme_test_fcst["yhat"], squared=False
)
prophet_rmse
# ## Adding Holidays
# Next we will see if adding holiday indicators will help the accuracy of the model. Prophet accepts a holidays dataframe that can be provided to the model prior to training (a hedged example of its expected format is shown right below).
# We will use the built-in pandas USFederalHolidayCalendar to pull the list of holidays
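# For reference, a hedged sketch (not in the original notebook) of the dataframe format Prophet
# expects for its `holidays` argument: a `holiday` label plus the dates in `ds`, with optional
# window columns to extend the effect around each date. The notebook builds an equivalent frame
# from the federal holiday calendar below; `example_holidays` is just an illustrative name.
example_holidays = pd.DataFrame(
    {
        "holiday": "USFederalHoliday",
        "ds": pd.to_datetime(["2015-01-01", "2015-07-04", "2015-12-25"]),
        "lower_window": 0,  # optional: also mark days before the holiday
        "upper_window": 1,  # optional: also mark days after the holiday
    }
)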
from pandas.tseries.holiday import USFederalHolidayCalendar as calendar
cal = calendar()
train_holidays = cal.holidays(start=train.index.min(), end=train.index.max())
test_holidays = cal.holidays(start=test.index.min(), end=test.index.max())
# Create a dataframe with holiday, ds columns
df["date"] = df.index.date
df["is_holiday"] = df.date.isin([d.date() for d in cal.holidays()])
holiday_df = df.loc[df["is_holiday"]].reset_index().rename(columns={"Datetime": "ds"})
holiday_df["holiday"] = "USFederalHoliday"
holiday_df = holiday_df.drop(["PJME_MW", "date", "is_holiday"], axis=1)
holiday_df.head()
holiday_df["ds"] = pd.to_datetime(holiday_df["ds"])
# Setup and train model with holidays
model_with_holidays = Prophet(holidays=holiday_df)
model_with_holidays.fit(
train.reset_index().rename(columns={"Datetime": "ds", "PJME_MW": "y"})
)
# ## Predict with holiday
# Predict on training set with model
pjme_test_fcst_with_hols = model_with_holidays.predict(
df=test.reset_index().rename(columns={"Datetime": "ds"})
)
# ## Plot Holiday Effect
fig2 = model_with_holidays.plot_components(pjme_test_fcst_with_hols)
# ## Score RMSE with Holidays:
prophet_rmse_holidays = mean_squared_error(
y_true=test["PJME_MW"], y_pred=pjme_test_fcst_with_hols["yhat"], squared=False
)
print(f"RMSE with holidays: {prophet_rmse_holidays}")
print(f"RMSE without holidays: {prophet_rmse}")
holiday_df["date"] = holiday_df["ds"].dt.date
for hol, d in holiday_df.groupby("date"):
holiday_list = d["ds"].tolist()
hols_test = test.query("Datetime in @holiday_list")
if len(hols_test) == 0:
continue
hols_pred = pjme_test_fcst.query("ds in @holiday_list")
hols_pred_holiday_model = pjme_test_fcst_with_hols.query("ds in @holiday_list")
non_hol_error = mean_absolute_error(
y_true=hols_test["PJME_MW"], y_pred=hols_pred["yhat"]
)
hol_model_error = mean_absolute_error(
y_true=hols_test["PJME_MW"], y_pred=hols_pred_holiday_model["yhat"]
)
diff = non_hol_error - hol_model_error
print(
f"Holiday: {hol:%B %d, %Y}: \n MAE (non-holiday model): {non_hol_error:0.1f} \n MAE (Holiday Model): {hol_model_error:0.1f} \n Diff {diff:0.1f}"
)
# ## Predict into the Future
# We can use the built-in make_future_dataframe method to build our future dataframe and make predictions.
future = model.make_future_dataframe(
    periods=365 * 5, freq="D", include_history=False  # daily steps to match the daily (resampled) training data
)
forecast = model_with_holidays.predict(future)
forecast[["ds", "yhat"]].head()
fig = model_with_holidays.plot(forecast)
plt.show()
# # **5.3 LSTM**
# ## Preparing the Data for LSTM
from keras.models import Sequential
from keras.layers import Dense
from keras.layers import LSTM
from keras.layers import Dropout
from sklearn.preprocessing import MinMaxScaler
from tensorflow import keras
from tensorflow.keras import layers
from kerastuner.tuners import RandomSearch
from keras.callbacks import EarlyStopping
# Data preprocessing
scaler = MinMaxScaler(feature_range=(0, 1))
scaled_train = scaler.fit_transform(train)
# Create the training data
X_train = []
y_train = []
for i in range(60, len(train)):
X_train.append(scaled_train[i - 60 : i, 0])
y_train.append(scaled_train[i, 0])
X_train, y_train = np.array(X_train), np.array(y_train)
# Reshape the data
X_train = np.reshape(X_train, (X_train.shape[0], X_train.shape[1], 1))
# ## Build the Model (including Hyperparameter Tuning)
import shutil
def build_model(hp):
model = keras.Sequential()
model.add(
layers.LSTM(
units=hp.Int("units", min_value=32, max_value=128, step=32),
return_sequences=True,
input_shape=(X_train.shape[1], 1),
)
)
model.add(
layers.Dropout(rate=hp.Float("dropout", min_value=0.1, max_value=0.5, step=0.1))
)
model.add(
layers.LSTM(
units=hp.Int("units", min_value=32, max_value=128, step=32),
return_sequences=False,
)
)
model.add(
layers.Dropout(rate=hp.Float("dropout", min_value=0.1, max_value=0.5, step=0.1))
)
model.add(layers.Dense(units=1))
model.compile(optimizer="adam", loss="mean_squared_error")
return model
# Clear the tuner directory
# shutil.rmtree('project/Energy Consumption LSTM')
# Initialize Keras Tuner
tuner = RandomSearch(
build_model,
objective="val_loss",
max_trials=5, # how many model configurations would you like to test?
executions_per_trial=3, # how many trials per variation? (same model could perform differently)
directory="project",
project_name="Energy Consumption LSTM",
)
# Summary of the search space
tuner.search_space_summary()
# Perform hyperparameter search
tuner.search(X_train, y_train, epochs=5, validation_split=0.2)
# Summary of the results
tuner.results_summary()
from keras.callbacks import EarlyStopping
# Select one of the top-ranked models (note: index 3 picks the 4th-best configuration from the search, not the single best)
best_model = tuner.get_best_models(num_models=5)[3]
# Define early stopping
early_stop = keras.callbacks.EarlyStopping(monitor="val_loss", patience=5)
# Fit the model
history = best_model.fit(
X_train, y_train, epochs=50, validation_split=0.2, callbacks=[early_stop]
)
# ## Plot Validation Loss vs Training Loss
# Plot the training loss and validation loss
plt.figure(figsize=(8, 4))
plt.plot(history.history["loss"], label="Training loss")
plt.plot(history.history["val_loss"], label="Validation loss")
plt.title("Training and validation loss")
plt.xlabel("Epochs")
plt.ylabel("Loss")
plt.legend()
plt.show()
# ## Make predictions
# Prepare the test data similarly to the training data
inputs = univariate_df[len(univariate_df) - len(test) - 60 :].values
inputs = inputs.reshape(-1, 1)
inputs = scaler.transform(inputs)
X_test = []
for i in range(60, inputs.shape[0]):
X_test.append(inputs[i - 60 : i, 0])
X_test = np.array(X_test)
X_test = np.reshape(X_test, (X_test.shape[0], X_test.shape[1], 1))
# Make predictions with the best model
predicted_energy_consumption = best_model.predict(X_test)
# Inverse transform to get real values
predicted_energy_consumption = scaler.inverse_transform(predicted_energy_consumption)
# Visualize the results
test_dates = univariate_df.index[len(univariate_df) - len(test) :]
plt.figure(figsize=(8, 4))
plt.plot(test_dates, test.values, color="blue", label="Actual energy consumption")
plt.plot(
test_dates,
predicted_energy_consumption.flatten(),
color="red",
label="Predicted energy consumption",
)
plt.title("Energy consumption prediction")
plt.xlabel("Time")
plt.ylabel("Energy consumption")
plt.legend()
plt.xticks(rotation=45)
plt.show()
# ## Score RMSE:
# Evaluate the Model
lstm_rmse = mean_squared_error(test.values, predicted_energy_consumption, squared=False)
lstm_rmse
# # **6. Comparison of Models**
print(f"ARIMA RMSE: {arima_rmse:.2f}")
print(f"Prophet RMSE: {prophet_rmse:.2f}")
print(f"LSTM RMSE: {lstm_rmse:.2f}")
| false | 0 | 5,976 | 0 | 6,350 | 5,976 |
||
129968687
|
# This R environment comes with many helpful analytics packages installed
# It is defined by the kaggle/rstats Docker image: https://github.com/kaggle/docker-rstats
# For example, here's a helpful package to load
library(tidyverse) # metapackage of all tidyverse packages
install.packages("lattice")
library(lattice)
install.packages("reshape2")
library(reshape2)
install.packages("caTools")
library(caTools)
library(caret)
library(ggplot2)
library(corrplot)
# Input data files are available in the read-only "../input/" directory
# For example, running this (by clicking run or pressing Shift+Enter) will list all files under the input directory
list.files(path="../input")
# You can write up to 20GB to the current directory (/kaggle/working/) that gets preserved as output when you create a version using "Save & Run All"
# You can also write temporary files to /kaggle/temp/, but they won't be saved outside of the current session
# **import train and test data**
train_data = read.csv(
"/kaggle/input/ipba-15-regression-graded-paris-house/train.csv", header=TRUE
)
test_data = read.csv("/kaggle/input/ipba-15-regression-graded-paris-house/test.csv")
# summary of data
head(train_data)
cat("Dimension: ", dim(train_data))
|
/fsx/loubna/kaggle_data/kaggle-code-data/data/0129/968/129968687.ipynb
| null | null |
[{"Id": 129968687, "ScriptId": 38661206, "ParentScriptVersionId": NaN, "ScriptLanguageId": 12, "AuthorUserId": 14328047, "CreationDate": "05/17/2023 19:42:37", "VersionNumber": 1.0, "Title": "Tarun Gupta housing price prediction", "EvaluationDate": "05/17/2023", "IsChange": true, "TotalLines": 36.0, "LinesInsertedFromPrevious": 36.0, "LinesChangedFromPrevious": 0.0, "LinesUnchangedFromPrevious": 0.0, "LinesInsertedFromFork": NaN, "LinesDeletedFromFork": NaN, "LinesChangedFromFork": NaN, "LinesUnchangedFromFork": NaN, "TotalVotes": 0}]
| null | null | null | null |
# This R environment comes with many helpful analytics packages installed
# It is defined by the kaggle/rstats Docker image: https://github.com/kaggle/docker-rstats
# For example, here's a helpful package to load
library(tidyverse) # metapackage of all tidyverse packages
install.packages("lattice")
library(lattice)
install.packages("reshape2")
library(reshape2)
install.packages("caTools")
library(caTools)
library(caret)
library(ggplot2)
library(corrplot)
# Input data files are available in the read-only "../input/" directory
# For example, running this (by clicking run or pressing Shift+Enter) will list all files under the input directory
list.files(path="../input")
# You can write up to 20GB to the current directory (/kaggle/working/) that gets preserved as output when you create a version using "Save & Run All"
# You can also write temporary files to /kaggle/temp/, but they won't be saved outside of the current session
# **import train and test data**
train_data = read.csv(
"/kaggle/input/ipba-15-regression-graded-paris-house/train.csv", header=TRUE
)
test_data = read.csv("/kaggle/input/ipba-15-regression-graded-paris-house/test.csv")
# summary of data
head(train_data)
cat("Dimension: ", dim(train_data))
| false | 0 | 350 | 0 | 350 | 350 |
||
129968924
|
<jupyter_start><jupyter_text>Heart Attack Analysis & Prediction Dataset
## Hone your analytical and ML skills by participating in the tasks of my other datasets, given below.
[Data Science Job Posting on Glassdoor](https://www.kaggle.com/rashikrahmanpritom/data-science-job-posting-on-glassdoor)
[Groceries dataset for Market Basket Analysis(MBA)](https://www.kaggle.com/rashikrahmanpritom/groceries-dataset-for-market-basket-analysismba)
[Dataset for Facial recognition using ML approach](https://www.kaggle.com/rashikrahmanpritom/dataset-for-facial-recognition-using-ml-approach)
[Covid_w/wo_Pneumonia Chest Xray](https://www.kaggle.com/rashikrahmanpritom/covid-wwo-pneumonia-chest-xray)
[Disney Movies 1937-2016 Gross Income](https://www.kaggle.com/rashikrahmanpritom/disney-movies-19372016-total-gross)
[Bollywood Movie data from 2000 to 2019](https://www.kaggle.com/rashikrahmanpritom/bollywood-movie-data-from-2000-to-2019)
[17.7K English song data from 2008-2017](https://www.kaggle.com/rashikrahmanpritom/177k-english-song-data-from-20082017)
## About this dataset
- Age : Age of the patient
- Sex : Sex of the patient
- exang: exercise induced angina (1 = yes; 0 = no)
- ca: number of major vessels (0-3)
- cp : Chest Pain type chest pain type
- Value 1: typical angina
- Value 2: atypical angina
- Value 3: non-anginal pain
- Value 4: asymptomatic
- trtbps : resting blood pressure (in mm Hg)
- chol : cholestoral in mg/dl fetched via BMI sensor
- fbs : (fasting blood sugar > 120 mg/dl) (1 = true; 0 = false)
- rest_ecg : resting electrocardiographic results
- Value 0: normal
- Value 1: having ST-T wave abnormality (T wave inversions and/or ST elevation or depression of > 0.05 mV)
- Value 2: showing probable or definite left ventricular hypertrophy by Estes' criteria
- thalach : maximum heart rate achieved
- target : 0= less chance of heart attack 1= more chance of heart attack
n
Kaggle dataset identifier: heart-attack-analysis-prediction-dataset
<jupyter_code>import pandas as pd
df = pd.read_csv('heart-attack-analysis-prediction-dataset/heart.csv')
df.info()
<jupyter_output><class 'pandas.core.frame.DataFrame'>
RangeIndex: 303 entries, 0 to 302
Data columns (total 14 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 age 303 non-null int64
1 sex 303 non-null int64
2 cp 303 non-null int64
3 trtbps 303 non-null int64
4 chol 303 non-null int64
5 fbs 303 non-null int64
6 restecg 303 non-null int64
7 thalachh 303 non-null int64
8 exng 303 non-null int64
9 oldpeak 303 non-null float64
10 slp 303 non-null int64
11 caa 303 non-null int64
12 thall 303 non-null int64
13 output 303 non-null int64
dtypes: float64(1), int64(13)
memory usage: 33.3 KB
<jupyter_text>Examples:
{
"age": 63.0,
"sex": 1.0,
"cp": 3.0,
"trtbps": 145.0,
"chol": 233.0,
"fbs": 1.0,
"restecg": 0.0,
"thalachh": 150.0,
"exng": 0.0,
"oldpeak": 2.3,
"slp": 0.0,
"caa": 0.0,
"thall": 1.0,
"output": 1.0
}
{
"age": 37.0,
"sex": 1.0,
"cp": 2.0,
"trtbps": 130.0,
"chol": 250.0,
"fbs": 0.0,
"restecg": 1.0,
"thalachh": 187.0,
"exng": 0.0,
"oldpeak": 3.5,
"slp": 0.0,
"caa": 0.0,
"thall": 2.0,
"output": 1.0
}
{
"age": 41.0,
"sex": 0.0,
"cp": 1.0,
"trtbps": 130.0,
"chol": 204.0,
"fbs": 0.0,
"restecg": 0.0,
"thalachh": 172.0,
"exng": 0.0,
"oldpeak": 1.4,
"slp": 2.0,
"caa": 0.0,
"thall": 2.0,
"output": 1.0
}
{
"age": 56.0,
"sex": 1.0,
"cp": 1.0,
"trtbps": 120.0,
"chol": 236.0,
"fbs": 0.0,
"restecg": 1.0,
"thalachh": 178.0,
"exng": 0.0,
"oldpeak": 0.8,
"slp": 2.0,
"caa": 0.0,
"thall": 2.0,
"output": 1.0
}
<jupyter_script>import numpy as np # linear algebra
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)
# Input data files are available in the read-only "../input/" directory
# For example, running this (by clicking run or pressing Shift+Enter) will list all files under the input directory
import os
for dirname, _, filenames in os.walk("/kaggle/input"):
for filename in filenames:
print(os.path.join(dirname, filename))
# You can write up to 20GB to the current directory (/kaggle/working/) that gets preserved as output when you create a version using "Save & Run All"
# You can also write temporary files to /kaggle/temp/, but they won't be saved outside of the current session
data = pd.read_csv("/kaggle/input/heart-attack-analysis-prediction-dataset/heart.csv")
data.info()
data.head()
data.corr()
data.corr()["output"].sort_values(ascending=False)
X = data.drop("output", axis=1)
y = data["output"]
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(
X, y, test_size=0.2, random_state=42, shuffle=True
)
X_train.head()
y_train.head()
from sklearn.ensemble import RandomForestClassifier
rfc = RandomForestClassifier(n_jobs=-1)
rfc.fit(X_train, y_train)
y_pred = rfc.predict(X_test)
from sklearn.metrics import confusion_matrix
cm = confusion_matrix(y_test, y_pred)
print(cm)
rfc.score(X_test, y_test)
|
/fsx/loubna/kaggle_data/kaggle-code-data/data/0129/968/129968924.ipynb
|
heart-attack-analysis-prediction-dataset
|
rashikrahmanpritom
|
[{"Id": 129968924, "ScriptId": 38661973, "ParentScriptVersionId": NaN, "ScriptLanguageId": 9, "AuthorUserId": 8192424, "CreationDate": "05/17/2023 19:45:36", "VersionNumber": 1.0, "Title": "notebookcf76eac8cf", "EvaluationDate": "05/17/2023", "IsChange": true, "TotalLines": 49.0, "LinesInsertedFromPrevious": 49.0, "LinesChangedFromPrevious": 0.0, "LinesUnchangedFromPrevious": 0.0, "LinesInsertedFromFork": NaN, "LinesDeletedFromFork": NaN, "LinesChangedFromFork": NaN, "LinesUnchangedFromFork": NaN, "TotalVotes": 0}]
|
[{"Id": 186407641, "KernelVersionId": 129968924, "SourceDatasetVersionId": 2047221}]
|
[{"Id": 2047221, "DatasetId": 1226038, "DatasourceVersionId": 2087216, "CreatorUserId": 4730101, "LicenseName": "CC0: Public Domain", "CreationDate": "03/22/2021 11:40:59", "VersionNumber": 2.0, "Title": "Heart Attack Analysis & Prediction Dataset", "Slug": "heart-attack-analysis-prediction-dataset", "Subtitle": "A dataset for heart attack classification", "Description": "## Hone your analytical and ML skills by participating in tasks of my other dataset's. Given below.\n\n\n[Data Science Job Posting on Glassdoor](https://www.kaggle.com/rashikrahmanpritom/data-science-job-posting-on-glassdoor)\n\n[Groceries dataset for Market Basket Analysis(MBA)](https://www.kaggle.com/rashikrahmanpritom/groceries-dataset-for-market-basket-analysismba)\n\n[Dataset for Facial recognition using ML approach](https://www.kaggle.com/rashikrahmanpritom/dataset-for-facial-recognition-using-ml-approach)\n\n[Covid_w/wo_Pneumonia Chest Xray](https://www.kaggle.com/rashikrahmanpritom/covid-wwo-pneumonia-chest-xray)\n\n[Disney Movies 1937-2016 Gross Income](https://www.kaggle.com/rashikrahmanpritom/disney-movies-19372016-total-gross)\n\n[Bollywood Movie data from 2000 to 2019](https://www.kaggle.com/rashikrahmanpritom/bollywood-movie-data-from-2000-to-2019)\n\n[17.7K English song data from 2008-2017](https://www.kaggle.com/rashikrahmanpritom/177k-english-song-data-from-20082017)\n\n## About this dataset\n\n- Age : Age of the patient\n\n- Sex : Sex of the patient\n\n- exang: exercise induced angina (1 = yes; 0 = no)\n\n- ca: number of major vessels (0-3)\n\n- cp : Chest Pain type chest pain type\n - Value 1: typical angina\n - Value 2: atypical angina\n - Value 3: non-anginal pain\n - Value 4: asymptomatic\n \n- trtbps : resting blood pressure (in mm Hg)\n- chol : cholestoral in mg/dl fetched via BMI sensor\n- fbs : (fasting blood sugar > 120 mg/dl) (1 = true; 0 = false)\n- rest_ecg : resting electrocardiographic results\n - Value 0: normal\n - Value 1: having ST-T wave abnormality (T wave inversions and/or ST elevation or depression of > 0.05 mV)\n - Value 2: showing probable or definite left ventricular hypertrophy by Estes' criteria\n \n- thalach : maximum heart rate achieved\n- target : 0= less chance of heart attack 1= more chance of heart attack\n\nn", "VersionNotes": "heart csv update", "TotalCompressedBytes": 0.0, "TotalUncompressedBytes": 0.0}]
|
[{"Id": 1226038, "CreatorUserId": 4730101, "OwnerUserId": 4730101.0, "OwnerOrganizationId": NaN, "CurrentDatasetVersionId": 2047221.0, "CurrentDatasourceVersionId": 2087216.0, "ForumId": 1244179, "Type": 2, "CreationDate": "03/22/2021 08:19:12", "LastActivityDate": "03/22/2021", "TotalViews": 870835, "TotalDownloads": 138216, "TotalVotes": 3197, "TotalKernels": 1050}]
|
[{"Id": 4730101, "UserName": "rashikrahmanpritom", "DisplayName": "Rashik Rahman", "RegisterDate": "03/24/2020", "PerformanceTier": 3}]
|
import numpy as np # linear algebra
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)
# Input data files are available in the read-only "../input/" directory
# For example, running this (by clicking run or pressing Shift+Enter) will list all files under the input directory
import os
for dirname, _, filenames in os.walk("/kaggle/input"):
for filename in filenames:
print(os.path.join(dirname, filename))
# You can write up to 20GB to the current directory (/kaggle/working/) that gets preserved as output when you create a version using "Save & Run All"
# You can also write temporary files to /kaggle/temp/, but they won't be saved outside of the current session
data = pd.read_csv("/kaggle/input/heart-attack-analysis-prediction-dataset/heart.csv")
data.info()
data.head()
data.corr()
data.corr()["output"].sort_values(ascending=False)
X = data.drop("output", axis=1)
y = data["output"]
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(
X, y, test_size=0.2, random_state=42, shuffle=True
)
X_train.head()
y_train.head()
from sklearn.ensemble import RandomForestClassifier
rfc = RandomForestClassifier(n_jobs=-1)
rfc.fit(X_train, y_train)
y_pred = rfc.predict(X_test)
from sklearn.metrics import confusion_matrix
cm = confusion_matrix(y_test, y_pred)
print(cm)
rfc.score(X_test, y_test)
|
[{"heart-attack-analysis-prediction-dataset/heart.csv": {"column_names": "[\"age\", \"sex\", \"cp\", \"trtbps\", \"chol\", \"fbs\", \"restecg\", \"thalachh\", \"exng\", \"oldpeak\", \"slp\", \"caa\", \"thall\", \"output\"]", "column_data_types": "{\"age\": \"int64\", \"sex\": \"int64\", \"cp\": \"int64\", \"trtbps\": \"int64\", \"chol\": \"int64\", \"fbs\": \"int64\", \"restecg\": \"int64\", \"thalachh\": \"int64\", \"exng\": \"int64\", \"oldpeak\": \"float64\", \"slp\": \"int64\", \"caa\": \"int64\", \"thall\": \"int64\", \"output\": \"int64\"}", "info": "<class 'pandas.core.frame.DataFrame'>\nRangeIndex: 303 entries, 0 to 302\nData columns (total 14 columns):\n # Column Non-Null Count Dtype \n--- ------ -------------- ----- \n 0 age 303 non-null int64 \n 1 sex 303 non-null int64 \n 2 cp 303 non-null int64 \n 3 trtbps 303 non-null int64 \n 4 chol 303 non-null int64 \n 5 fbs 303 non-null int64 \n 6 restecg 303 non-null int64 \n 7 thalachh 303 non-null int64 \n 8 exng 303 non-null int64 \n 9 oldpeak 303 non-null float64\n 10 slp 303 non-null int64 \n 11 caa 303 non-null int64 \n 12 thall 303 non-null int64 \n 13 output 303 non-null int64 \ndtypes: float64(1), int64(13)\nmemory usage: 33.3 KB\n", "summary": "{\"age\": {\"count\": 303.0, \"mean\": 54.366336633663366, \"std\": 9.082100989837857, \"min\": 29.0, \"25%\": 47.5, \"50%\": 55.0, \"75%\": 61.0, \"max\": 77.0}, \"sex\": {\"count\": 303.0, \"mean\": 0.6831683168316832, \"std\": 0.46601082333962385, \"min\": 0.0, \"25%\": 0.0, \"50%\": 1.0, \"75%\": 1.0, \"max\": 1.0}, \"cp\": {\"count\": 303.0, \"mean\": 0.966996699669967, \"std\": 1.0320524894832985, \"min\": 0.0, \"25%\": 0.0, \"50%\": 1.0, \"75%\": 2.0, \"max\": 3.0}, \"trtbps\": {\"count\": 303.0, \"mean\": 131.62376237623764, \"std\": 17.5381428135171, \"min\": 94.0, \"25%\": 120.0, \"50%\": 130.0, \"75%\": 140.0, \"max\": 200.0}, \"chol\": {\"count\": 303.0, \"mean\": 246.26402640264027, \"std\": 51.83075098793003, \"min\": 126.0, \"25%\": 211.0, \"50%\": 240.0, \"75%\": 274.5, \"max\": 564.0}, \"fbs\": {\"count\": 303.0, \"mean\": 0.1485148514851485, \"std\": 0.35619787492797644, \"min\": 0.0, \"25%\": 0.0, \"50%\": 0.0, \"75%\": 0.0, \"max\": 1.0}, \"restecg\": {\"count\": 303.0, \"mean\": 0.528052805280528, \"std\": 0.525859596359298, \"min\": 0.0, \"25%\": 0.0, \"50%\": 1.0, \"75%\": 1.0, \"max\": 2.0}, \"thalachh\": {\"count\": 303.0, \"mean\": 149.64686468646866, \"std\": 22.905161114914094, \"min\": 71.0, \"25%\": 133.5, \"50%\": 153.0, \"75%\": 166.0, \"max\": 202.0}, \"exng\": {\"count\": 303.0, \"mean\": 0.32673267326732675, \"std\": 0.4697944645223165, \"min\": 0.0, \"25%\": 0.0, \"50%\": 0.0, \"75%\": 1.0, \"max\": 1.0}, \"oldpeak\": {\"count\": 303.0, \"mean\": 1.0396039603960396, \"std\": 1.1610750220686348, \"min\": 0.0, \"25%\": 0.0, \"50%\": 0.8, \"75%\": 1.6, \"max\": 6.2}, \"slp\": {\"count\": 303.0, \"mean\": 1.3993399339933994, \"std\": 0.6162261453459619, \"min\": 0.0, \"25%\": 1.0, \"50%\": 1.0, \"75%\": 2.0, \"max\": 2.0}, \"caa\": {\"count\": 303.0, \"mean\": 0.7293729372937293, \"std\": 1.022606364969327, \"min\": 0.0, \"25%\": 0.0, \"50%\": 0.0, \"75%\": 1.0, \"max\": 4.0}, \"thall\": {\"count\": 303.0, \"mean\": 2.3135313531353137, \"std\": 0.6122765072781409, \"min\": 0.0, \"25%\": 2.0, \"50%\": 2.0, \"75%\": 3.0, \"max\": 3.0}, \"output\": {\"count\": 303.0, \"mean\": 0.5445544554455446, \"std\": 0.4988347841643913, \"min\": 0.0, \"25%\": 0.0, \"50%\": 1.0, \"75%\": 1.0, \"max\": 1.0}}", "examples": 
"{\"age\":{\"0\":63,\"1\":37,\"2\":41,\"3\":56},\"sex\":{\"0\":1,\"1\":1,\"2\":0,\"3\":1},\"cp\":{\"0\":3,\"1\":2,\"2\":1,\"3\":1},\"trtbps\":{\"0\":145,\"1\":130,\"2\":130,\"3\":120},\"chol\":{\"0\":233,\"1\":250,\"2\":204,\"3\":236},\"fbs\":{\"0\":1,\"1\":0,\"2\":0,\"3\":0},\"restecg\":{\"0\":0,\"1\":1,\"2\":0,\"3\":1},\"thalachh\":{\"0\":150,\"1\":187,\"2\":172,\"3\":178},\"exng\":{\"0\":0,\"1\":0,\"2\":0,\"3\":0},\"oldpeak\":{\"0\":2.3,\"1\":3.5,\"2\":1.4,\"3\":0.8},\"slp\":{\"0\":0,\"1\":0,\"2\":2,\"3\":2},\"caa\":{\"0\":0,\"1\":0,\"2\":0,\"3\":0},\"thall\":{\"0\":1,\"1\":2,\"2\":2,\"3\":2},\"output\":{\"0\":1,\"1\":1,\"2\":1,\"3\":1}}"}}]
| true | 1 |
<start_data_description><data_path>heart-attack-analysis-prediction-dataset/heart.csv:
<column_names>
['age', 'sex', 'cp', 'trtbps', 'chol', 'fbs', 'restecg', 'thalachh', 'exng', 'oldpeak', 'slp', 'caa', 'thall', 'output']
<column_types>
{'age': 'int64', 'sex': 'int64', 'cp': 'int64', 'trtbps': 'int64', 'chol': 'int64', 'fbs': 'int64', 'restecg': 'int64', 'thalachh': 'int64', 'exng': 'int64', 'oldpeak': 'float64', 'slp': 'int64', 'caa': 'int64', 'thall': 'int64', 'output': 'int64'}
<dataframe_Summary>
{'age': {'count': 303.0, 'mean': 54.366336633663366, 'std': 9.082100989837857, 'min': 29.0, '25%': 47.5, '50%': 55.0, '75%': 61.0, 'max': 77.0}, 'sex': {'count': 303.0, 'mean': 0.6831683168316832, 'std': 0.46601082333962385, 'min': 0.0, '25%': 0.0, '50%': 1.0, '75%': 1.0, 'max': 1.0}, 'cp': {'count': 303.0, 'mean': 0.966996699669967, 'std': 1.0320524894832985, 'min': 0.0, '25%': 0.0, '50%': 1.0, '75%': 2.0, 'max': 3.0}, 'trtbps': {'count': 303.0, 'mean': 131.62376237623764, 'std': 17.5381428135171, 'min': 94.0, '25%': 120.0, '50%': 130.0, '75%': 140.0, 'max': 200.0}, 'chol': {'count': 303.0, 'mean': 246.26402640264027, 'std': 51.83075098793003, 'min': 126.0, '25%': 211.0, '50%': 240.0, '75%': 274.5, 'max': 564.0}, 'fbs': {'count': 303.0, 'mean': 0.1485148514851485, 'std': 0.35619787492797644, 'min': 0.0, '25%': 0.0, '50%': 0.0, '75%': 0.0, 'max': 1.0}, 'restecg': {'count': 303.0, 'mean': 0.528052805280528, 'std': 0.525859596359298, 'min': 0.0, '25%': 0.0, '50%': 1.0, '75%': 1.0, 'max': 2.0}, 'thalachh': {'count': 303.0, 'mean': 149.64686468646866, 'std': 22.905161114914094, 'min': 71.0, '25%': 133.5, '50%': 153.0, '75%': 166.0, 'max': 202.0}, 'exng': {'count': 303.0, 'mean': 0.32673267326732675, 'std': 0.4697944645223165, 'min': 0.0, '25%': 0.0, '50%': 0.0, '75%': 1.0, 'max': 1.0}, 'oldpeak': {'count': 303.0, 'mean': 1.0396039603960396, 'std': 1.1610750220686348, 'min': 0.0, '25%': 0.0, '50%': 0.8, '75%': 1.6, 'max': 6.2}, 'slp': {'count': 303.0, 'mean': 1.3993399339933994, 'std': 0.6162261453459619, 'min': 0.0, '25%': 1.0, '50%': 1.0, '75%': 2.0, 'max': 2.0}, 'caa': {'count': 303.0, 'mean': 0.7293729372937293, 'std': 1.022606364969327, 'min': 0.0, '25%': 0.0, '50%': 0.0, '75%': 1.0, 'max': 4.0}, 'thall': {'count': 303.0, 'mean': 2.3135313531353137, 'std': 0.6122765072781409, 'min': 0.0, '25%': 2.0, '50%': 2.0, '75%': 3.0, 'max': 3.0}, 'output': {'count': 303.0, 'mean': 0.5445544554455446, 'std': 0.4988347841643913, 'min': 0.0, '25%': 0.0, '50%': 1.0, '75%': 1.0, 'max': 1.0}}
<dataframe_info>
RangeIndex: 303 entries, 0 to 302
Data columns (total 14 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 age 303 non-null int64
1 sex 303 non-null int64
2 cp 303 non-null int64
3 trtbps 303 non-null int64
4 chol 303 non-null int64
5 fbs 303 non-null int64
6 restecg 303 non-null int64
7 thalachh 303 non-null int64
8 exng 303 non-null int64
9 oldpeak 303 non-null float64
10 slp 303 non-null int64
11 caa 303 non-null int64
12 thall 303 non-null int64
13 output 303 non-null int64
dtypes: float64(1), int64(13)
memory usage: 33.3 KB
<some_examples>
{'age': {'0': 63, '1': 37, '2': 41, '3': 56}, 'sex': {'0': 1, '1': 1, '2': 0, '3': 1}, 'cp': {'0': 3, '1': 2, '2': 1, '3': 1}, 'trtbps': {'0': 145, '1': 130, '2': 130, '3': 120}, 'chol': {'0': 233, '1': 250, '2': 204, '3': 236}, 'fbs': {'0': 1, '1': 0, '2': 0, '3': 0}, 'restecg': {'0': 0, '1': 1, '2': 0, '3': 1}, 'thalachh': {'0': 150, '1': 187, '2': 172, '3': 178}, 'exng': {'0': 0, '1': 0, '2': 0, '3': 0}, 'oldpeak': {'0': 2.3, '1': 3.5, '2': 1.4, '3': 0.8}, 'slp': {'0': 0, '1': 0, '2': 2, '3': 2}, 'caa': {'0': 0, '1': 0, '2': 0, '3': 0}, 'thall': {'0': 1, '1': 2, '2': 2, '3': 2}, 'output': {'0': 1, '1': 1, '2': 1, '3': 1}}
<end_description>
| 427 | 0 | 2,122 | 427 |
129968883
|
import pandas as pd
df = pd.read_csv(
"https://raw.githubusercontent.com/andre-marcos-perez/ebac-course-utils/develop/dataset/credito.csv",
na_values="na",
)
df.head(n=10)
# Number of rows and columns
df.shape
# Checking the number of non-defaulting clients
df[df["default"] == 0].shape
# Checking the number of defaulting clients
df[df["default"] == 1].shape
qtd_total, _ = df.shape
qtd_adimplentes, _ = df[df["default"] == 0].shape
qtd_inadimplentes, _ = df[df["default"] == 1].shape
print(
    f"The proportion of non-defaulting clients is {round(100 * qtd_adimplentes/qtd_total, 2)}%"
)
print(
    f"The proportion of defaulting clients is {round(100 * qtd_inadimplentes/qtd_total, 2)}%"
)
df.head(n=5)
# Inspecting the data type of each column
df.dtypes
# The data types of the "limite_credito" and "valor_transacoes_12m" columns need to be corrected
df.select_dtypes("object").describe().transpose()
# Handle the fields that were not filled in
df.drop("id", axis=1).select_dtypes("number").describe().transpose()
df.isna().any()
# Missing data in the "escolaridade", "estado_civil" and "salario_anual" columns
def stats_dados_faltantes(df: pd.DataFrame) -> None:
    """Print, for each column with missing values, the count and percentage of missing entries."""
    stats_dados_faltantes = []
    for col in df.columns:
        if df[col].isna().any():
            qtd, _ = df[df[col].isna()].shape
            total, _ = df.shape
            dict_dados_faltantes = {
                col: {"quantidade": qtd, "porcentagem": round(100 * qtd / total, 2)}
            }
            stats_dados_faltantes.append(dict_dados_faltantes)
    for stat in stats_dados_faltantes:
        print(stat)
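# Hedged usage example of the helper above (the call is not shown in this excerpt of the notebook):
stats_dados_faltantes(df)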
# this cell was raising an error; helper to convert pt-BR formatted numbers (thousands separator "." and decimal ",") to float
fn = lambda valor: float(valor.replace(".", "").replace(",", "."))
valores_originais = ["12.691,51", "8.256,96", "3.418,56", "3.313,03", "4.716,22"]
valores_limpos = list(map(fn, valores_originais))
print(valores_originais)
print(valores_limpos)
df[["limite_credito", "valor_transacoes_12m"]].dtypes
# this was raising an error; note that re-running it once the columns are already float will fail, since .replace() only exists on strings
df["valor_transacoes_12m"] = df["valor_transacoes_12m"].apply(fn)
df["limite_credito"] = df["limite_credito"].apply(fn)
|
/fsx/loubna/kaggle_data/kaggle-code-data/data/0129/968/129968883.ipynb
| null | null |
[{"Id": 129968883, "ScriptId": 38280929, "ParentScriptVersionId": NaN, "ScriptLanguageId": 9, "AuthorUserId": 14679809, "CreationDate": "05/17/2023 19:45:05", "VersionNumber": 1.0, "Title": "Projeto EBAC", "EvaluationDate": "05/17/2023", "IsChange": true, "TotalLines": 67.0, "LinesInsertedFromPrevious": 67.0, "LinesChangedFromPrevious": 0.0, "LinesUnchangedFromPrevious": 0.0, "LinesInsertedFromFork": NaN, "LinesDeletedFromFork": NaN, "LinesChangedFromFork": NaN, "LinesUnchangedFromFork": NaN, "TotalVotes": 0}]
| null | null | null | null |
import pandas as pd
df = pd.read_csv(
"https://raw.githubusercontent.com/andre-marcos-perez/ebac-course-utils/develop/dataset/credito.csv",
na_values="na",
)
df.head(n=10)
# Number of rows and columns
df.shape
# Checking the number of non-defaulting clients
df[df["default"] == 0].shape
# Checking the number of defaulting clients
df[df["default"] == 1].shape
qtd_total, _ = df.shape
qtd_adimplentes, _ = df[df["default"] == 0].shape
qtd_inadimplentes, _ = df[df["default"] == 1].shape
print(
    f"The proportion of non-defaulting clients is {round(100 * qtd_adimplentes/qtd_total, 2)}%"
)
print(
    f"The proportion of defaulting clients is {round(100 * qtd_inadimplentes/qtd_total, 2)}%"
)
df.head(n=5)
# Inspecting the data type of each column
df.dtypes
# The data types of the "limite_credito" and "valor_transacoes_12m" columns need to be corrected
df.select_dtypes("object").describe().transpose()
# Handle the fields that were not filled in
df.drop("id", axis=1).select_dtypes("number").describe().transpose()
df.isna().any()
# Missing data in the "escolaridade", "estado_civil" and "salario_anual" columns
def stats_dados_faltantes(df: pd.DataFrame) -> None:
    """Print, for each column with missing values, the count and percentage of missing entries."""
    stats_dados_faltantes = []
    for col in df.columns:
        if df[col].isna().any():
            qtd, _ = df[df[col].isna()].shape
            total, _ = df.shape
            dict_dados_faltantes = {
                col: {"quantidade": qtd, "porcentagem": round(100 * qtd / total, 2)}
            }
            stats_dados_faltantes.append(dict_dados_faltantes)
    for stat in stats_dados_faltantes:
        print(stat)
# this cell was raising an error; helper to convert pt-BR formatted numbers (thousands separator "." and decimal ",") to float
fn = lambda valor: float(valor.replace(".", "").replace(",", "."))
valores_originais = ["12.691,51", "8.256,96", "3.418,56", "3.313,03", "4.716,22"]
valores_limpos = list(map(fn, valores_originais))
print(valores_originais)
print(valores_limpos)
df[["limite_credito", "valor_transacoes_12m"]].dtypes
# this was raising an error; note that re-running it once the columns are already float will fail, since .replace() only exists on strings
df["valor_transacoes_12m"] = df["valor_transacoes_12m"].apply(fn)
df["limite_credito"] = df["limite_credito"].apply(fn)
| false | 0 | 753 | 0 | 753 | 753 |
||
129448694
|
<jupyter_start><jupyter_text>IQ-OTH/NCCD - Lung Cancer Dataset
### About
The Iraq-Oncology Teaching Hospital/National Center for Cancer Diseases (IQ-OTH/NCCD) lung cancer dataset was collected in the above-mentioned specialist hospitals over a period of three months in fall 2019. It includes CT scans of patients diagnosed with lung cancer in different stages, as well as healthy subjects. IQ-OTH/NCCD slides were marked by oncologists and radiologists in these two centers. The dataset contains a total of 1190 images representing CT scan slices of 110 cases (see Figure 1). These cases are grouped into three classes: normal, benign, and malignant. of these, 40 cases are diagnosed as malignant; 15 cases diagnosed with benign, and 55 cases classified as normal cases. The CT scans were originally collected in DICOM format. The scanner used is SOMATOM from Siemens. CT protocol includes: 120 kV, slice thickness of 1 mm, with window width ranging from 350 to 1200 HU a and window center from 50 to 600 were used for reading. with breath-hold at full inspiration. All images were de-identified before performing analysis. Written consent was waived by the oversight review board. The study was approved by the institutional review board of participating medical centers. Each scan contains several slices. The number of these slices range from 80 to 200 slices, each of them represents an image of the human chest with different sides and angles. The 110 cases vary in gender, age, educational attainment, area of residence, and living status. Some of them are employees of the Iraqi ministries of Transport and Oil, others are farmers and gainers. Most of them come from places in the middle region of Iraq, particularly, the provinces of Baghdad, Wasit, Diyala, Salahuddin, and Babylon.
Kaggle dataset identifier: iqothnccd-lung-cancer-dataset
<jupyter_script>import numpy as np # linear algebra
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)
import cv2
# Input data files are available in the read-only "../input/" directory
# For example, running this (by clicking run or pressing Shift+Enter) will list all files under the input directory
XData = []
yData = []
XBenign = []
import os
for dirname, _, filenames in os.walk(
"/kaggle/input/iqothnccd-lung-cancer-dataset/The IQ-OTHNCCD lung cancer dataset/The IQ-OTHNCCD lung cancer dataset/"
):
for filename in filenames:
if filename[-3:] != "txt":
category = " ".join(filename.split()[:2])
img = cv2.imread(os.path.join(dirname, filename))
img = cv2.resize(img, (512, 512))
img = img / 255
if category != "Bengin case":
XData.append(img)
yData.append(category)
else:
XBenign.append(img)
print(len(filenames))
# You can write up to 20GB to the current directory (/kaggle/working/) that gets preserved as output when you create a version using "Save & Run All"
# You can also write temporary files to /kaggle/temp/, but they won't be saved outside of the current session
# perform data augmentation
from keras.preprocessing.image import ImageDataGenerator
len(XBenign)
np.unique(yData, return_counts=True)
datagen = ImageDataGenerator(width_shift_range=0.2, height_shift_range=0.3)
x = np.array(XBenign)
x.shape
datagen.fit(x)
len(XBenign)
from tqdm import tqdm
# Generate shifted copies of the benign scans until that class reaches 510 samples (the 120 original slices plus 390 augmented ones)
for batch, i in zip(datagen.flow(x, batch_size=1), tqdm(range(510 - 120))):
    if i >= (510 - 120):
        break
    XBenign.append(np.squeeze(batch, axis=0))
del x
len(XBenign)
yBenign = ["Bengin case" for i in range(510)]
XData.extend(XBenign)
yData.extend(yBenign)
len(XData), len(yData)
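# Hedged sanity check (not in the original): confirm the class counts after augmenting the benign cases
print(np.unique(yData, return_counts=True))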
del yBenign
XData = np.array(XData)
XData.shape
from sklearn.preprocessing import LabelEncoder
encoder = LabelEncoder()
encoder.fit(yData)
yEncoded = encoder.transform(yData)
encoder.inverse_transform([0, 1, 2])
yEncoded[:10]
from sklearn.model_selection import train_test_split
Xtrain, Xtest, ytrain, ytest = train_test_split(XData, yEncoded, shuffle=True)
Xtrain.shape, Xtest.shape, ytrain.shape, ytest.shape
import tensorflow as tf
model = tf.keras.models.Sequential(
[
tf.keras.layers.Conv2D(
32, (3, 3), activation="relu", input_shape=(512, 512, 3)
),
tf.keras.layers.MaxPooling2D((2, 2)),
tf.keras.layers.Conv2D(64, (3, 3), activation="relu"),
tf.keras.layers.MaxPooling2D((2, 2)),
tf.keras.layers.Conv2D(64, (3, 3), activation="relu"),
tf.keras.layers.MaxPooling2D((2, 2)),
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(128, activation="relu"),
tf.keras.layers.Dense(256, activation="relu"),
tf.keras.layers.Dense(3, activation="softmax"),
]
)
model.compile(
optimizer="adam",
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=False),  # the final Dense layer already applies softmax, so the outputs are probabilities rather than logits
metrics=["accuracy"],
)
callback = tf.keras.callbacks.EarlyStopping(patience=7)
history = model.fit(
Xtrain, ytrain, epochs=15, validation_data=(Xtest, ytest), callbacks=[callback]
)
model.save("lung cancer.h5")
|
/fsx/loubna/kaggle_data/kaggle-code-data/data/0129/448/129448694.ipynb
|
iqothnccd-lung-cancer-dataset
|
adityamahimkar
|
[{"Id": 129448694, "ScriptId": 38414507, "ParentScriptVersionId": NaN, "ScriptLanguageId": 9, "AuthorUserId": 7919983, "CreationDate": "05/13/2023 23:17:08", "VersionNumber": 3.0, "Title": "Lung Cancer Classification 99% on train set", "EvaluationDate": "05/13/2023", "IsChange": false, "TotalLines": 132.0, "LinesInsertedFromPrevious": 0.0, "LinesChangedFromPrevious": 0.0, "LinesUnchangedFromPrevious": 132.0, "LinesInsertedFromFork": NaN, "LinesDeletedFromFork": NaN, "LinesChangedFromFork": NaN, "LinesUnchangedFromFork": NaN, "TotalVotes": 0}]
|
[{"Id": 185507214, "KernelVersionId": 129448694, "SourceDatasetVersionId": 2882784}]
|
[{"Id": 2882784, "DatasetId": 1748489, "DatasourceVersionId": 2929798, "CreatorUserId": 1964952, "LicenseName": "Data files \u00a9 Original Authors", "CreationDate": "12/03/2021 14:51:21", "VersionNumber": 2.0, "Title": "IQ-OTH/NCCD - Lung Cancer Dataset", "Slug": "iqothnccd-lung-cancer-dataset", "Subtitle": "Includes CT scans of patients diagnosed with Lung Cancer.", "Description": "### About\n\nThe Iraq-Oncology Teaching Hospital/National Center for Cancer Diseases (IQ-OTH/NCCD) lung cancer dataset was collected in the above-mentioned specialist hospitals over a period of three months in fall 2019. It includes CT scans of patients diagnosed with lung cancer in different stages, as well as healthy subjects. IQ-OTH/NCCD slides were marked by oncologists and radiologists in these two centers. The dataset contains a total of 1190 images representing CT scan slices of 110 cases (see Figure 1). These cases are grouped into three classes: normal, benign, and malignant. of these, 40 cases are diagnosed as malignant; 15 cases diagnosed with benign, and 55 cases classified as normal cases. The CT scans were originally collected in DICOM format. The scanner used is SOMATOM from Siemens. CT protocol includes: 120 kV, slice thickness of 1 mm, with window width ranging from 350 to 1200 HU a and window center from 50 to 600 were used for reading. with breath-hold at full inspiration. All images were de-identified before performing analysis. Written consent was waived by the oversight review board. The study was approved by the institutional review board of participating medical centers. Each scan contains several slices. The number of these slices range from 80 to 200 slices, each of them represents an image of the human chest with different sides and angles. The 110 cases vary in gender, age, educational attainment, area of residence, and living status. Some of them are employees of the Iraqi ministries of Transport and Oil, others are farmers and gainers. Most of them come from places in the middle region of Iraq, particularly, the provinces of Baghdad, Wasit, Diyala, Salahuddin, and Babylon.\n\n### Acknowledgements\n\nThe data is been collected from Mendeley Data Publication, thanks to the authors who authored the dataset, AL-Huseiny, Muayed; alyasriy, hamdalla. \nCitation: `alyasriy, hamdalla; AL-Huseiny, Muayed (2021), \u201cThe IQ-OTHNCCD lung cancer dataset\u201d, Mendeley Data, V2, doi: 10.17632/bhmdr45bh2.2`", "VersionNotes": "Test Data Added", "TotalCompressedBytes": 0.0, "TotalUncompressedBytes": 0.0}]
|
[{"Id": 1748489, "CreatorUserId": 1964952, "OwnerUserId": 1964952.0, "OwnerOrganizationId": NaN, "CurrentDatasetVersionId": 2882784.0, "CurrentDatasourceVersionId": 2929798.0, "ForumId": 1770580, "Type": 2, "CreationDate": "11/26/2021 15:12:35", "LastActivityDate": "11/26/2021", "TotalViews": 19960, "TotalDownloads": 2877, "TotalVotes": 36, "TotalKernels": 8}]
|
[{"Id": 1964952, "UserName": "adityamahimkar", "DisplayName": "Aditya Mahimkar", "RegisterDate": "06/04/2018", "PerformanceTier": 2}]
|
import numpy as np # linear algebra
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)
import cv2
# Input data files are available in the read-only "../input/" directory
# For example, running this (by clicking run or pressing Shift+Enter) will list all files under the input directory
XData = []
yData = []
XBenign = []
import os
for dirname, _, filenames in os.walk(
"/kaggle/input/iqothnccd-lung-cancer-dataset/The IQ-OTHNCCD lung cancer dataset/The IQ-OTHNCCD lung cancer dataset/"
):
for filename in filenames:
if filename[-3:] != "txt":
category = " ".join(filename.split()[:2])
img = cv2.imread(os.path.join(dirname, filename))
img = cv2.resize(img, (512, 512))
img = img / 255
if category != "Bengin case":
XData.append(img)
yData.append(category)
else:
XBenign.append(img)
print(len(filenames))
# You can write up to 20GB to the current directory (/kaggle/working/) that gets preserved as output when you create a version using "Save & Run All"
# You can also write temporary files to /kaggle/temp/, but they won't be saved outside of the current session
# perform data augmentation
from keras.preprocessing.image import ImageDataGenerator
len(XBenign)
np.unique(yData, return_counts=True)
datagen = ImageDataGenerator(width_shift_range=0.2, height_shift_range=0.3)
x = np.array(XBenign)
x.shape
datagen.fit(x)
len(XBenign)
from tqdm import tqdm
for batch, i in zip(datagen.flow(x, batch_size=1), tqdm(range(510 - 120))):
if i >= (510 - 120):
break
XBenign.append(np.squeeze(batch, axis=0))
del x
len(XBenign)
yBenign = ["Bengin case" for i in range(510)]
XData.extend(XBenign)
yData.extend(yBenign)
len(XData), len(yData)
del yBenign
XData = np.array(XData)
XData.shape
from sklearn.preprocessing import LabelEncoder
encoder = LabelEncoder()
encoder.fit(yData)
yEncoded = encoder.transform(yData)
encoder.inverse_transform([0, 1, 2])
yEncoded[:10]
from sklearn.model_selection import train_test_split
Xtrain, Xtest, ytrain, ytest = train_test_split(XData, yEncoded, shuffle=True)
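# with no test_size given, train_test_split falls back to its default 75/25 split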
Xtrain.shape, Xtest.shape, ytrain.shape, ytest.shape
import tensorflow as tf
model = tf.keras.models.Sequential(
[
tf.keras.layers.Conv2D(
32, (3, 3), activation="relu", input_shape=(512, 512, 3)
),
tf.keras.layers.MaxPooling2D((2, 2)),
tf.keras.layers.Conv2D(64, (3, 3), activation="relu"),
tf.keras.layers.MaxPooling2D((2, 2)),
tf.keras.layers.Conv2D(64, (3, 3), activation="relu"),
tf.keras.layers.MaxPooling2D((2, 2)),
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(128, activation="relu"),
tf.keras.layers.Dense(256, activation="relu"),
tf.keras.layers.Dense(3, activation="softmax"),
]
)
model.compile(
optimizer="adam",
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=False),  # the softmax layer already outputs probabilities, not logits
metrics=["accuracy"],
)
callback = tf.keras.callbacks.EarlyStopping(patience=7)
history = model.fit(
Xtrain, ytrain, epochs=15, validation_data=(Xtest, ytest), callbacks=[callback]
)
model.save("lung cancer.h5")
| false | 0 | 1,044 | 0 | 1,548 | 1,044 |
||
129448808
|
<jupyter_start><jupyter_text>Interesting Data to Visualize
Dataset for Kaggle's [Data Visualization](https://www.kaggle.com/learn/data-visualization) course
Kaggle dataset identifier: data-for-datavis
<jupyter_script>import pandas as pd
pd.plotting.register_matplotlib_converters()
import matplotlib.pyplot as plt
import seaborn as sns
# # CSV - "comma-separated values"
# most common way of storing data
# index,name,age,gender,
# 0,"bob",42,"M",
# 1,"bobby",54,"M",
# ...
filepath = "/kaggle/input/data-for-datavis/fifa.csv"
# now create the data frame
dt = pd.read_csv(filepath, index_col="Date", parse_dates=True)
dt.head(15)
dt.tail(10)
# creating the graph
plt.figure(figsize=(16, 6))
sns.lineplot(data=dt)
fp = "/kaggle/input/data-for-datavis/spotify.csv"
data = pd.read_csv(fp, index_col="Date", parse_dates=True)
data.head()
# NaN = "Not a Number"
plt.figure(figsize=(14, 6))
sns.lineplot(data=data)
plt.title("Daily Streams of 5 Songs Over the Span of 1 Year")
list(data.columns)
plt.figure(figsize=(14, 6))
plt.title("Subset Graph of Daily Streams of Songs")
sns.lineplot(data=data["Shape of You"], label="Shape of You")
sns.lineplot(data=data["Despacito"], label="Despacito")
plt.ylabel("Streams in 10s of millions")
plt.xlabel("Date")
data["Shape of You"].head()
# BAR CHARTS
fp = "/kaggle/input/data-for-datavis/flight_delays.csv"
df = pd.read_csv(fp, index_col="Month")
df.head(10)
plt.figure(figsize=(10, 6))
plt.title("Average Delay for Spirit Flights by Month")
sns.barplot(x=df.index, y=df["NK"])
plt.ylabel("Average Arrial Delay in Minutes")
plt.figure(figsize=(10, 6))
plt.title("Average Delay for JetBlue Flights by Month")
sns.barplot(x=df.index, y=df["B6"])
plt.ylabel("Average Arrial Delay in Minutes")
# HEATMAPS
plt.figure(figsize=(14, 7))
plt.title("Average Delay time for All Airlines by Month")
sns.heatmap(data=df, annot=True)
plt.xlabel("Airline Name")
# SCATTERPLOT
fp = "/kaggle/input/data-for-datavis/insurance.csv"
df = pd.read_csv(fp)
df.head()
sns.scatterplot(x=df["bmi"], y=df["charges"])
|
/fsx/loubna/kaggle_data/kaggle-code-data/data/0129/448/129448808.ipynb
|
data-for-datavis
|
alexisbcook
|
[{"Id": 129448808, "ScriptId": 37919259, "ParentScriptVersionId": NaN, "ScriptLanguageId": 9, "AuthorUserId": 4135615, "CreationDate": "05/13/2023 23:19:40", "VersionNumber": 2.0, "Title": "GINA_DATAVIS", "EvaluationDate": "05/13/2023", "IsChange": false, "TotalLines": 75.0, "LinesInsertedFromPrevious": 0.0, "LinesChangedFromPrevious": 0.0, "LinesUnchangedFromPrevious": 75.0, "LinesInsertedFromFork": NaN, "LinesDeletedFromFork": NaN, "LinesChangedFromFork": NaN, "LinesUnchangedFromFork": NaN, "TotalVotes": 3}]
|
[{"Id": 185507492, "KernelVersionId": 129448808, "SourceDatasetVersionId": 3551030}]
|
[{"Id": 3551030, "DatasetId": 116573, "DatasourceVersionId": 3604079, "CreatorUserId": 2603295, "LicenseName": "Unknown", "CreationDate": "04/29/2022 20:37:30", "VersionNumber": 2.0, "Title": "Interesting Data to Visualize", "Slug": "data-for-datavis", "Subtitle": "For Kaggle's Data Visualization Course", "Description": "Dataset for Kaggle's [Data Visualization](https://www.kaggle.com/learn/data-visualization) course", "VersionNotes": "Data Update 2022/04/29", "TotalCompressedBytes": 0.0, "TotalUncompressedBytes": 0.0}]
|
[{"Id": 116573, "CreatorUserId": 2603295, "OwnerUserId": 2603295.0, "OwnerOrganizationId": NaN, "CurrentDatasetVersionId": 3551030.0, "CurrentDatasourceVersionId": 3604079.0, "ForumId": 126445, "Type": 2, "CreationDate": "02/06/2019 18:20:07", "LastActivityDate": "02/06/2019", "TotalViews": 101625, "TotalDownloads": 64292, "TotalVotes": 254, "TotalKernels": 2106}]
|
[{"Id": 2603295, "UserName": "alexisbcook", "DisplayName": "Alexis Cook", "RegisterDate": "12/11/2018", "PerformanceTier": 4}]
|
import pandas as pd
pd.plotting.register_matplotlib_converters()
import matplotlib.pyplot as plt
import seaborn as sns
# # CSV - "comma-separated values"
# most common way of storing data
# index,name,age,gender,
# 0,"bob",42,"M",
# 1,"bobby",54,"M",
# ...
filepath = "/kaggle/input/data-for-datavis/fifa.csv"
# now create the data frame
dt = pd.read_csv(filepath, index_col="Date", parse_dates=True)
dt.head(15)
dt.tail(10)
# creating the graph
plt.figure(figsize=(16, 6))
sns.lineplot(data=dt)
fp = "/kaggle/input/data-for-datavis/spotify.csv"
data = pd.read_csv(fp, index_col="Date", parse_dates=True)
data.head()
# NaN = "Not a Number"
plt.figure(figsize=(14, 6))
sns.lineplot(data=data)
plt.title("Daily Streams of 5 Songs Over the Span of 1 Year")
list(data.columns)
plt.figure(figsize=(14, 6))
plt.title("Subset Graph of Daily Streams of Songs")
sns.lineplot(data=data["Shape of You"], label="Shape of You")
sns.lineplot(data=data["Despacito"], label="Despacito")
plt.ylabel("Streams in 10s of millions")
plt.xlabel("Date")
data["Shape of You"].head()
# BAR CHARTS
fp = "/kaggle/input/data-for-datavis/flight_delays.csv"
df = pd.read_csv(fp, index_col="Month")
df.head(10)
plt.figure(figsize=(10, 6))
plt.title("Average Delay for Spirit Flights by Month")
sns.barplot(x=df.index, y=df["NK"])
plt.ylabel("Average Arrial Delay in Minutes")
plt.figure(figsize=(10, 6))
plt.title("Average Delay for JetBlue Flights by Month")
sns.barplot(x=df.index, y=df["B6"])
plt.ylabel("Average Arrial Delay in Minutes")
# HEATMAPS
plt.figure(figsize=(14, 7))
plt.title("Average Delay time for All Airlines by Month")
sns.heatmap(data=df, annot=True)
plt.xlabel("Airline Name")
# SCATTERPLOT
fp = "/kaggle/input/data-for-datavis/insurance.csv"
df = pd.read_csv(fp)
df.head()
sns.scatterplot(x=df["bmi"], y=df["charges"])
| false | 0 | 678 | 3 | 730 | 678 |
||
129448701
|
<jupyter_start><jupyter_text>FER-2013
The data consists of 48x48 pixel grayscale images of faces. The faces have been automatically registered so that the face is more or less centred and occupies about the same amount of space in each image.
The task is to categorize each face based on the emotion shown in the facial expression into one of seven categories (0=Angry, 1=Disgust, 2=Fear, 3=Happy, 4=Sad, 5=Surprise, 6=Neutral). The training set consists of 28,709 examples and the public test set consists of 3,589 examples.
Kaggle dataset identifier: fer2013
<jupyter_script>import numpy as np # linear algebra
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)
import os
# for dirname, _, filenames in os.walk('/kaggle/input'):
# for filename in filenames:
# print(os.path.join(dirname, filename))
import matplotlib.pyplot as plt
import seaborn as sns
import tensorflow as tf
# from keras.preprocessing.image import ImageDataGenerator, load_img
import keras
from keras.preprocessing import image
from keras.models import Sequential
from keras.layers import (
Conv2D,
MaxPooling2D,
Flatten,
Dense,
Dropout,
BatchNormalization,
)
from tensorflow.keras.preprocessing.image import ImageDataGenerator
import cv2
from tensorflow.keras.applications import VGG16, InceptionResNetV2
from keras import regularizers
from keras.utils import plot_model
from keras.optimizers import Adam, RMSprop, SGD
from keras.callbacks import (
ModelCheckpoint,
CSVLogger,
TensorBoard,
EarlyStopping,
ReduceLROnPlateau,
)
import datetime
train_dir = "/kaggle/input/fer2013/train/"
test_dir = "/kaggle/input/fer2013/test/"
row, col = 48, 48
classes = 7
def count_exp(path, set_):
dict_ = {}
for expression in os.listdir(path):
dir_ = path + expression
dict_[expression] = len(os.listdir(dir_))
df = pd.DataFrame(dict_, index=[set_])
return df
train_count = count_exp(train_dir, "train")
test_count = count_exp(test_dir, "test")
print(train_count)
print(test_count)
train_count.transpose().plot(kind="bar")
test_count.transpose().plot(kind="bar")
plt.figure(figsize=(14, 22))
i = 1
for expression in os.listdir(train_dir):
img = cv2.imread(
(train_dir + expression + "/" + os.listdir(train_dir + expression)[1])
)
plt.subplot(1, 7, i)
plt.imshow(img)
plt.title(expression)
plt.axis("off")
i += 1
plt.show()
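# Build the input pipelines: both generators rescale pixel values to [0, 1], and the
# training generator additionally applies random zoom (up to 30%) and horizontal
# flips as light data augmentation.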
train_datagen = ImageDataGenerator(
rescale=1.0 / 255, zoom_range=0.3, horizontal_flip=True
)
training_set = train_datagen.flow_from_directory(
train_dir,
batch_size=64,
target_size=(48, 48),
shuffle=True,
color_mode="grayscale",
class_mode="categorical",
)
test_datagen = ImageDataGenerator(rescale=1.0 / 255)
test_set = test_datagen.flow_from_directory(
test_dir,
batch_size=64,
target_size=(48, 48),
shuffle=True,
color_mode="grayscale",
class_mode="categorical",
)
training_set.class_indices
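# Note: flow_from_directory assigns label indices from the alphanumeric order of the
# sub-folder names, which need not match the 0-6 emotion coding quoted in the dataset
# description above, so class_indices is the mapping to rely on.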
def get_model(input_size, classes=7):
# Initialising the CNN
model = tf.keras.models.Sequential()
model.add(
Conv2D(
32,
kernel_size=(3, 3),
padding="same",
activation="relu",
input_shape=input_size,
)
)
model.add(Conv2D(64, kernel_size=(3, 3), activation="relu", padding="same"))
model.add(BatchNormalization())
model.add(MaxPooling2D(2, 2))
model.add(Dropout(0.25))
model.add(
Conv2D(
128,
kernel_size=(3, 3),
activation="relu",
padding="same",
kernel_regularizer=regularizers.l2(0.01),
)
)
model.add(
Conv2D(
256,
kernel_size=(3, 3),
activation="relu",
kernel_regularizer=regularizers.l2(0.01),
)
)
model.add(BatchNormalization())
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.25))
model.add(Flatten())
model.add(Dense(1024, activation="relu"))
model.add(Dropout(0.5))
model.add(Dense(classes, activation="softmax"))
    # Compiling the model
model.compile(
optimizer=Adam(lr=0.0001, decay=1e-6),
loss="categorical_crossentropy",
metrics=["accuracy"],
)
return model
fernet = get_model((row, col, 1), classes)
fernet.summary()
plot_model(fernet, to_file="fernet.png", show_shapes=True, show_layer_names=True)
chk_path = "ferNet.h5"
log_dir = "checkpoint/logs/" + datetime.datetime.now().strftime("%Y%m%d-%H%M%S")
checkpoint = ModelCheckpoint(
    filepath=chk_path, save_best_only=True, verbose=1, mode="min", monitor="val_loss"
)
earlystop = EarlyStopping(
monitor="val_loss", min_delta=0, patience=3, verbose=1, restore_best_weights=True
)
reduce_lr = ReduceLROnPlateau(
monitor="val_loss", factor=0.2, patience=6, verbose=1, min_delta=0.0001
)
tensorboard_callback = tf.keras.callbacks.TensorBoard(log_dir=log_dir, histogram_freq=1)
csv_logger = CSVLogger("training.log")
callbacks = [checkpoint, reduce_lr, csv_logger]
steps_per_epoch = training_set.n // training_set.batch_size
validation_steps = test_set.n // test_set.batch_size
hist = fernet.fit(
x=training_set,
validation_data=test_set,
epochs=60,
callbacks=callbacks,
steps_per_epoch=steps_per_epoch,
validation_steps=validation_steps,
)
fernet.save("fernet.h5")
plt.figure(figsize=(14, 5))
plt.subplot(1, 2, 2)
plt.plot(hist.history["accuracy"])
plt.plot(hist.history["val_accuracy"])
plt.title("Model Accuracy")
plt.xlabel("Epochs")
plt.ylabel("Accuracy")
plt.legend(["train", "test"], loc="upper left")
plt.subplot(1, 2, 1)
plt.plot(hist.history["loss"])
plt.plot(hist.history["val_loss"])
plt.title("model Loss")
plt.xlabel("Epochs")
plt.ylabel("Loss")
plt.legend(["train", "test"], loc="upper left")
plt.show()
def vgg16_model(input_size, classes=7):
# Initialising the CNN
model = tf.keras.models.Sequential()
model.add(
Conv2D(
32,
kernel_size=(3, 3),
padding="same",
activation="relu",
input_shape=input_size,
)
)
model.add(Conv2D(64, kernel_size=(3, 3), activation="relu", padding="same"))
model.add(BatchNormalization())
model.add(MaxPooling2D(2, 2))
model.add(Dropout(0.25))
model.add(
Conv2D(
96,
kernel_size=(3, 3),
padding="same",
activation="relu",
input_shape=input_size,
)
)
model.add(Conv2D(128, kernel_size=(3, 3), activation="relu", padding="same"))
model.add(BatchNormalization())
model.add(
Conv2D(
96,
kernel_size=(3, 3),
padding="same",
activation="relu",
input_shape=input_size,
)
)
model.add(Conv2D(128, kernel_size=(3, 3), activation="relu", padding="same"))
model.add(BatchNormalization())
model.add(MaxPooling2D(2, 2))
model.add(Dropout(0.25))
model.add(
Conv2D(
96,
kernel_size=(3, 3),
padding="same",
activation="relu",
input_shape=input_size,
)
)
model.add(Conv2D(128, kernel_size=(3, 3), activation="relu", padding="same"))
model.add(BatchNormalization())
model.add(MaxPooling2D(2, 2))
model.add(Dropout(0.25))
model.add(
Conv2D(
256,
kernel_size=(3, 3),
activation="relu",
padding="same",
kernel_regularizer=regularizers.l2(0.01),
)
)
model.add(
Conv2D(
512,
kernel_size=(3, 3),
activation="relu",
kernel_regularizer=regularizers.l2(0.01),
)
)
model.add(BatchNormalization())
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.25))
model.add(Flatten())
model.add(Dense(1024, activation="relu"))
model.add(Dropout(0.5))
model.add(Dense(classes, activation="softmax"))
    # Compiling the model
model.compile(
optimizer=Adam(lr=0.0001, decay=1e-6),
loss="categorical_crossentropy",
metrics=["accuracy"],
)
return model
fernet = vgg16_model((row, col, 1), classes)  # use the VGG16-style model defined above
fernet.summary()
plot_model(fernet, to_file="fernet.png", show_shapes=True, show_layer_names=True)
chk_path = "ferNet.h5"
log_dir = "checkpoint/logs/" + datetime.datetime.now().strftime("%Y%m%d-%H%M%S")
checkpoint = ModelCheckpoint(
    filepath=chk_path, save_best_only=True, verbose=1, mode="min", monitor="val_loss"
)
earlystop = EarlyStopping(
monitor="val_loss", min_delta=0, patience=3, verbose=1, restore_best_weights=True
)
reduce_lr = ReduceLROnPlateau(
monitor="val_loss", factor=0.2, patience=6, verbose=1, min_delta=0.0001
)
tensorboard_callback = tf.keras.callbacks.TensorBoard(log_dir=log_dir, histogram_freq=1)
csv_logger = CSVLogger("training.log")
callbacks = [checkpoint, reduce_lr, csv_logger]
steps_per_epoch = training_set.n // training_set.batch_size
validation_steps = test_set.n // test_set.batch_size
hist = fernet.fit(
x=training_set,
validation_data=test_set,
epochs=60,
callbacks=callbacks,
steps_per_epoch=steps_per_epoch,
validation_steps=validation_steps,
)
plt.figure(figsize=(14, 5))
plt.subplot(1, 2, 2)
plt.plot(hist.history["accuracy"])
plt.plot(hist.history["val_accuracy"])
plt.title("Model Accuracy")
plt.xlabel("Epochs")
plt.ylabel("Accuracy")
plt.legend(["train", "test"], loc="upper left")
plt.subplot(1, 2, 1)
plt.plot(hist.history["loss"])
plt.plot(hist.history["val_loss"])
plt.title("model Loss")
plt.xlabel("Epochs")
plt.ylabel("Loss")
plt.legend(["train", "test"], loc="upper left")
plt.show()
def Alex_model(input_size, classes=7):
# Initialising the CNN
model = tf.keras.models.Sequential()
model.add(
Conv2D(
32,
kernel_size=(3, 3),
padding="same",
activation="relu",
input_shape=input_size,
)
)
model.add(Conv2D(64, kernel_size=(3, 3), activation="relu", padding="same"))
model.add(BatchNormalization())
model.add(MaxPooling2D(2, 2))
model.add(Dropout(0.25))
model.add(
Conv2D(
96,
kernel_size=(3, 3),
padding="same",
activation="relu",
input_shape=input_size,
)
)
model.add(BatchNormalization())
model.add(MaxPooling2D(2, 2))
model.add(
Conv2D(
96,
kernel_size=(3, 3),
padding="same",
activation="relu",
input_shape=input_size,
)
)
model.add(Conv2D(128, kernel_size=(3, 3), activation="relu", padding="same"))
model.add(BatchNormalization())
model.add(MaxPooling2D(2, 2))
model.add(Dropout(0.25))
model.add(
Conv2D(
128,
kernel_size=(3, 3),
padding="same",
activation="relu",
input_shape=input_size,
)
)
model.add(BatchNormalization())
model.add(MaxPooling2D(2, 2))
model.add(Dropout(0.25))
model.add(
Conv2D(
256,
kernel_size=(3, 3),
activation="relu",
padding="same",
kernel_regularizer=regularizers.l2(0.01),
)
)
model.add(
Conv2D(
512,
kernel_size=(3, 3),
activation="relu",
kernel_regularizer=regularizers.l2(0.01),
)
)
model.add(BatchNormalization())
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.25))
model.add(Flatten())
model.add(Dense(1024, activation="relu"))
model.add(Dropout(0.5))
model.add(Dense(classes, activation="softmax"))
    # Compiling the model
model.compile(
optimizer=Adam(lr=0.0001, decay=1e-6),
loss="categorical_crossentropy",
metrics=["accuracy"],
)
return model
fernet = Alex_model((row, col, 1), classes)  # use the AlexNet-style model defined above
fernet.summary()
plot_model(fernet, to_file="fernet.png", show_shapes=True, show_layer_names=True)
chk_path = "ferNet.h5"
log_dir = "checkpoint/logs/" + datetime.datetime.now().strftime("%Y%m%d-%H%M%S")
checkpoint = ModelCheckpoint(
    filepath=chk_path, save_best_only=True, verbose=1, mode="min", monitor="val_loss"
)
earlystop = EarlyStopping(
monitor="val_loss", min_delta=0, patience=3, verbose=1, restore_best_weights=True
)
reduce_lr = ReduceLROnPlateau(
monitor="val_loss", factor=0.2, patience=6, verbose=1, min_delta=0.0001
)
tensorboard_callback = tf.keras.callbacks.TensorBoard(log_dir=log_dir, histogram_freq=1)
csv_logger = CSVLogger("training.log")
callbacks = [checkpoint, reduce_lr, csv_logger]
steps_per_epoch = training_set.n // training_set.batch_size
validation_steps = test_set.n // test_set.batch_size
hist = fernet.fit(
x=training_set,
validation_data=test_set,
epochs=60,
callbacks=callbacks,
steps_per_epoch=steps_per_epoch,
validation_steps=validation_steps,
)
plt.figure(figsize=(14, 5))
plt.subplot(1, 2, 2)
plt.plot(hist.history["accuracy"])
plt.plot(hist.history["val_accuracy"])
plt.title("Model Accuracy")
plt.xlabel("Epochs")
plt.ylabel("Accuracy")
plt.legend(["train", "test"], loc="upper left")
plt.subplot(1, 2, 1)
plt.plot(hist.history["loss"])
plt.plot(hist.history["val_loss"])
plt.title("model Loss")
plt.xlabel("Epochs")
plt.ylabel("Loss")
plt.legend(["train", "test"], loc="upper left")
plt.show()
def inception_model(input_size, classes=7):
# Initialising the CNN
model = tf.keras.models.Sequential()
model.add(
Conv2D(
32,
kernel_size=(3, 3),
padding="same",
activation="relu",
input_shape=input_size,
)
)
model.add(Conv2D(64, kernel_size=(3, 3), activation="relu", padding="same"))
model.add(Conv2D(64, kernel_size=(3, 3), activation="relu", padding="same"))
model.add(BatchNormalization())
model.add(MaxPooling2D(2, 2))
model.add(Dropout(0.25))
model.add(
Conv2D(
96,
kernel_size=(3, 3),
padding="same",
activation="relu",
input_shape=input_size,
)
)
model.add(
Conv2D(
96,
kernel_size=(3, 3),
padding="same",
activation="relu",
input_shape=input_size,
)
)
model.add(BatchNormalization())
model.add(MaxPooling2D(2, 2))
model.add(
Conv2D(
96,
kernel_size=(3, 3),
padding="same",
activation="relu",
input_shape=input_size,
)
)
model.add(Conv2D(128, kernel_size=(3, 3), activation="relu", padding="same"))
model.add(Conv2D(128, kernel_size=(3, 3), activation="relu", padding="same"))
model.add(BatchNormalization())
model.add(MaxPooling2D(2, 2))
model.add(Dropout(0.25))
model.add(
Conv2D(
128,
kernel_size=(3, 3),
padding="same",
activation="relu",
input_shape=input_size,
)
)
model.add(BatchNormalization())
model.add(MaxPooling2D(2, 2))
model.add(Dropout(0.25))
model.add(
Conv2D(
256,
kernel_size=(3, 3),
activation="relu",
padding="same",
kernel_regularizer=regularizers.l2(0.01),
)
)
model.add(
Conv2D(
512,
kernel_size=(3, 3),
activation="relu",
kernel_regularizer=regularizers.l2(0.01),
)
)
model.add(BatchNormalization())
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.25))
model.add(Flatten())
model.add(Dense(1024, activation="relu"))
model.add(Dropout(0.5))
model.add(Dense(classes, activation="softmax"))
    # Compiling the model
model.compile(
optimizer=Adam(lr=0.0001, decay=1e-6),
loss="categorical_crossentropy",
metrics=["accuracy"],
)
return model
fernet = inception_model((row, col, 1), classes)  # use the inception-style model defined above
fernet.summary()
plot_model(fernet, to_file="fernet.png", show_shapes=True, show_layer_names=True)
chk_path = "ferNet.h5"
log_dir = "checkpoint/logs/" + datetime.datetime.now().strftime("%Y%m%d-%H%M%S")
checkpoint = ModelCheckpoint(
    filepath=chk_path, save_best_only=True, verbose=1, mode="min", monitor="val_loss"
)
earlystop = EarlyStopping(
monitor="val_loss", min_delta=0, patience=3, verbose=1, restore_best_weights=True
)
reduce_lr = ReduceLROnPlateau(
monitor="val_loss", factor=0.2, patience=6, verbose=1, min_delta=0.0001
)
tensorboard_callback = tf.keras.callbacks.TensorBoard(log_dir=log_dir, histogram_freq=1)
csv_logger = CSVLogger("training.log")
callbacks = [checkpoint, reduce_lr, csv_logger]
steps_per_epoch = training_set.n // training_set.batch_size
validation_steps = test_set.n // test_set.batch_size
hist = fernet.fit(
x=training_set,
validation_data=test_set,
epochs=60,
callbacks=callbacks,
steps_per_epoch=steps_per_epoch,
validation_steps=validation_steps,
)
plt.figure(figsize=(14, 5))
plt.subplot(1, 2, 2)
plt.plot(hist.history["accuracy"])
plt.plot(hist.history["val_accuracy"])
plt.title("Model Accuracy")
plt.xlabel("Epochs")
plt.ylabel("Accuracy")
plt.legend(["train", "test"], loc="upper left")
plt.subplot(1, 2, 1)
plt.plot(hist.history["loss"])
plt.plot(hist.history["val_loss"])
plt.title("model Loss")
plt.xlabel("Epochs")
plt.ylabel("Loss")
plt.legend(["train", "test"], loc="upper left")
plt.show()
|
/fsx/loubna/kaggle_data/kaggle-code-data/data/0129/448/129448701.ipynb
|
fer2013
|
msambare
|
[{"Id": 129448701, "ScriptId": 38237304, "ParentScriptVersionId": NaN, "ScriptLanguageId": 9, "AuthorUserId": 11145910, "CreationDate": "05/13/2023 23:17:18", "VersionNumber": 1.0, "Title": "notebook8f67c3b2ac", "EvaluationDate": "05/13/2023", "IsChange": true, "TotalLines": 485.0, "LinesInsertedFromPrevious": 485.0, "LinesChangedFromPrevious": 0.0, "LinesUnchangedFromPrevious": 0.0, "LinesInsertedFromFork": NaN, "LinesDeletedFromFork": NaN, "LinesChangedFromFork": NaN, "LinesUnchangedFromFork": NaN, "TotalVotes": 1}]
|
[{"Id": 185507221, "KernelVersionId": 129448701, "SourceDatasetVersionId": 1351797}]
|
[{"Id": 1351797, "DatasetId": 786787, "DatasourceVersionId": 1384195, "CreatorUserId": 3187350, "LicenseName": "Database: Open Database, Contents: Database Contents", "CreationDate": "07/19/2020 12:24:26", "VersionNumber": 1.0, "Title": "FER-2013", "Slug": "fer2013", "Subtitle": "Learn facial expressions from an image", "Description": "The data consists of 48x48 pixel grayscale images of faces. The faces have been automatically registered so that the face is more or less centred and occupies about the same amount of space in each image. \n\nThe task is to categorize each face based on the emotion shown in the facial expression into one of seven categories (0=Angry, 1=Disgust, 2=Fear, 3=Happy, 4=Sad, 5=Surprise, 6=Neutral). The training set consists of 28,709 examples and the public test set consists of 3,589 examples.", "VersionNotes": "Initial release", "TotalCompressedBytes": 0.0, "TotalUncompressedBytes": 0.0}]
|
[{"Id": 786787, "CreatorUserId": 3187350, "OwnerUserId": 3187350.0, "OwnerOrganizationId": NaN, "CurrentDatasetVersionId": 1351797.0, "CurrentDatasourceVersionId": 1384195.0, "ForumId": 801807, "Type": 2, "CreationDate": "07/19/2020 12:24:26", "LastActivityDate": "07/19/2020", "TotalViews": 404940, "TotalDownloads": 72694, "TotalVotes": 864, "TotalKernels": 237}]
|
[{"Id": 3187350, "UserName": "msambare", "DisplayName": "Manas Sambare", "RegisterDate": "05/06/2019", "PerformanceTier": 1}]
|
import numpy as np # linear algebra
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)
import os
# for dirname, _, filenames in os.walk('/kaggle/input'):
# for filename in filenames:
# print(os.path.join(dirname, filename))
import matplotlib.pyplot as plt
import seaborn as sns
import tensorflow as tf
# from keras.preprocessing.image import ImageDataGenerator, load_img
import keras
from keras.preprocessing import image
from keras.models import Sequential
from keras.layers import (
Conv2D,
MaxPooling2D,
Flatten,
Dense,
Dropout,
BatchNormalization,
)
from tensorflow.keras.preprocessing.image import ImageDataGenerator
import cv2
from tensorflow.keras.applications import VGG16, InceptionResNetV2
from keras import regularizers
from keras.utils import plot_model
from keras.optimizers import Adam, RMSprop, SGD
from keras.callbacks import (
ModelCheckpoint,
CSVLogger,
TensorBoard,
EarlyStopping,
ReduceLROnPlateau,
)
import datetime
train_dir = "/kaggle/input/fer2013/train/"
test_dir = "/kaggle/input/fer2013/test/"
row, col = 48, 48
classes = 7
def count_exp(path, set_):
dict_ = {}
for expression in os.listdir(path):
dir_ = path + expression
dict_[expression] = len(os.listdir(dir_))
df = pd.DataFrame(dict_, index=[set_])
return df
train_count = count_exp(train_dir, "train")
test_count = count_exp(test_dir, "test")
print(train_count)
print(test_count)
train_count.transpose().plot(kind="bar")
test_count.transpose().plot(kind="bar")
plt.figure(figsize=(14, 22))
i = 1
for expression in os.listdir(train_dir):
img = cv2.imread(
(train_dir + expression + "/" + os.listdir(train_dir + expression)[1])
)
plt.subplot(1, 7, i)
plt.imshow(img)
plt.title(expression)
plt.axis("off")
i += 1
plt.show()
train_datagen = ImageDataGenerator(
rescale=1.0 / 255, zoom_range=0.3, horizontal_flip=True
)
training_set = train_datagen.flow_from_directory(
train_dir,
batch_size=64,
target_size=(48, 48),
shuffle=True,
color_mode="grayscale",
class_mode="categorical",
)
test_datagen = ImageDataGenerator(rescale=1.0 / 255)
test_set = test_datagen.flow_from_directory(
test_dir,
batch_size=64,
target_size=(48, 48),
shuffle=True,
color_mode="grayscale",
class_mode="categorical",
)
training_set.class_indices
def get_model(input_size, classes=7):
# Initialising the CNN
model = tf.keras.models.Sequential()
model.add(
Conv2D(
32,
kernel_size=(3, 3),
padding="same",
activation="relu",
input_shape=input_size,
)
)
model.add(Conv2D(64, kernel_size=(3, 3), activation="relu", padding="same"))
model.add(BatchNormalization())
model.add(MaxPooling2D(2, 2))
model.add(Dropout(0.25))
model.add(
Conv2D(
128,
kernel_size=(3, 3),
activation="relu",
padding="same",
kernel_regularizer=regularizers.l2(0.01),
)
)
model.add(
Conv2D(
256,
kernel_size=(3, 3),
activation="relu",
kernel_regularizer=regularizers.l2(0.01),
)
)
model.add(BatchNormalization())
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.25))
model.add(Flatten())
model.add(Dense(1024, activation="relu"))
model.add(Dropout(0.5))
model.add(Dense(classes, activation="softmax"))
    # Compiling the model
model.compile(
optimizer=Adam(lr=0.0001, decay=1e-6),
loss="categorical_crossentropy",
metrics=["accuracy"],
)
return model
fernet = get_model((row, col, 1), classes)
fernet.summary()
plot_model(fernet, to_file="fernet.png", show_shapes=True, show_layer_names=True)
chk_path = "ferNet.h5"
log_dir = "checkpoint/logs/" + datetime.datetime.now().strftime("%Y%m%d-%H%M%S")
checkpoint = ModelCheckpoint(
    filepath=chk_path, save_best_only=True, verbose=1, mode="min", monitor="val_loss"
)
earlystop = EarlyStopping(
monitor="val_loss", min_delta=0, patience=3, verbose=1, restore_best_weights=True
)
reduce_lr = ReduceLROnPlateau(
monitor="val_loss", factor=0.2, patience=6, verbose=1, min_delta=0.0001
)
tensorboard_callback = tf.keras.callbacks.TensorBoard(log_dir=log_dir, histogram_freq=1)
csv_logger = CSVLogger("training.log")
callbacks = [checkpoint, reduce_lr, csv_logger]
steps_per_epoch = training_set.n // training_set.batch_size
validation_steps = test_set.n // test_set.batch_size
hist = fernet.fit(
x=training_set,
validation_data=test_set,
epochs=60,
callbacks=callbacks,
steps_per_epoch=steps_per_epoch,
validation_steps=validation_steps,
)
fernet.save("fernet.h5")
plt.figure(figsize=(14, 5))
plt.subplot(1, 2, 2)
plt.plot(hist.history["accuracy"])
plt.plot(hist.history["val_accuracy"])
plt.title("Model Accuracy")
plt.xlabel("Epochs")
plt.ylabel("Accuracy")
plt.legend(["train", "test"], loc="upper left")
plt.subplot(1, 2, 1)
plt.plot(hist.history["loss"])
plt.plot(hist.history["val_loss"])
plt.title("model Loss")
plt.xlabel("Epochs")
plt.ylabel("Loss")
plt.legend(["train", "test"], loc="upper left")
plt.show()
def vgg16_model(input_size, classes=7):
# Initialising the CNN
model = tf.keras.models.Sequential()
model.add(
Conv2D(
32,
kernel_size=(3, 3),
padding="same",
activation="relu",
input_shape=input_size,
)
)
model.add(Conv2D(64, kernel_size=(3, 3), activation="relu", padding="same"))
model.add(BatchNormalization())
model.add(MaxPooling2D(2, 2))
model.add(Dropout(0.25))
model.add(
Conv2D(
96,
kernel_size=(3, 3),
padding="same",
activation="relu",
input_shape=input_size,
)
)
model.add(Conv2D(128, kernel_size=(3, 3), activation="relu", padding="same"))
model.add(BatchNormalization())
model.add(
Conv2D(
96,
kernel_size=(3, 3),
padding="same",
activation="relu",
input_shape=input_size,
)
)
model.add(Conv2D(128, kernel_size=(3, 3), activation="relu", padding="same"))
model.add(BatchNormalization())
model.add(MaxPooling2D(2, 2))
model.add(Dropout(0.25))
model.add(
Conv2D(
96,
kernel_size=(3, 3),
padding="same",
activation="relu",
input_shape=input_size,
)
)
model.add(Conv2D(128, kernel_size=(3, 3), activation="relu", padding="same"))
model.add(BatchNormalization())
model.add(MaxPooling2D(2, 2))
model.add(Dropout(0.25))
model.add(
Conv2D(
256,
kernel_size=(3, 3),
activation="relu",
padding="same",
kernel_regularizer=regularizers.l2(0.01),
)
)
model.add(
Conv2D(
512,
kernel_size=(3, 3),
activation="relu",
kernel_regularizer=regularizers.l2(0.01),
)
)
model.add(BatchNormalization())
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.25))
model.add(Flatten())
model.add(Dense(1024, activation="relu"))
model.add(Dropout(0.5))
model.add(Dense(classes, activation="softmax"))
    # Compiling the model
model.compile(
optimizer=Adam(lr=0.0001, decay=1e-6),
loss="categorical_crossentropy",
metrics=["accuracy"],
)
return model
fernet = vgg16_model((row, col, 1), classes)  # use the VGG16-style model defined above
fernet.summary()
plot_model(fernet, to_file="fernet.png", show_shapes=True, show_layer_names=True)
chk_path = "ferNet.h5"
log_dir = "checkpoint/logs/" + datetime.datetime.now().strftime("%Y%m%d-%H%M%S")
checkpoint = ModelCheckpoint(
    filepath=chk_path, save_best_only=True, verbose=1, mode="min", monitor="val_loss"
)
earlystop = EarlyStopping(
monitor="val_loss", min_delta=0, patience=3, verbose=1, restore_best_weights=True
)
reduce_lr = ReduceLROnPlateau(
monitor="val_loss", factor=0.2, patience=6, verbose=1, min_delta=0.0001
)
tensorboard_callback = tf.keras.callbacks.TensorBoard(log_dir=log_dir, histogram_freq=1)
csv_logger = CSVLogger("training.log")
callbacks = [checkpoint, reduce_lr, csv_logger]
steps_per_epoch = training_set.n // training_set.batch_size
validation_steps = test_set.n // test_set.batch_size
hist = fernet.fit(
x=training_set,
validation_data=test_set,
epochs=60,
callbacks=callbacks,
steps_per_epoch=steps_per_epoch,
validation_steps=validation_steps,
)
plt.figure(figsize=(14, 5))
plt.subplot(1, 2, 2)
plt.plot(hist.history["accuracy"])
plt.plot(hist.history["val_accuracy"])
plt.title("Model Accuracy")
plt.xlabel("Epochs")
plt.ylabel("Accuracy")
plt.legend(["train", "test"], loc="upper left")
plt.subplot(1, 2, 1)
plt.plot(hist.history["loss"])
plt.plot(hist.history["val_loss"])
plt.title("model Loss")
plt.xlabel("Epochs")
plt.ylabel("Loss")
plt.legend(["train", "test"], loc="upper left")
plt.show()
def Alex_model(input_size, classes=7):
# Initialising the CNN
model = tf.keras.models.Sequential()
model.add(
Conv2D(
32,
kernel_size=(3, 3),
padding="same",
activation="relu",
input_shape=input_size,
)
)
model.add(Conv2D(64, kernel_size=(3, 3), activation="relu", padding="same"))
model.add(BatchNormalization())
model.add(MaxPooling2D(2, 2))
model.add(Dropout(0.25))
model.add(
Conv2D(
96,
kernel_size=(3, 3),
padding="same",
activation="relu",
input_shape=input_size,
)
)
model.add(BatchNormalization())
model.add(MaxPooling2D(2, 2))
model.add(
Conv2D(
96,
kernel_size=(3, 3),
padding="same",
activation="relu",
input_shape=input_size,
)
)
model.add(Conv2D(128, kernel_size=(3, 3), activation="relu", padding="same"))
model.add(BatchNormalization())
model.add(MaxPooling2D(2, 2))
model.add(Dropout(0.25))
model.add(
Conv2D(
128,
kernel_size=(3, 3),
padding="same",
activation="relu",
input_shape=input_size,
)
)
model.add(BatchNormalization())
model.add(MaxPooling2D(2, 2))
model.add(Dropout(0.25))
model.add(
Conv2D(
256,
kernel_size=(3, 3),
activation="relu",
padding="same",
kernel_regularizer=regularizers.l2(0.01),
)
)
model.add(
Conv2D(
512,
kernel_size=(3, 3),
activation="relu",
kernel_regularizer=regularizers.l2(0.01),
)
)
model.add(BatchNormalization())
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.25))
model.add(Flatten())
model.add(Dense(1024, activation="relu"))
model.add(Dropout(0.5))
model.add(Dense(classes, activation="softmax"))
    # Compiling the model
model.compile(
optimizer=Adam(lr=0.0001, decay=1e-6),
loss="categorical_crossentropy",
metrics=["accuracy"],
)
return model
fernet = Alex_model((row, col, 1), classes)  # use the AlexNet-style model defined above
fernet.summary()
plot_model(fernet, to_file="fernet.png", show_shapes=True, show_layer_names=True)
chk_path = "ferNet.h5"
log_dir = "checkpoint/logs/" + datetime.datetime.now().strftime("%Y%m%d-%H%M%S")
checkpoint = ModelCheckpoint(
    filepath=chk_path, save_best_only=True, verbose=1, mode="min", monitor="val_loss"
)
earlystop = EarlyStopping(
monitor="val_loss", min_delta=0, patience=3, verbose=1, restore_best_weights=True
)
reduce_lr = ReduceLROnPlateau(
monitor="val_loss", factor=0.2, patience=6, verbose=1, min_delta=0.0001
)
tensorboard_callback = tf.keras.callbacks.TensorBoard(log_dir=log_dir, histogram_freq=1)
csv_logger = CSVLogger("training.log")
callbacks = [checkpoint, reduce_lr, csv_logger]
steps_per_epoch = training_set.n // training_set.batch_size
validation_steps = test_set.n // test_set.batch_size
hist = fernet.fit(
x=training_set,
validation_data=test_set,
epochs=60,
callbacks=callbacks,
steps_per_epoch=steps_per_epoch,
validation_steps=validation_steps,
)
plt.figure(figsize=(14, 5))
plt.subplot(1, 2, 2)
plt.plot(hist.history["accuracy"])
plt.plot(hist.history["val_accuracy"])
plt.title("Model Accuracy")
plt.xlabel("Epochs")
plt.ylabel("Accuracy")
plt.legend(["train", "test"], loc="upper left")
plt.subplot(1, 2, 1)
plt.plot(hist.history["loss"])
plt.plot(hist.history["val_loss"])
plt.title("model Loss")
plt.xlabel("Epochs")
plt.ylabel("Loss")
plt.legend(["train", "test"], loc="upper left")
plt.show()
def inception_model(input_size, classes=7):
# Initialising the CNN
model = tf.keras.models.Sequential()
model.add(
Conv2D(
32,
kernel_size=(3, 3),
padding="same",
activation="relu",
input_shape=input_size,
)
)
model.add(Conv2D(64, kernel_size=(3, 3), activation="relu", padding="same"))
model.add(Conv2D(64, kernel_size=(3, 3), activation="relu", padding="same"))
model.add(BatchNormalization())
model.add(MaxPooling2D(2, 2))
model.add(Dropout(0.25))
model.add(
Conv2D(
96,
kernel_size=(3, 3),
padding="same",
activation="relu",
input_shape=input_size,
)
)
model.add(
Conv2D(
96,
kernel_size=(3, 3),
padding="same",
activation="relu",
input_shape=input_size,
)
)
model.add(BatchNormalization())
model.add(MaxPooling2D(2, 2))
model.add(
Conv2D(
96,
kernel_size=(3, 3),
padding="same",
activation="relu",
input_shape=input_size,
)
)
model.add(Conv2D(128, kernel_size=(3, 3), activation="relu", padding="same"))
model.add(Conv2D(128, kernel_size=(3, 3), activation="relu", padding="same"))
model.add(BatchNormalization())
model.add(MaxPooling2D(2, 2))
model.add(Dropout(0.25))
model.add(
Conv2D(
128,
kernel_size=(3, 3),
padding="same",
activation="relu",
input_shape=input_size,
)
)
model.add(BatchNormalization())
model.add(MaxPooling2D(2, 2))
model.add(Dropout(0.25))
model.add(
Conv2D(
256,
kernel_size=(3, 3),
activation="relu",
padding="same",
kernel_regularizer=regularizers.l2(0.01),
)
)
model.add(
Conv2D(
512,
kernel_size=(3, 3),
activation="relu",
kernel_regularizer=regularizers.l2(0.01),
)
)
model.add(BatchNormalization())
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.25))
model.add(Flatten())
model.add(Dense(1024, activation="relu"))
model.add(Dropout(0.5))
model.add(Dense(classes, activation="softmax"))
    # Compiling the model
model.compile(
optimizer=Adam(lr=0.0001, decay=1e-6),
loss="categorical_crossentropy",
metrics=["accuracy"],
)
return model
fernet = inception_model((row, col, 1), classes)  # use the inception-style model defined above
fernet.summary()
plot_model(fernet, to_file="fernet.png", show_shapes=True, show_layer_names=True)
chk_path = "ferNet.h5"
log_dir = "checkpoint/logs/" + datetime.datetime.now().strftime("%Y%m%d-%H%M%S")
checkpoint = ModelCheckpoint(
    filepath=chk_path, save_best_only=True, verbose=1, mode="min", monitor="val_loss"
)
earlystop = EarlyStopping(
monitor="val_loss", min_delta=0, patience=3, verbose=1, restore_best_weights=True
)
reduce_lr = ReduceLROnPlateau(
monitor="val_loss", factor=0.2, patience=6, verbose=1, min_delta=0.0001
)
tensorboard_callback = tf.keras.callbacks.TensorBoard(log_dir=log_dir, histogram_freq=1)
csv_logger = CSVLogger("training.log")
callbacks = [checkpoint, reduce_lr, csv_logger]
steps_per_epoch = training_set.n // training_set.batch_size
validation_steps = test_set.n // test_set.batch_size
hist = fernet.fit(
x=training_set,
validation_data=test_set,
epochs=60,
callbacks=callbacks,
steps_per_epoch=steps_per_epoch,
validation_steps=validation_steps,
)
plt.figure(figsize=(14, 5))
plt.subplot(1, 2, 2)
plt.plot(hist.history["accuracy"])
plt.plot(hist.history["val_accuracy"])
plt.title("Model Accuracy")
plt.xlabel("Epochs")
plt.ylabel("Accuracy")
plt.legend(["train", "test"], loc="upper left")
plt.subplot(1, 2, 1)
plt.plot(hist.history["loss"])
plt.plot(hist.history["val_loss"])
plt.title("model Loss")
plt.xlabel("Epochs")
plt.ylabel("Loss")
plt.legend(["train", "test"], loc="upper left")
plt.show()
| false | 0 | 5,428 | 1 | 5,594 | 5,428 |
||
129448361
|
<jupyter_start><jupyter_text>Students Exam Scores: Extended Dataset
This dataset includes scores from three test scores of students at a (fictional) public school and a variety of personal and socio-economic factors that may have interaction effects upon them.
**Remark/warning/disclaimer:**
- This datasets are **fictional** and should be used for **educational purposes only**.
- The original dataset generator creator is Mr. [Royce Kimmons](http://roycekimmons.com/tools/generated_data/exams)
- There are *similar datasets* on kaggle already but this one is **different** and **arguably better** in two ways.
-> 1) has **more data** (**>30k** instead of just the 1k the other datasets have),
-> 2) has extended datasets with **more features** (15 instead of 9) and has **missing values** which makes it ideal for data cleaning and data preprocessing.
### Data Dictionary (column description)
1. **Gender**: Gender of the student (male/female)
2. **EthnicGroup**: Ethnic group of the student (group A to E)
3. **ParentEduc**: Parent(s) education background (from some_highschool to master's degree)
4. **LunchType**: School lunch type (standard or free/reduced)
5. **TestPrep**: Test preparation course followed (completed or none)
6. **ParentMaritalStatus**: Parent(s) marital status (married/single/widowed/divorced)
7. **PracticeSport**: How often the student practices sport (never/sometimes/regularly)
8. **IsFirstChild**: If the child is first child in the family or not (yes/no)
9. **NrSiblings**: Number of siblings the student has (0 to 7)
10. **TransportMeans**: Means of transport to school (schoolbus/private)
11. **WklyStudyHours**: Weekly self-study hours (less than 5hrs; between 5 and 10hrs; more than 10hrs)
12. **MathScore**: math test score(0-100)
13. **ReadingScore**: reading test score(0-100)
14. **WritingScore**: writing test score(0-100)
### Analytics questions:
1. What factors (features) affect test scores most?
2. Are there interacting features which affect test scores?
Kaggle dataset identifier: students-exam-scores
<jupyter_script>import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
data_file = "/kaggle/input/students-exam-scores/Original_data_with_more_rows.csv"
data = pd.read_csv(data_file)
data.head()
# **let's delete the unnecessary column "Unnamed: 0"**
data = data.drop(["Unnamed: 0"], axis=1)
# **let's rescale the score columns from 0-100 down to 0-10**
data.iloc[:, 5:] = data.iloc[:, 5:] / 10
data
# **Check for missing (null) values**
data.isnull().sum()
# **Mean score by ethnic group**
def group_by(df: pd.DataFrame, col: str, test: str, method: str = None):
"""# Given a DataFrame, let's do a group by col. Test is the column of the differente score and method is the method we want to apply to the dataframe
Args:
df (DataFrame): is the DataFrame we want to group by.
col (str): parameter for the groupby method.
test (str): column of the differente score.
method (str, optional): is the method we want to apply to the dataframe. Defaults to None.
Returns:
DataFrame: the result of the groupby method.
"""
if method == None:
return df.groupby(col)[test]
return df.groupby(col)[test].agg(method)
# **Group E has the highest mean scores of the ethnic groups**
#
test_list = ["ReadingScore", "MathScore", "WritingScore"]
def barplot_group(df: pd.DataFrame, col: str, method: str = None):
"""# Given a DataFrame, let's make a plot of the results of the group by col, applying the method.
Args:
* df (pd.DataFrame): is the DataFrame we want to group by.
* col (str): parameter for the groupby method.
* method (str, optional): _description_. Defaults to None.
Returns:
Plot: a barplot of the data
"""
ethnic_score_mean = {test: group_by(df, col, test, method) for test in test_list}
fig = plt.figure(figsize=(15, 10))
grid = fig.add_gridspec(2, 2)
axes = [
fig.add_subplot(grid[0, 0]),
fig.add_subplot(grid[0, 1]),
fig.add_subplot(grid[1, :]),
]
for i in range(len(ethnic_score_mean)):
element = test_list[i]
sns.barplot(
x=ethnic_score_mean[element].index, y=ethnic_score_mean[element], ax=axes[i]
)
barplot_group(data, "EthnicGroup", "mean")
barplot_group(data, "TestPrep", "mean")
barplot_group(data, "LunchType", "mean")
fig = plt.figure(figsize=(15, 10))
grid = fig.add_gridspec(3, 3)
axes = [
fig.add_subplot(grid[0, :3]),
fig.add_subplot(grid[1, :2]),
fig.add_subplot(grid[2:, :3]),
]
for idx, ax in enumerate(axes):
sns.barplot(y="ParentEduc", x=test_list[idx], data=data, ax=ax)
plt.show()
# **Fit normal distribution to data**
fig = plt.figure(figsize=(15, 10))
axes = fig.subplots(1, 2)
std = data["MathScore"].std()
mean = data["MathScore"].mean()
def gaussian_distribution(x):
return (1 / (std * np.sqrt(2 * np.pi))) * np.exp(-0.5 * ((x - mean) / std) ** 2)
axes[0].hist(data["MathScore"], bins=10, density=True)
axes[1].plot(data["MathScore"], gaussian_distribution(data["MathScore"]), "o")
plt.show()
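# As a quick sanity check on the hand-rolled density above, the same curve can be
# drawn with scipy's normal distribution. This is only a sketch and assumes scipy is
# available in the environment (it is preinstalled on Kaggle); `xs` is just a helper
# grid of x-values introduced here for plotting.
from scipy.stats import norm
xs = np.linspace(data["MathScore"].min(), data["MathScore"].max(), 200)
plt.hist(data["MathScore"], bins=10, density=True, alpha=0.5)
plt.plot(xs, norm.pdf(xs, loc=mean, scale=std), label="scipy norm.pdf")
plt.plot(xs, gaussian_distribution(xs), "--", label="manual formula")
plt.legend()
plt.show()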
|
/fsx/loubna/kaggle_data/kaggle-code-data/data/0129/448/129448361.ipynb
|
students-exam-scores
|
desalegngeb
|
[{"Id": 129448361, "ScriptId": 38489884, "ParentScriptVersionId": NaN, "ScriptLanguageId": 9, "AuthorUserId": 13352455, "CreationDate": "05/13/2023 23:09:14", "VersionNumber": 1.0, "Title": "notebook7b20b404be", "EvaluationDate": "05/13/2023", "IsChange": true, "TotalLines": 98.0, "LinesInsertedFromPrevious": 98.0, "LinesChangedFromPrevious": 0.0, "LinesUnchangedFromPrevious": 0.0, "LinesInsertedFromFork": NaN, "LinesDeletedFromFork": NaN, "LinesChangedFromFork": NaN, "LinesUnchangedFromFork": NaN, "TotalVotes": 1}]
|
[{"Id": 185506624, "KernelVersionId": 129448361, "SourceDatasetVersionId": 5399169}]
|
[{"Id": 5399169, "DatasetId": 3128523, "DatasourceVersionId": 5472937, "CreatorUserId": 5430373, "LicenseName": "Other (specified in description)", "CreationDate": "04/14/2023 00:15:38", "VersionNumber": 2.0, "Title": "Students Exam Scores: Extended Dataset", "Slug": "students-exam-scores", "Subtitle": "Exam scores for students at a public school", "Description": "This dataset includes scores from three test scores of students at a (fictional) public school and a variety of personal and socio-economic factors that may have interaction effects upon them. \n\n**Remark/warning/disclaimer:** \n- This datasets are **fictional** and should be used for **educational purposes only**. \n- The original dataset generator creator is Mr. [Royce Kimmons](http://roycekimmons.com/tools/generated_data/exams)\n- There are *similar datasets* on kaggle already but this one is **different** and **arguably better** in two ways. \n -> 1) has **more data** (**>30k** instead of just the 1k the other datasets have),\n -> 2) has extended datasets with **more features** (15 instead of 9) and has **missing values** which makes it ideal for data cleaning and data preprocessing.\n\n### Data Dictionary (column description)\n\n1. **Gender**: Gender of the student (male/female)\n2. **EthnicGroup**: Ethnic group of the student (group A to E)\n3. **ParentEduc**: Parent(s) education background (from some_highschool to master's degree)\n4. **LunchType**: School lunch type (standard or free/reduced)\n5. **TestPrep**: Test preparation course followed (completed or none)\n6. **ParentMaritalStatus**: Parent(s) marital status (married/single/widowed/divorced)\n7. **PracticeSport**: How often the student parctice sport (never/sometimes/regularly))\n8. **IsFirstChild**: If the child is first child in the family or not (yes/no)\n9. **NrSiblings**: Number of siblings the student has (0 to 7)\n10. **TransportMeans**: Means of transport to school (schoolbus/private)\n11. **WklyStudyHours**: Weekly self-study hours(less that 5hrs; between 5 and 10hrs; more than 10hrs)\n12. **MathScore**: math test score(0-100)\n13. **ReadingScore**: reading test score(0-100)\n14. **WritingScore**: writing test score(0-100)\n\n### Analytics questions:\n\n1. What factors (features) affect test scores most?\n2. Are there interacting features which affect test scores?", "VersionNotes": "Data Update 2023-04-14", "TotalCompressedBytes": 0.0, "TotalUncompressedBytes": 0.0}]
|
[{"Id": 3128523, "CreatorUserId": 5430373, "OwnerUserId": 5430373.0, "OwnerOrganizationId": NaN, "CurrentDatasetVersionId": 5399169.0, "CurrentDatasourceVersionId": 5472937.0, "ForumId": 3192141, "Type": 2, "CreationDate": "04/13/2023 21:52:39", "LastActivityDate": "04/13/2023", "TotalViews": 75452, "TotalDownloads": 15444, "TotalVotes": 282, "TotalKernels": 38}]
|
[{"Id": 5430373, "UserName": "desalegngeb", "DisplayName": "des.", "RegisterDate": "07/07/2020", "PerformanceTier": 3}]
|
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
data_file = "/kaggle/input/students-exam-scores/Original_data_with_more_rows.csv"
data = pd.read_csv(data_file)
data.head()
# **let's delete the unnecessary column "Unnamed: 0"**
data = data.drop(["Unnamed: 0"], axis=1)
# **let's rescale the score columns from 0-100 down to 0-10**
data.iloc[:, 5:] = data.iloc[:, 5:] / 10
data
# **Check for missing (null) values**
data.isnull().sum()
# **Mean score by ethnic group**
def group_by(df: pd.DataFrame, col: str, test: str, method: str = None):
"""# Given a DataFrame, let's do a group by col. Test is the column of the differente score and method is the method we want to apply to the dataframe
Args:
df (DataFrame): is the DataFrame we want to group by.
col (str): parameter for the groupby method.
test (str): column of the differente score.
method (str, optional): is the method we want to apply to the dataframe. Defaults to None.
Returns:
DataFrame: the result of the groupby method.
"""
if method == None:
return df.groupby(col)[test]
return df.groupby(col)[test].agg(method)
# **Group E has the highest mean scores of the ethnic groups**
#
test_list = ["ReadingScore", "MathScore", "WritingScore"]
def barplot_group(df: pd.DataFrame, col: str, method: str = None):
"""# Given a DataFrame, let's make a plot of the results of the group by col, applying the method.
Args:
* df (pd.DataFrame): is the DataFrame we want to group by.
* col (str): parameter for the groupby method.
* method (str, optional): _description_. Defaults to None.
Returns:
Plot: a barplot of the data
"""
ethnic_score_mean = {test: group_by(df, col, test, method) for test in test_list}
fig = plt.figure(figsize=(15, 10))
grid = fig.add_gridspec(2, 2)
axes = [
fig.add_subplot(grid[0, 0]),
fig.add_subplot(grid[0, 1]),
fig.add_subplot(grid[1, :]),
]
for i in range(len(ethnic_score_mean)):
element = test_list[i]
sns.barplot(
x=ethnic_score_mean[element].index, y=ethnic_score_mean[element], ax=axes[i]
)
barplot_group(data, "EthnicGroup", "mean")
barplot_group(data, "TestPrep", "mean")
barplot_group(data, "LunchType", "mean")
fig = plt.figure(figsize=(15, 10))
grid = fig.add_gridspec(3, 3)
axes = [
fig.add_subplot(grid[0, :3]),
fig.add_subplot(grid[1, :2]),
fig.add_subplot(grid[2:, :3]),
]
for idx, ax in enumerate(axes):
sns.barplot(y="ParentEduc", x=test_list[idx], data=data, ax=ax)
plt.show()
# **Fit normal distribution to data**
fig = plt.figure(figsize=(15, 10))
axes = fig.subplots(1, 2)
std = data["MathScore"].std()
mean = data["MathScore"].mean()
def gaussian_distribution(x):
return (1 / (std * np.sqrt(2 * np.pi))) * np.exp(-0.5 * ((x - mean) / std) ** 2)
axes[0].hist(data["MathScore"], bins=10, density=True)
axes[1].plot(data["MathScore"], gaussian_distribution(data["MathScore"]), "o")
plt.show()
| false | 0 | 981 | 1 | 1,573 | 981 |
||
129448786
|
import numpy as np # linear algebra
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)
# Input data files are available in the read-only "../input/" directory
# For example, running this (by clicking run or pressing Shift+Enter) will list all files under the input directory
import os
for dirname, _, filenames in os.walk("/kaggle/input"):
for filename in filenames:
print(os.path.join(dirname, filename))
# You can write up to 20GB to the current directory (/kaggle/working/) that gets preserved as output when you create a version using "Save & Run All"
# You can also write temporary files to /kaggle/temp/, but they won't be saved outside of the current session
import numpy as np
import matplotlib.pyplot as plt
class ChessBoard:
"""
Represents a chess board with red and blue queens on it.
"""
def __init__(self):
"""
Initializes an 8x8 grid with alternating black and white cells.
"""
# Create an 8x8x3 array of zeros, where each cell represents a pixel on the board
self.board = np.zeros((8, 8, 3))
# Loop through each row and column of the board and check if the sum of their coordinates is even
for i in range(8):
for j in range(8):
if (i + j) % 2 == 0:
# If the sum is even, set the color of the cell to white (RGB value of 1, 1, 1)
self.board[i, j] = [1, 1, 1]
self.red_pos = None
self.blue_pos = None
def add_red(self, row, col):
"""
Adds a red queen to the specified row and column.
"""
self.board[row, col] = [1, 0.2, 0]
self.red_pos = (row, col)
def add_blue(self, row, col):
"""
Adds a blue queen to the specified row and column.
"""
self.board[row, col] = [0, 1, 1]
self.blue_pos = (row, col)
def render(self):
"""
Displays the chess board with red and blue queens shown in correct locations.
"""
# Display the board using matplotlib.pyplot.imshow()
plt.imshow(self.board)
def is_under_attack(self):
"""
Determines if the red queen is under attack by a blue piece horizontally, vertically or diagonally.
Returns True if the red queen is under attack, False otherwise.
"""
# Check if the red and blue queens are in the same row or column
if self.red_pos[0] == self.blue_pos[0] or self.red_pos[1] == self.blue_pos[1]:
return True
# Check if the red and blue queens are on the same diagonal
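        # (two squares share a diagonal exactly when their row distance equals their
        # column distance, e.g. (3, 3) and (5, 5) both differ by 2)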
if abs(self.red_pos[0] - self.blue_pos[0]) == abs(
self.red_pos[1] - self.blue_pos[1]
):
return True
        # Neither condition is met, so the red queen is not under attack
return False
def __str__(self):
"""
Returns a string representation of the board.
"""
return str(self.board)
def __repr__(self):
"""
Returns a string representation of the board .
"""
return f"ChessBoard() with red queen at {self.red_pos} and blue queen at {self.blue_pos}"
board = ChessBoard()
board.add_red(3, 3)
board.add_blue(5, 5)
board.render()
print("Is the red queen under attack diagonally?", board.is_under_attack())
board = ChessBoard()
board.add_red(3, 4)
board.add_blue(6, 7)
board.render()
under_attack = board.is_under_attack()
print("Is the red queen under attack diagonally?", board.is_under_attack())
board = ChessBoard()
board.add_red(3, 3)
board.add_blue(5, 5)
board.render()
print("Is the red queen under attack diagonally?", board.is_under_attack())
board = ChessBoard()
board.add_red(4, 2)
board.add_blue(7, 2)
board.render()
is_under_attack = board.is_under_attack()
print("Is the red queen under attack vertically?", is_under_attack)
board = ChessBoard()
board.add_red(4, 2)
board.add_blue(4, 6)
board.render()
is_under_attack = board.is_under_attack()
print("Is the red queen under attack horizontally?", is_under_attack)
board = ChessBoard()
board.add_red(3, 3)
board.add_blue(7, 2)
board.render()
is_under_attack = board.is_under_attack()
print("Is the red queen under attack ?", is_under_attack)
board = ChessBoard()
board.add_red(3, 3)
board.add_blue(7, 4)
board.render()
is_under_attack = board.is_under_attack()
print("Is the red queen under attack ?", is_under_attack)
|
/fsx/loubna/kaggle_data/kaggle-code-data/data/0129/448/129448786.ipynb
| null | null |
[{"Id": 129448786, "ScriptId": 38243198, "ParentScriptVersionId": NaN, "ScriptLanguageId": 9, "AuthorUserId": 14984801, "CreationDate": "05/13/2023 23:19:15", "VersionNumber": 2.0, "Title": "Chess Board", "EvaluationDate": "05/13/2023", "IsChange": true, "TotalLines": 146.0, "LinesInsertedFromPrevious": 4.0, "LinesChangedFromPrevious": 0.0, "LinesUnchangedFromPrevious": 142.0, "LinesInsertedFromFork": NaN, "LinesDeletedFromFork": NaN, "LinesChangedFromFork": NaN, "LinesUnchangedFromFork": NaN, "TotalVotes": 0}]
| null | null | null | null |
| false | 0 | 1,308 | 0 | 1,308 | 1,308 |
||
129986112
|
<jupyter_start><jupyter_text>glove.840B.300d.txt
Kaggle dataset identifier: glove840b300dtxt
<jupyter_script># # Sentiment Analysis On Movie Reviews
# 
# # Dataset Description
# The dataset comprises tab-separated files with phrases from the Rotten Tomatoes dataset. The train/test split has been preserved for the purposes of benchmarking, but the sentences have been shuffled from their original order. Each sentence has been parsed into many phrases by the Stanford parser. Each phrase has a PhraseId. Each sentence has a SentenceId. Phrases that are repeated (such as short/common words) are only included once in the data.
# train.tsv contains the phrases and their associated sentiment labels. We have additionally provided a SentenceId so that you can track which phrases belong to a single sentence.
# test.tsv contains just phrases. You must assign a sentiment label to each phrase.
# The sentiment labels are:
# 0 - negative
# 1 - somewhat negative
# 2 - neutral
# 3 - somewhat positive
# 4 - positive
from nltk.corpus import sentiwordnet as swn
from afinn import Afinn
from nltk.sentiment import SentimentIntensityAnalyzer
from tensorflow.keras.utils import to_categorical
import numpy as np # linear algebra
import pandas as pd
import string
import nltk
from nltk.corpus import stopwords
from sklearn.model_selection import train_test_split
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import SVC
# Download the stopwords corpus
nltk.download("stopwords")
import os
nltk.download("vader-lixcon")
nltk.download("sentiwordnet")
nltk.download("aaveraged_perceptron_tagger")
nltk.download("wordnet")
import warnings
warnings.filterwarnings("ignore")
# **Unzip File**
import zipfile
with zipfile.ZipFile(
"/kaggle/input/sentiment-analysis-on-movie-reviews/test.tsv.zip", "r"
) as f:
f.extractall()
with zipfile.ZipFile(
"/kaggle/input/sentiment-analysis-on-movie-reviews/train.tsv.zip", "r"
) as f:
f.extractall()
# **read data**
train = pd.read_csv("/kaggle/working/train.tsv", sep="\t")
test = pd.read_csv("/kaggle/working/test.tsv", sep="\t")
train
train = train.drop(columns=["PhraseId", "SentenceId"])  # drop ID columns that are not used as features
# **New columns**
# Function to count the number of words
def count_words(text):
words = text.split()
return len(words)
# Function to count the number of punctuation characters
def count_punctuation(text):
count = sum([1 for char in text if char in string.punctuation])
return count
# Function to count the number of stopwords
def count_stopwords(text):
stop_words = set(stopwords.words("english")) # Adjust language as per your data
words = text.split()
count = sum([1 for word in words if word.lower() in stop_words])
return count
# Add new columns
train["text_len"] = train["Phrase"].apply(len)
train["words_count"] = train["Phrase"].apply(count_words)
train["punctuation_count"] = train["Phrase"].apply(count_punctuation)
train["stopwords_count"] = train["Phrase"].apply(count_stopwords)
import seaborn as sns
n1 = train[train["Sentiment"] == 0]["text_len"]
n2 = train[train["Sentiment"] == 1]["text_len"]
n3 = train[train["Sentiment"] == 2]["text_len"]
n4 = train[train["Sentiment"] == 3]["text_len"]
sns.kdeplot(n1, shade=True, color="red").set_title(
"Text Length for different sentments"
)
sns.kdeplot(n2, shade=True, color="blue")
sns.kdeplot(n3, shade=True, color="pink")
sns.kdeplot(n4, shade=True, color="yellow")
n1 = train[train["Sentiment"] == 0]["words_count"]
n2 = train[train["Sentiment"] == 1]["words_count"]
n3 = train[train["Sentiment"] == 2]["words_count"]
n4 = train[train["Sentiment"] == 3]["words_count"]
sns.kdeplot(n1, shade=True, color="red").set_title(
    "Word count for different sentiments"
)
sns.kdeplot(n2, shade=True, color="blue")
sns.kdeplot(n3, shade=True, color="green")  # white was invisible on the default background
sns.kdeplot(n4, shade=True, color="orange")
n1 = train[train["Sentiment"] == 0]["punctuation_count"]
n2 = train[train["Sentiment"] == 1]["punctuation_count"]
n3 = train[train["Sentiment"] == 2]["punctuation_count"]
n4 = train[train["Sentiment"] == 3]["punctuation_count"]
sns.kdeplot(n1, shade=True, color="red").set_title(
"Text Length for different sentiments"
)
sns.kdeplot(n2, shade=True, color="blue")
sns.kdeplot(n3, shade=True, color="green") # Changed color to green
sns.kdeplot(n4, shade=True, color="orange") # Changed color to orange
n1 = train[train["Sentiment"] == 0]["stopwords_count"]
n2 = train[train["Sentiment"] == 1]["stopwords_count"]
n3 = train[train["Sentiment"] == 2]["stopwords_count"]
n4 = train[train["Sentiment"] == 3]["stopwords_count"]
sns.kdeplot(n1, shade=True, color="red").set_title(
    "Stopword count for different sentiments"
)
sns.kdeplot(n2, shade=True, color="blue")
sns.kdeplot(n3, shade=True, color="green")  # white was invisible on the default background
sns.kdeplot(n4, shade=True, color="orange")
# **AFINN:**
# AFINN is a lexicon-based approach for sentiment analysis. It assigns pre-defined sentiment scores to words based on a pre-constructed list of words and their associated sentiment scores. The AFINN library provides a ready-to-use implementation of this lexicon for sentiment analysis in various programming languages, including Python. By using AFINN, you can calculate the sentiment score of a given text based on the sentiment scores of individual words in the text.
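# A minimal illustration of AFINN scoring (a sketch, assuming the `afinn` package is installed;
# the variable name `afinn_demo` is ours, and the exact totals depend on the lexicon version):
from afinn import Afinn

afinn_demo = Afinn()
print(afinn_demo.score("This movie was great"))  # positive total, e.g. 3.0
print(afinn_demo.score("This movie was terrible"))  # negative total, e.g. -3.0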
# **SentimentIntensityAnalyzer**:
# SentimentIntensityAnalyzer is a part of the Natural Language Toolkit (NLTK) library in Python. It is a rule-based sentiment analysis tool that uses a combination of lexical and grammatical heuristics to determine the sentiment polarity of a given text. It provides a sentiment intensity score, which represents the positive, negative, and neutral sentiment intensities in the text.
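# A minimal illustration of VADER (a sketch; the `vader_lexicon` resource must be downloaded, as above):
from nltk.sentiment import SentimentIntensityAnalyzer

sia_demo = SentimentIntensityAnalyzer()
print(sia_demo.polarity_scores("This movie was great"))
# -> a dict with 'neg', 'neu', 'pos' scores and a 'compound' score in [-1, 1]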
# Initialize the lexicon-based models
afinn = Afinn()
sia = SentimentIntensityAnalyzer()
# Define a function to calculate the sentiment using SentiWordNet
def calculate_sentiment_swn(text):
sentiment = 0
tokens = nltk.word_tokenize(text)
for token in tokens:
synsets = list(swn.senti_synsets(token))
if synsets:
# Use the first synset for simplicity
synset = synsets[0]
sentiment += synset.pos_score() - synset.neg_score()
return sentiment
# Map sentiment scores to the 5 classes
def map_sentiment(score):
if score < -0.7:
return 0 # Negative
elif score < 0:
return 1 # Somewhat negative
elif score == 0:
return 2 # Neutral
elif score < 0.5:
return 3 # Somewhat positive
else:
return 4 # Positive
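# Note: these thresholds suit VADER's compound score, which lies in [-1, 1]. AFINN totals are
# unbounded integer sums, so here any negative AFINN total maps to class 0 and any positive
# total to class 4 (classes 1 and 3 are never produced for AFINN).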
def predict_sentiment(data, column):
df = data
# Perform sentiment analysis using AFINN
afinn_scores = df[column].apply(afinn.score)
afinn_sentiments = afinn_scores.map(map_sentiment)
# Perform sentiment analysis using SentiWordNet
# swn_scores = df[column].apply(calculate_sentiment_swn)
# swn_sentiments = swn_scores.map(map_sentiment)
# Perform sentiment analysis using VADER
vader_scores = df[column].apply(sia.polarity_scores)
vader_scores = vader_scores.apply(lambda x: x["compound"])
vader_sentiments = vader_scores.map(map_sentiment)
# Add sentiment columns to the dataframe
df[f"{column}_AFINN_Score"] = afinn_scores
df[f"{column}_AFINN_Sentiment"] = afinn_sentiments
# df[f'{column}_SentiWordNet_Score'] = swn_scores
# df[f'{column}_SentiWordNet_Sentiment'] = swn_sentiments
df[f"{column}_VADER_Score"] = vader_scores
df[f"{column}_VADER_Sentiment"] = vader_sentiments
return df
# Print the dataframe with sentiment analysis results
df = predict_sentiment(train, "Phrase")
df
# **See the score**
from sklearn.metrics import accuracy_score
# Calculate accuracy
accuracy = accuracy_score(df["Sentiment"], df["Phrase_VADER_Sentiment"])
print(f"Accuracy: {accuracy}")
accuracy = accuracy_score(df["Sentiment"], df["Phrase_AFINN_Sentiment"])
print(f"Accuracy: {accuracy}")
# **Text processing**
# from nltk.corpus import word_tokeniz
from nltk.tokenize import word_tokenize
def process_text(text):
# Remove punctuation
text = text.translate(str.maketrans("", "", string.punctuation))
# Tokenize the text
tokens = word_tokenize(text)
# Remove stopwords and conjunctions
stop_words = set(stopwords.words("english"))
conjunctions = set(["and", "or", "but", "nor", "so", "for", "yet"])
tokens = [
token
for token in tokens
if token.lower() not in stop_words and token.lower() not in conjunctions
]
# Convert tokens to lowercase
tokens = [token.lower() for token in tokens]
# Remove numbers
tokens = [token for token in tokens if not token.isdigit()]
# Join the processed tokens back into text
processed_text = " ".join(tokens)
return processed_text
train["Phrase"] = train["Phrase"].apply(lambda x: process_text(x))
test["Phrase"] = test["Phrase"].apply(lambda x: process_text(x))
# **Machine learning for sentiment analysis**
X_train, x_val, y_train, y_val = train_test_split(
df["Phrase"], df["Sentiment"], shuffle=True, test_size=0.01
)
tfidf = TfidfVectorizer()
x_train_tfidif = tfidf.fit_transform(X_train)
x_val_tfidf = tfidf.transform(x_val)
x_test_tfidf = tfidf.transform(test["Phrase"])
from sklearn.naive_bayes import MultinomialNB

# Create a Multinomial Naive Bayes classifier and fit it to the TF-IDF training features
nb_classifier = MultinomialNB()
nb_classifier.fit(x_train_tfidif, y_train)
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score
# Assuming you have your data in X (text) and y (labels)
# Split the data into training and test sets
X_train, X_test, y_train, y_test = train_test_split(
train["Phrase"], train["Sentiment"], test_size=0.2, random_state=42
)
# Create an instance of the TfidfVectorizer
vectorizer = TfidfVectorizer()
# Fit and transform the training data to vectorize the text
X_train_vectorized = vectorizer.fit_transform(X_train)
# Transform the test data using the same vectorizer
X_test_vectorized = vectorizer.transform(X_test)
# Create an instance of the Naive Bayes classifier
nb_classifier = MultinomialNB()
# Train the classifier on the vectorized training data
nb_classifier.fit(X_train_vectorized, y_train)
# Make predictions on the vectorized test data
y_pred = nb_classifier.predict(X_test_vectorized)
# Calculate the accuracy of the classifier
accuracy = accuracy_score(y_test, y_pred)
print("Accuracy:", accuracy)
# **Deep learning using an LSTM**
from keras.models import Sequential
from keras.layers import Embedding, Dense, LSTM
from keras.preprocessing.text import Tokenizer
from tensorflow.keras.preprocessing.sequence import pad_sequences
# Split the data into training and testing sets
X_train, X_test, y_train, y_test = train_test_split(
df["Phrase"], df["Sentiment"], test_size=0.2, random_state=42
)
from keras.preprocessing.text import Tokenizer
# from keras.preprocessing.sequence import pad_sequences
from sklearn.model_selection import train_test_split
# Tokenize the text data
tokenizer = Tokenizer()
tokenizer.fit_on_texts(df["Phrase"])
word_index = tokenizer.word_index
# Convert the text sequences to token sequences
train_sequences = tokenizer.texts_to_sequences(X_train)
test_sequences = tokenizer.texts_to_sequences(X_test)
# Find the maximum sequence length
max_sequence_length = max([len(seq) for seq in train_sequences])
# Pad the sequences with zeros to have a consistent length
padded_train_sequences = pad_sequences(train_sequences, maxlen=max_sequence_length)
padded_test_sequences = pad_sequences(test_sequences, maxlen=max_sequence_length)
# Verify the shape of the padded sequences
print("Padded training sequences shape:", padded_train_sequences.shape)
print("Padded testing sequences shape:", padded_test_sequences.shape)
# **Define the model**
model = Sequential()
model.add(Embedding(len(word_index) + 1, 100, input_length=max_sequence_length))
model.add(LSTM(128, dropout=0.2, recurrent_dropout=0.2))
model.add(Dense(5, activation="softmax"))
model.compile(loss="categorical_crossentropy", optimizer="adam", metrics=["accuracy"])
from keras.utils import to_categorical
# Convert target labels to one-hot encoded vectors
num_classes = 5 # Number of sentiment classes
encoded_train_labels = to_categorical(y_train, num_classes=num_classes)
encoded_test_labels = to_categorical(y_test, num_classes=num_classes)
model.fit(
padded_train_sequences,
encoded_train_labels,
epochs=10,
validation_data=(padded_test_sequences, encoded_test_labels),
)
|
/fsx/loubna/kaggle_data/kaggle-code-data/data/0129/986/129986112.ipynb
|
glove840b300dtxt
|
takuok
|
[{"Id": 129986112, "ScriptId": 38614781, "ParentScriptVersionId": NaN, "ScriptLanguageId": 9, "AuthorUserId": 12594195, "CreationDate": "05/18/2023 00:10:41", "VersionNumber": 2.0, "Title": "Sentimental Analysis On Movie Reviews", "EvaluationDate": "05/18/2023", "IsChange": true, "TotalLines": 354.0, "LinesInsertedFromPrevious": 45.0, "LinesChangedFromPrevious": 0.0, "LinesUnchangedFromPrevious": 309.0, "LinesInsertedFromFork": NaN, "LinesDeletedFromFork": NaN, "LinesChangedFromFork": NaN, "LinesUnchangedFromFork": NaN, "TotalVotes": 0}]
|
[{"Id": 186432815, "KernelVersionId": 129986112, "SourceDatasetVersionId": 11650}]
|
[{"Id": 11650, "DatasetId": 8327, "DatasourceVersionId": 11650, "CreatorUserId": 841938, "LicenseName": "Unknown", "CreationDate": "12/31/2017 06:21:23", "VersionNumber": 1.0, "Title": "glove.840B.300d.txt", "Slug": "glove840b300dtxt", "Subtitle": NaN, "Description": NaN, "VersionNotes": "Initial release", "TotalCompressedBytes": 2232946667.0, "TotalUncompressedBytes": 2232946667.0}]
|
[{"Id": 8327, "CreatorUserId": 841938, "OwnerUserId": 841938.0, "OwnerOrganizationId": NaN, "CurrentDatasetVersionId": 11650.0, "CurrentDatasourceVersionId": 11650.0, "ForumId": 15365, "Type": 2, "CreationDate": "12/31/2017 06:21:23", "LastActivityDate": "02/05/2018", "TotalViews": 49884, "TotalDownloads": 17760, "TotalVotes": 144, "TotalKernels": 421}]
|
[{"Id": 841938, "UserName": "takuok", "DisplayName": "takuoko", "RegisterDate": "12/20/2016", "PerformanceTier": 4}]
|
| false | 0 | 3,740 | 0 | 3,778 | 3,740 |
||
129986035
|
<jupyter_start><jupyter_text>glove.6B.300d.txt
Kaggle dataset identifier: glove6b300dtxt
<jupyter_script>import numpy as np # linear algebra
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)
import re
import pandas as pd
from nltk.corpus import stopwords
import os
import matplotlib.pyplot as plt
import seaborn as sns
color = sns.color_palette()
import nltk
nltk.download("stopwords")
nltk.download("wordnet")
nltk.download("punkt")
from nltk.corpus import stopwords
stop_words = set(stopwords.words("english"))
from nltk.stem import PorterStemmer, WordNetLemmatizer
from tensorflow.keras.preprocessing.text import one_hot, Tokenizer
from tensorflow.keras.preprocessing.sequence import pad_sequences
import warnings
warnings.filterwarnings("ignore")
import os
# * First we import the important packages: pandas, nltk, re, and os.
# * We use pandas to handle our dataset and to read the training and test data.
# * We import stopwords to remove uninformative words (e.g., "is", "are", names) from the dataset, and we use re to keep only word characters.
# * This is explained in more detail where re is used; os is imported for handling paths.
# * We also add some word cloud and bar plot visualizations.
# ## First use pandas pd.read_csv() to read the tabulated file; the basic process is:
# * importing the data
# * cleaning it
# * visualizing it
# * building our stacked deep learning models
# # **DATA CLEANING**
# - drop the index column
#
df = pd.read_csv("/kaggle/input/url-classification-dataset-dmoz/dmoz.csv")
# Assuming you have loaded the dataset into a DataFrame called 'df'
# Sample 7,468 rows for each category to balance the classes
df = (
df.groupby("category")
.apply(lambda x: x.sample(n=7468, random_state=42))
.reset_index(drop=True)
)
df.columns = ["index", "category", "title", "desc"]
df.drop(columns="index", axis=1, inplace=True)
df.head()
df.shape
# **The full dataset has 1,195,851 rows; after sampling 7,468 rows per category (13 categories), about 97,084 rows and 3 columns remain.**
df.info()
# * We have three columns: category (the classification target), title, and description (desc).
# * There are no null values in the data.
# * All columns have the object dtype.
# * The memory usage reported by df.info() is about 27.4 MB for the full dataset (less after sampling).
df.describe()
# * The "category" column seems to have 13 unique values, which suggests that you have a multi-class classification problem with 13 classes.
# * The "title" column appears to have 1,122,645 unique values, which is almost as many as the number of instances in the dataset. This suggests that many of the titles are unique, which might make it challenging to extract meaningful features from them.
# * The "desc" column appears to have 1,133,703 unique values, which is also close to the number of instances in the dataset. Like the "title" column, this suggests that many of the descriptions are unique and might be challenging to process.
# * it is generally a good practice to remove duplicates from your dataset as one of the first steps in your data preprocessing pipeline. This is because duplicate data can introduce bias and inaccuracies into your analysis and modeling efforts.
df.duplicated().sum()
df.drop_duplicates(keep="first", inplace=True)
# * In the full dataset there is a class imbalance: the number of articles per category varies widely, with Business, Society, and Arts having significantly more articles than the other categories. After the per-category sampling above, the counts shown here are equalized.
plt.figure(figsize=(10, 8))
category = df["category"].value_counts()
sns.barplot(x=category, y=category.index, palette="rocket")
plt.title("Category Distribution")
plt.xlabel("Number of Articles")
plt.show()
# * Ways to deal with class imbalance:
# Pre-trained models: many pre-trained language models, such as BERT and GPT, have been trained on large, diverse datasets and can be fine-tuned on your specific classification task; fine-tuning can help the model handle minority classes better. A lighter-weight alternative (class weighting) is sketched below.
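# Sketch of class weighting (illustrative only; the weights are not used later in this notebook,
# and after the per-category sampling above the classes are already balanced, so the weights
# come out close to 1). Assumes scikit-learn is available.
from sklearn.utils.class_weight import compute_class_weight

classes = np.unique(df["category"])
weights = compute_class_weight(class_weight="balanced", classes=classes, y=df["category"])
print(dict(zip(classes, weights)))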
category
df.columns
df["text"] = df["title"] + " " + df["desc"]
del df["title"]
del df["desc"]
# * Remove punctuation and special characters, as well as converting all text to lowercase:
import re
def preprocess_text(text):
# Remove punctuation and special characters
text = re.sub(r"[^\w\s]", "", text)
# Convert all text to lowercase
text = text.lower()
return text
df["text"] = df["text"].apply(lambda a: preprocess_text(a))
# * Remove stop words using the Natural Language Toolkit (NLTK) library in Python:
def remove_stopwords(text):
tokens = nltk.word_tokenize(text)
filtered_tokens = [token for token in tokens if token not in stop_words]
return " ".join(filtered_tokens)
df["text"] = df["text"].apply(lambda a: remove_stopwords(a))
import re
def text_normalizer(text):
# Convert text to lowercase
text = text.lower()
# Remove special characters and symbols
text = re.sub(r"[^a-zA-Z0-9]", " ", text)
# Remove extra whitespace
text = re.sub(r"\s+", " ", text)
# Remove leading and trailing whitespace
text = text.strip()
return text
df["text"] = df["text"].apply(lambda a: text_normalizer(a))
# * Perform stemming using the NLTK library in Python (the WordNetLemmatizer imported above could be used for lemmatization instead):
def stem_text(text):
# Create a PorterStemmer object
stemmer = PorterStemmer()
# Tokenize the text
tokens = nltk.word_tokenize(text)
# Perform stemming on each token
stemmed_tokens = [stemmer.stem(token) for token in tokens]
# Join the stemmed tokens back together into a string
stemmed_text = " ".join(stemmed_tokens)
return stemmed_text
df["text"] = df["text"].apply(lambda a: stem_text(a))
df
# * A helper that takes a list of strings (e.g., a DataFrame column) and removes duplicate words from each string can also be useful; a sketch is given below.
# # Next, visualize the most frequent words with a word cloud
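# A minimal sketch of the duplicate-word helper described above (illustrative only; it is not
# used in the rest of this notebook):
def remove_duplicate_words(strings):
    deduped = []
    for text in strings:
        seen = set()
        kept = []
        for word in text.split():
            if word not in seen:
                seen.add(word)
                kept.append(word)
        deduped.append(" ".join(kept))
    return deduped

# Example: remove_duplicate_words(["the cat the cat sat"]) -> ["the cat sat"]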
from wordcloud import WordCloud, STOPWORDS
stopwords = set(STOPWORDS)
def show_wordcloud(data, title=None):
wordcloud = WordCloud(
background_color="black",
stopwords=stopwords,
max_words=200,
max_font_size=40,
scale=3,
random_state=1, # chosen at random by flipping a coin; it was heads
).generate(str(data))
fig = plt.figure(1, figsize=(15, 15))
plt.axis("off")
if title:
fig.suptitle(title, fontsize=20)
fig.subplots_adjust(top=2.3)
plt.imshow(wordcloud)
plt.show()
show_wordcloud(df["text"])
from keras.preprocessing.text import Tokenizer
from tensorflow.keras.preprocessing.sequence import pad_sequences
from keras.layers import Dense, Input, LSTM, Embedding, Dropout, Activation
from keras.layers import Bidirectional, GlobalMaxPool1D
from keras.models import Model
from keras import initializers, regularizers, constraints, optimizers, layers
# # Tokenize with a vocabulary of up to 6,000 words,
# # then use the Keras preprocessing utilities for tokenization and padding
#
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(
df["text"], df["category"], shuffle=True, test_size=0.2
)
max_feature = 6000
tokenizer = Tokenizer(num_words=max_feature)
tokenizer.fit_on_texts(X_train)
word_index = tokenizer.word_index
# Convert the text sequences to token sequences
train_sequences = tokenizer.texts_to_sequences(X_train)
test_sequences = tokenizer.texts_to_sequences(X_test)
# Find the maximum sequence length
max_sequence_length = max([len(seq) for seq in train_sequences])
# Pad the sequences with zeros to have a consistent length
padded_train_sequences = pad_sequences(train_sequences, maxlen=max_sequence_length)
padded_test_sequences = pad_sequences(test_sequences, maxlen=max_sequence_length)
# Verify the shape of the padded sequences
print("Padded training sequences shape:", padded_train_sequences.shape)
print("Padded testing sequences shape:", padded_test_sequences.shape)
from keras.models import Sequential
from keras.layers import Dense, Embedding, LSTM
model = Sequential()
model.add(Embedding(len(word_index) + 1, 100, input_length=max_sequence_length))
model.add(LSTM(128, dropout=0.2, recurrent_dropout=0.2))
model.add(Dense(13, activation="softmax"))
model.compile(loss="categorical_crossentropy", optimizer="adam", metrics=["accuracy"])
mapping = {}
for i, label in enumerate(set(df["category"])):
mapping[label] = i
df["category"] = df["category"].map(mapping)
set(df["category"])
y_train = y_train.map(mapping)
y_test = y_test.map(mapping)
from keras.utils import to_categorical
# Convert target labels to one-hot encoded vectors
num_classes = 13 # Number of sentiment classes
encoded_train_labels = to_categorical(y_train, num_classes=num_classes)
encoded_test_labels = to_categorical(y_test, num_classes=num_classes)
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import (
Embedding,
LSTM,
Dense,
Bidirectional,
Dropout,
BatchNormalization,
)
from tensorflow.keras.optimizers import Adam
from tensorflow.keras.regularizers import l2
# Increase Model Complexity
model = Sequential()
model.add(Embedding(len(word_index) + 1, 100, input_length=max_sequence_length))
model.add(
Bidirectional(LSTM(128, dropout=0.2, recurrent_dropout=0.2, return_sequences=True))
)
model.add(Bidirectional(LSTM(128, dropout=0.2, recurrent_dropout=0.2)))
# Pre-trained Word Embeddings (Optional)
# model.add(Embedding(vocab_size, embedding_dim, weights=[embedding_matrix], input_length=max_sequence_length, trainable=False))
# Regularization Techniques
model.add(Dense(64, activation="relu", kernel_regularizer=l2(0.001)))
model.add(Dropout(0.2))
model.add(Dense(32, activation="relu", kernel_regularizer=l2(0.001)))
model.add(Dropout(0.2))
model.add(Dense(16, activation="relu", kernel_regularizer=l2(0.001)))
model.add(Dropout(0.2))
model.add(Dense(8, activation="relu", kernel_regularizer=l2(0.001)))
model.add(Dropout(0.2))
model.add(Dense(4, activation="relu", kernel_regularizer=l2(0.001)))
model.add(Dropout(0.2))
model.add(Dense(2, activation="relu", kernel_regularizer=l2(0.001)))
model.add(Dropout(0.2))
model.add(Dense(1, activation="relu", kernel_regularizer=l2(0.001)))
model.add(Dropout(0.2))
model.add(Dense(10, activation="relu", kernel_regularizer=l2(0.001)))
model.add(Dropout(0.2))
model.add(Dense(10, activation="relu", kernel_regularizer=l2(0.001)))
model.add(Dropout(0.2))
model.add(Dense(13, activation="softmax"))
# Batch Normalization
model.add(BatchNormalization())
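# Note: the dense layers above narrow to a single unit before widening again, and the
# BatchNormalization layer is added after the softmax output; both are unusual choices.
# This model is only defined here and is never trained (the fit call below stays commented out).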
# Compile the model
optimizer = Adam(learning_rate=0.001)
model.compile(
loss="categorical_crossentropy", optimizer=optimizer, metrics=["accuracy"]
)
# Print the model summary
model.summary()
# model.fit(padded_train_sequences, encoded_train_labels, epochs=10)
embedding_dim = 300
# Load pre-trained GloVe embeddings
embeddings_index = {}
with open(
"/kaggle/input/glove6b300dtxt/glove.6B.300d.txt", "r", encoding="utf-8"
) as file:
for line in file:
values = line.split()
word = values[0]
embedding = np.asarray(values[1:], dtype="float32")
embeddings_index[word] = embedding
# Create the embedding matrix
embedding_matrix = np.zeros((len(word_index) + 1, embedding_dim))
for word, i in word_index.items():
embedding_vector = embeddings_index.get(word)
if embedding_vector is not None:
embedding_matrix[i] = embedding_vector
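# Words that are not found in the GloVe vocabulary keep an all-zero embedding row.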
from tensorflow.keras.layers import Input
# Define the input layers for query and value (only needed if the commented-out Attention layer below is enabled; otherwise unused)
query_input = Input(shape=(max_sequence_length, embedding_dim))
value_input = Input(shape=(max_sequence_length, embedding_dim))
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import (
Embedding,
Conv1D,
MaxPooling1D,
LSTM,
Dense,
Dropout,
Flatten,
GlobalMaxPooling1D,
Attention,
)
from tensorflow.keras.optimizers import Adam
import numpy as np
# Define the model
model = Sequential()
model.add(
Embedding(
len(word_index) + 1,
embedding_dim,
weights=[embedding_matrix],
input_length=max_sequence_length,
trainable=False,
)
)
model.add(Conv1D(128, 5, activation="relu"))
model.add(MaxPooling1D(pool_size=4))
model.add(LSTM(128, dropout=0.2, recurrent_dropout=0.2, return_sequences=True))
# Add the Attention layer with query and value inputs
# model.add(Attention()([query_input, value_input]))
model.add(Dense(128, activation="relu"))
model.add(Dropout(0.5))
model.add(GlobalMaxPooling1D())
model.add(Dense(num_classes, activation="softmax"))
# Compile the model
optimizer = Adam(learning_rate=0.001)
model.compile(
loss="categorical_crossentropy", optimizer=optimizer, metrics=["accuracy"]
)
# Print the model summary
model.summary()
# Fit the model
history = model.fit(
padded_train_sequences, encoded_train_labels, epochs=10, batch_size=32
)
y_pred = model.predict(padded_test_sequences)
# Assuming you have the true labels of the test data in y_test
# and the predicted labels in y_pred
# Convert the predicted probabilities to class labels
y_pred_classes = np.argmax(y_pred, axis=1)
# Compare the predicted labels with the true labels
correct_predictions = np.equal(y_pred_classes, y_test)
# Calculate the accuracy
accuracy = np.sum(correct_predictions) / len(y_test)
print("Accuracy: ", accuracy)
|
/fsx/loubna/kaggle_data/kaggle-code-data/data/0129/986/129986035.ipynb
|
glove6b300dtxt
|
thanakomsn
|
[{"Id": 129986035, "ScriptId": 38613918, "ParentScriptVersionId": NaN, "ScriptLanguageId": 9, "AuthorUserId": 12594195, "CreationDate": "05/18/2023 00:09:22", "VersionNumber": 1.0, "Title": "Dmoz Classification (STM)", "EvaluationDate": "05/18/2023", "IsChange": true, "TotalLines": 381.0, "LinesInsertedFromPrevious": 381.0, "LinesChangedFromPrevious": 0.0, "LinesUnchangedFromPrevious": 0.0, "LinesInsertedFromFork": NaN, "LinesDeletedFromFork": NaN, "LinesChangedFromFork": NaN, "LinesUnchangedFromFork": NaN, "TotalVotes": 0}]
|
[{"Id": 186432698, "KernelVersionId": 129986035, "SourceDatasetVersionId": 8240}, {"Id": 186432699, "KernelVersionId": 129986035, "SourceDatasetVersionId": 81849}, {"Id": 186432700, "KernelVersionId": 129986035, "SourceDatasetVersionId": 3205803}]
|
[{"Id": 8240, "DatasetId": 5504, "DatasourceVersionId": 8240, "CreatorUserId": 644012, "LicenseName": "Database: Open Database, Contents: Database Contents", "CreationDate": "11/28/2017 07:19:43", "VersionNumber": 1.0, "Title": "glove.6B.300d.txt", "Slug": "glove6b300dtxt", "Subtitle": NaN, "Description": NaN, "VersionNotes": "Initial release", "TotalCompressedBytes": 404848082.0, "TotalUncompressedBytes": 404848082.0}]
|
[{"Id": 5504, "CreatorUserId": 644012, "OwnerUserId": 644012.0, "OwnerOrganizationId": NaN, "CurrentDatasetVersionId": 8240.0, "CurrentDatasourceVersionId": 8240.0, "ForumId": 11680, "Type": 2, "CreationDate": "11/28/2017 07:19:43", "LastActivityDate": "01/30/2018", "TotalViews": 57333, "TotalDownloads": 11262, "TotalVotes": 35, "TotalKernels": 61}]
|
[{"Id": 644012, "UserName": "thanakomsn", "DisplayName": "Thanakom Sangnetra", "RegisterDate": "06/18/2016", "PerformanceTier": 0}]
|
| false | 1 | 3,861 | 0 | 3,896 | 3,861 |
||
129812916
|
<jupyter_start><jupyter_text>University Students Complaints & Reports📝👨🎓
The "Voices Heard" dataset is a comprehensive collection of reports and complaints submitted by students in a university setting. From academic grievances to campus safety concerns, this dataset offers a rich trove of insights into the student experience, providing valuable feedback for university administrators and educators. With its diverse range of feedback, "Voices Heard" offers a unique opportunity to gain a better understanding of the needs and concerns of students, and to develop data-driven solutions to enhance the university experience for all. .
Kaggle dataset identifier: university-students-complaints-and-reports
<jupyter_script>import pandas as pd
df = pd.read_csv(
"/kaggle/input/university-students-complaints-and-reports/Datasetprojpowerbi.csv"
)
df.sample(3)
df.shape
df.info()
df.columns
# # EDA
# * uni
# * bi
# * multi
# # UNI-VARIATE
df["Gender"].value_counts()
df["Gender"].value_counts().plot(kind="pie", shadow=True, autopct="%.0f%%")
df["Year"].value_counts()
df["Year"].value_counts().plot(kind="bar")
df["Genre"].value_counts()
df["Genre"].value_counts().plot(kind="bar")
df["Count"].value_counts()
# # BI-VARIATE
df.columns
df.groupby("Genre").sum()["Year"].sort_values(ascending=False)
import matplotlib.pyplot as plt
import seaborn as sns
plt.figure(figsize=(15, 8))
plt.xticks(rotation=45)
sns.countplot(data=df, x=df["Genre"], hue=df["Year"])
import matplotlib.pyplot as plt
import seaborn as sns
plt.figure(figsize=(15, 8))
plt.xticks(rotation=45)
sns.countplot(data=df, x=df["Genre"], hue=df["Gender"])
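# # MULTI-VARIATE
# A sketch of one possible multi-variate view (column names as above; the choice of pivot and
# aggregation is an assumption): complaint counts by Genre and Year, split by Gender.
pivot = df.pivot_table(index="Genre", columns=["Year", "Gender"], values="Count", aggfunc="count")
plt.figure(figsize=(15, 8))
sns.heatmap(pivot, cmap="viridis")
plt.show()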
|
/fsx/loubna/kaggle_data/kaggle-code-data/data/0129/812/129812916.ipynb
|
university-students-complaints-and-reports
|
omarsobhy14
|
[{"Id": 129812916, "ScriptId": 38606706, "ParentScriptVersionId": NaN, "ScriptLanguageId": 9, "AuthorUserId": 9663382, "CreationDate": "05/16/2023 16:33:19", "VersionNumber": 1.0, "Title": "\ud83d\udcc8 --- EDA", "EvaluationDate": "05/16/2023", "IsChange": true, "TotalLines": 52.0, "LinesInsertedFromPrevious": 52.0, "LinesChangedFromPrevious": 0.0, "LinesUnchangedFromPrevious": 0.0, "LinesInsertedFromFork": NaN, "LinesDeletedFromFork": NaN, "LinesChangedFromFork": NaN, "LinesUnchangedFromFork": NaN, "TotalVotes": 3}]
|
[{"Id": 186187937, "KernelVersionId": 129812916, "SourceDatasetVersionId": 5672268}]
|
[{"Id": 5672268, "DatasetId": 3260867, "DatasourceVersionId": 5747799, "CreatorUserId": 11085604, "LicenseName": "Other (specified in description)", "CreationDate": "05/12/2023 19:46:45", "VersionNumber": 1.0, "Title": "University Students Complaints & Reports\ud83d\udcdd\ud83d\udc68\u200d\ud83c\udf93", "Slug": "university-students-complaints-and-reports", "Subtitle": "Voices Heard: Unleashing Insights from Student Feedback in University", "Description": "The \"Voices Heard\" dataset is a comprehensive collection of reports and complaints submitted by students in a university setting. From academic grievances to campus safety concerns, this dataset offers a rich trove of insights into the student experience, providing valuable feedback for university administrators and educators. With its diverse range of feedback, \"Voices Heard\" offers a unique opportunity to gain a better understanding of the needs and concerns of students, and to develop data-driven solutions to enhance the university experience for all. .", "VersionNotes": "Initial release", "TotalCompressedBytes": 0.0, "TotalUncompressedBytes": 0.0}]
|
[{"Id": 3260867, "CreatorUserId": 11085604, "OwnerUserId": 11085604.0, "OwnerOrganizationId": NaN, "CurrentDatasetVersionId": 6264732.0, "CurrentDatasourceVersionId": 6344561.0, "ForumId": 3326442, "Type": 2, "CreationDate": "05/12/2023 19:46:45", "LastActivityDate": "05/12/2023", "TotalViews": 11588, "TotalDownloads": 1576, "TotalVotes": 43, "TotalKernels": 8}]
|
[{"Id": 11085604, "UserName": "omarsobhy14", "DisplayName": "Omar Sobhy", "RegisterDate": "07/19/2022", "PerformanceTier": 2}]
|
import pandas as pd
df = pd.read_csv(
"/kaggle/input/university-students-complaints-and-reports/Datasetprojpowerbi.csv"
)
df.sample(3)
df.shape
df.info()
df.columns
# # EDA
# * uni
# * bi
# * multi
# # UNI-VARIATE
df["Gender"].value_counts()
df["Gender"].value_counts().plot(kind="pie", shadow=True, autopct="%.0f%%")
df["Year"].value_counts()
df["Year"].value_counts().plot(kind="bar")
df["Genre"].value_counts()
df["Genre"].value_counts().plot(kind="bar")
df["Count"].value_counts()
# # BI-VARIATE
df.columns
df.groupby("Genre").sum()["Year"].sort_values(ascending=False)
import matplotlib.pyplot as plt
import seaborn as sns
plt.figure(figsize=(15, 8))
plt.xticks(rotation=45)
sns.countplot(data=df, x=df["Genre"], hue=df["Year"])
import matplotlib.pyplot as plt
import seaborn as sns
plt.figure(figsize=(15, 8))
plt.xticks(rotation=45)
sns.countplot(data=df, x=df["Genre"], hue=df["Gender"])
| false | 1 | 333 | 3 | 488 | 333 |
||
129812398
|
import numpy as np
import matplotlib.pyplot as plt
np.random.seed(42)  # Generate random but reproducible scatter data points
num_points = 10
x = np.random.rand(num_points)
y = np.random.rand(num_points)
a = np.random.rand()
b = np.random.rand()
f = lambda x: a * x + b # Generate a random function (linear equation)
residuals = y - f(x)
sum_squared_residuals = np.sum(residuals**2) # Calculate the sum of squared residuals
vertical_distances = np.abs(
residuals
) # Calculate the vertical distance between the function and each data point
r_function = (a, b, sum_squared_residuals, vertical_distances)
functions = []
functions.append(
(a, b, sum_squared_residuals, vertical_distances)
)  # Store this first random function; more random functions are appended and combined below
# Visualize the scatter data points
plt.scatter(x, y, color="green", label="Data Points")
plt.title("Data Points")
plt.show()
# Visualize the scatter data points and the function
plt.scatter(x, y, color="green", label="Data Points")
plt.plot(x, f(x), color="red", label="Random Function")
plt.xlabel("X")
plt.ylabel("Y")
plt.title("Loss Function")
plt.legend()
text = f"Sum of Squared Residuals \n (Loss Value): {sum_squared_residuals:.2f}" # Annotate the sum of squared residuals
plt.annotate(
text, xy=(0.05, 0.65), xycoords="axes fraction", fontsize=10, ha="left", va="top"
)
for i, v in enumerate(
y
): # Plot vlines to show the distance between each data point and the function
label = f"{residuals[i]**2:.2f}"
if y[i] < f(x[i]):
plt.vlines(x[i], y[i], y[i] + vertical_distances[i], colors="black")
plt.text(
x[i] + 0.04,
y[i] + vertical_distances[i] / 2,
label,
ha="center",
va="bottom",
fontsize=8,
)
if y[i] > f(x[i]):
plt.vlines(x[i], y[i], y[i] - vertical_distances[i], colors="black")
plt.text(
x[i] + 0.04,
y[i] - vertical_distances[i] / 2,
label,
ha="center",
va="top",
fontsize=8,
)
plt.show()
num_functions = (
9 # Also generate 9 random linear functions and combine with the 1 function above
)
for _ in range(num_functions):
a = np.random.rand() # Random slope
b = np.random.rand() # Random y-intercept
f = lambda x: a * x + b # Linear function
residuals = y - f(x)
sum_squared_residuals = np.sum(residuals**2)
vertical_distances = np.abs(residuals)
functions.append((a, b, sum_squared_residuals, vertical_distances))
# Plot the Random linear functions
colors = [
"red",
"green",
"blue",
"orange",
"purple",
"cyan",
"magenta",
"pink",
"brown",
"gray",
]
plt.figure(figsize=(10, 6))
for i, (a, b, s, v) in enumerate(functions):
color = colors[i % len(colors)]
plt.plot(x, a * x + b, linewidth=1, alpha=0.8, color=color)
plt.scatter(x, y, color="green", label="Data Points")
plt.xlabel("X")
plt.ylabel("Y")
plt.title("Random Linear Functions")
plt.show()
# Plot the Random linear functions and their Loss Value
for i, (a, b, s, v) in enumerate(functions):
color = colors[i % len(colors)]
plt.scatter(i, s, marker="o", color=color)
plt.text(i, s, f"{s:.2f}", ha="center", va="bottom")
plt.xlabel("Index of the Random Linear Function")
plt.ylabel("Loss Function Value")
plt.title("Loss Function Value of Random Linear Functions")
plt.show()
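# The gradient_descent helper below fits y = a*x + b using the mean-squared-error gradients
#   dL/da = (2/n) * sum_i (a*x_i + b - y_i) * x_i
#   dL/db = (2/n) * sum_i (a*x_i + b - y_i)
# and updates a -= learning_rate * dL/da, b -= learning_rate * dL/db at every iteration
# (the loss values it records are the raw sums of squared residuals).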
def gradient_descent(x, y, learning_rate, num_iterations):
# Initialize the parameters
a = 0
b = 0
n = len(x)
# Store the parameter values for plotting
a_values = [a]
b_values = [b]
loss_values = []
for iteration in range(num_iterations): # Perform gradient descent
y_pred = a * x + b # Calculate the predicted values
residuals = y_pred - y
sum_squared_residuals = np.sum(residuals**2)
# Calculate the gradients
gradient_a = (2 / n) * np.sum((y_pred - y) * x)
gradient_b = (2 / n) * np.sum(y_pred - y)
# Update the parameters
a -= learning_rate * gradient_a
b -= learning_rate * gradient_b
# Store the parameter values
a_values.append(a)
b_values.append(b)
loss_values.append(sum_squared_residuals)
return a, b, a_values, b_values, loss_values
learning_rate = 0.1  # Set hyperparameter learning rate
num_iterations = 5
a_optimal, b_optimal, a_values, b_values, loss_values = gradient_descent(
x, y, learning_rate, num_iterations
) # Perform gradient descent
for i, v in enumerate(a_values):
plt.title("Training at step " + str(i))
plt.scatter(x, y, color="green", label="Data Points")
plt.plot(x, a_values[i] * x + b_values[i], linewidth=1, alpha=0.8)
plt.show()
residuals = y - (a_optimal * x + b_optimal)
sum_squared_residuals = np.sum(residuals**2)
plt.title("Loss Value after training " + str(num_iterations) + " times")
plt.scatter(list(range(num_iterations)), loss_values, linewidth=1, alpha=1)
plt.plot(loss_values)
print("Optimal Linear Function: y = " + str(a_optimal) + "*x + " + str(b_optimal))
print("Loss Value of the Optimal Linear Function ", sum_squared_residuals)
|
/fsx/loubna/kaggle_data/kaggle-code-data/data/0129/812/129812398.ipynb
| null | null |
[{"Id": 129812398, "ScriptId": 38584711, "ParentScriptVersionId": NaN, "ScriptLanguageId": 9, "AuthorUserId": 1848770, "CreationDate": "05/16/2023 16:28:45", "VersionNumber": 1.0, "Title": "DPL 302m / 1.4 / Loss Function", "EvaluationDate": "05/16/2023", "IsChange": true, "TotalLines": 119.0, "LinesInsertedFromPrevious": 119.0, "LinesChangedFromPrevious": 0.0, "LinesUnchangedFromPrevious": 0.0, "LinesInsertedFromFork": NaN, "LinesDeletedFromFork": NaN, "LinesChangedFromFork": NaN, "LinesUnchangedFromFork": NaN, "TotalVotes": 0}]
| null | null | null | null |
import numpy as np
import matplotlib.pyplot as plt
np.random.seed(42)  # Generate random but reproducible scatter data points
num_points = 10
x = np.random.rand(num_points)
y = np.random.rand(num_points)
a = np.random.rand()
b = np.random.rand()
f = lambda x: a * x + b # Generate a random function (linear equation)
residuals = y - f(x)
sum_squared_residuals = np.sum(residuals**2) # Calculate the sum of squared residuals
vertical_distances = np.abs(
residuals
) # Calculate the vertical distance between the function and each data point
r_function = (a, b, sum_squared_residuals, vertical_distances)
functions = []
functions.append(
(a, b, sum_squared_residuals, vertical_distances)
)  # Store this first random function; more random functions are appended and combined below
# Visualize the scatter data points
plt.scatter(x, y, color="green", label="Data Points")
plt.title("Data Points")
plt.show()
# Visualize the scatter data points and the function
plt.scatter(x, y, color="green", label="Data Points")
plt.plot(x, f(x), color="red", label="Random Function")
plt.xlabel("X")
plt.ylabel("Y")
plt.title("Loss Function")
plt.legend()
text = f"Sum of Squared Residuals \n (Loss Value): {sum_squared_residuals:.2f}" # Annotate the sum of squared residuals
plt.annotate(
text, xy=(0.05, 0.65), xycoords="axes fraction", fontsize=10, ha="left", va="top"
)
for i, v in enumerate(
y
): # Plot vlines to show the distance between each data point and the function
label = f"{residuals[i]**2:.2f}"
if y[i] < f(x[i]):
plt.vlines(x[i], y[i], y[i] + vertical_distances[i], colors="black")
plt.text(
x[i] + 0.04,
y[i] + vertical_distances[i] / 2,
label,
ha="center",
va="bottom",
fontsize=8,
)
if y[i] > f(x[i]):
plt.vlines(x[i], y[i], y[i] - vertical_distances[i], colors="black")
plt.text(
x[i] + 0.04,
y[i] - vertical_distances[i] / 2,
label,
ha="center",
va="top",
fontsize=8,
)
plt.show()
num_functions = (
9 # Also generate 9 random linear functions and combine with the 1 function above
)
for _ in range(num_functions):
a = np.random.rand() # Random slope
b = np.random.rand() # Random y-intercept
f = lambda x: a * x + b # Linear function
residuals = y - f(x)
sum_squared_residuals = np.sum(residuals**2)
vertical_distances = np.abs(residuals)
functions.append((a, b, sum_squared_residuals, vertical_distances))
# Plot the Random linear functions
colors = [
"red",
"green",
"blue",
"orange",
"purple",
"cyan",
"magenta",
"pink",
"brown",
"gray",
]
plt.figure(figsize=(10, 6))
for i, (a, b, s, v) in enumerate(functions):
color = colors[i % len(colors)]
plt.plot(x, a * x + b, linewidth=1, alpha=0.8, color=color)
plt.scatter(x, y, color="green", label="Data Points")
plt.xlabel("X")
plt.ylabel("Y")
plt.title("Random Linear Functions")
plt.show()
# Plot the Random linear functions and their Loss Value
for i, (a, b, s, v) in enumerate(functions):
color = colors[i % len(colors)]
plt.scatter(i, s, marker="o", color=color)
plt.text(i, s, f"{s:.2f}", ha="center", va="bottom")
plt.xlabel("Index of the Random Linear Function")
plt.ylabel("Loss Function Value")
plt.title("Loss Function Value of Random Linear Functions")
plt.show()
def gradient_descent(x, y, learning_rate, num_iterations):
# Initialize the parameters
a = 0
b = 0
n = len(x)
# Store the parameter values for plotting
a_values = [a]
b_values = [b]
loss_values = []
for iteration in range(num_iterations): # Perform gradient descent
y_pred = a * x + b # Calculate the predicted values
residuals = y_pred - y
sum_squared_residuals = np.sum(residuals**2)
# Calculate the gradients
gradient_a = (2 / n) * np.sum((y_pred - y) * x)
gradient_b = (2 / n) * np.sum(y_pred - y)
# Update the parameters
a -= learning_rate * gradient_a
b -= learning_rate * gradient_b
# Store the parameter values
a_values.append(a)
b_values.append(b)
loss_values.append(sum_squared_residuals)
return a, b, a_values, b_values, loss_values
learning_rate = 0.1  # Set hyperparameter learning rate
num_iterations = 5
a_optimal, b_optimal, a_values, b_values, loss_values = gradient_descent(
x, y, learning_rate, num_iterations
) # Perform gradient descent
for i, v in enumerate(a_values):
plt.title("Training at step " + str(i))
plt.scatter(x, y, color="green", label="Data Points")
plt.plot(x, a_values[i] * x + b_values[i], linewidth=1, alpha=0.8)
plt.show()
residuals = y - (a_optimal * x + b_optimal)
sum_squared_residuals = np.sum(residuals**2)
plt.title("Loss Value after training " + str(num_iterations) + " times")
plt.scatter(list(range(num_iterations)), loss_values, linewidth=1, alpha=1)
plt.plot(loss_values)
print("Optimal Linear Function: y = " + str(a_optimal) + "*x + " + str(b_optimal))
print("Loss Value of the Optimal Linear Function ", sum_squared_residuals)
| false | 0 | 1,645 | 0 | 1,645 | 1,645 |
||
129865528
|
# # Data structures
# Lists - ordered collections of data enclosed in [ ]
# They allow repeated elements and preserve the order in which items were added
l1 = [34, 78, 889, 0, 888, 34, 56, 78, 909, 565, 56]
print(l1)
# it allows mixed datatypes to be stored inside lists
l2 = [67, 898, "name", "abc", "xyz", [5, 6], (56, 90, 67, 0)]
print(l2)
type(l2)
# l2 + 45 raises a TypeError: a list can only be concatenated with another list
list3 = l2 + [45]
print(list3)
l2.append(45)
# append is the list method that adds a single element at the end of the list.
l2
l2.append([67, 89, 0])
l2
l2.extend([90, 89, 90, -18])
# extend is the list method that appends each element of an iterable
# (i.e. multiple values) to the end of the list
l2
l2.append("name")
l2
l1
l2.append(1)
l2
# insert(index, value) places a single element at the given position
l1.insert(0, 6)
l1
l2.extend(["python"])
l2
l2.insert(0, -1)
l2
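# pop removes and returns the element at the given index (the last element if no index is given)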
l2.pop(7)
l2
l2.pop()
l2
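# del removes the element at the given index without returning it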
del l2[9]
l2
|
/fsx/loubna/kaggle_data/kaggle-code-data/data/0129/865/129865528.ipynb
| null | null |
[{"Id": 129865528, "ScriptId": 38625898, "ParentScriptVersionId": NaN, "ScriptLanguageId": 9, "AuthorUserId": 14842923, "CreationDate": "05/17/2023 04:11:57", "VersionNumber": 1.0, "Title": "notebook84caaac4a1", "EvaluationDate": "05/17/2023", "IsChange": true, "TotalLines": 67.0, "LinesInsertedFromPrevious": 67.0, "LinesChangedFromPrevious": 0.0, "LinesUnchangedFromPrevious": 0.0, "LinesInsertedFromFork": NaN, "LinesDeletedFromFork": NaN, "LinesChangedFromFork": NaN, "LinesUnchangedFromFork": NaN, "TotalVotes": 0}]
| null | null | null | null |
# # Data structures
# Lists - ordered collections of data enclosed in [ ]
# They allow repeated elements and preserve the order in which items were added
l1 = [34, 78, 889, 0, 888, 34, 56, 78, 909, 565, 56]
print(l1)
# it allows mixed datatypes to be stored inside lists
l2 = [67, 898, "name", "abc", "xyz", [5, 6], (56, 90, 67, 0)]
print(l2)
type(l2)
# l2 + 45 raises a TypeError: a list can only be concatenated with another list
list3 = l2 + [45]
print(list3)
l2.append(45)
# append is the list method that adds a single element at the end of the list.
l2
l2.append([67, 89, 0])
l2
l2.extend([90, 89, 90, -18])
# extend is the list method that appends each element of an iterable
# (i.e. multiple values) to the end of the list
l2
l2.append("name")
l2
l1
l2.append(1)
l2
# insert(index, value) places a single element at the given position
l1.insert(0, 6)
l1
l2.extend(["python"])
l2
l2.insert(0, -1)
l2
l2.pop(7)
l2
l2.pop()
l2
del l2[9]
l2
| false | 0 | 379 | 0 | 379 | 379 |
||
129175590
|
<jupyter_start><jupyter_text>HR Competency Scores for Screening
##### Context
Recruitment and candidate selection play a critical role in determining the success of an organization. An effective initial screening process can significantly improve the quality of the hiring pool and increase the chances of finding the right candidate for any given role. This dataset focuses on both behavioral and functional competency scores, which are essential aspects of a candidate's potential fit and contribution to the organization.
##### Sources
The data in this dataset has been collected from an anonymous company's internal HR department and published in a normalized form. The dataset combines the scores from two key assessments:
1. Functional competency test: Utilized to evaluate a candidate's hard skills and domain knowledge.
2. HR behavior test: An assessment tool focused on evaluating soft or behavior skills, crucial for teamwork and adaptability within an organization.
##### Young Researchers' Contribution
We were approached by a group of young researchers interested in the explainable AI (XAI) problem. They aimed to analyze HR data to understand why specific candidates were called for interviews while others were not. With their valuable input and help in preprocessing the data, we have made this dataset available for the wider research community.
##### Inspiration
The inspiration behind sharing this dataset was the growing need for insights into the hiring process and the importance of selecting candidates who possess a balance of functional and behavioral competencies. With the added value of XAI research, we hope to encourage researchers and data scientists to analyze the initial screening process, build models to optimize candidate selection, explain their decisions, and uncover new insights that can enhance recruitment strategies.
The dataset can be employed for a wide range of applications, including:
1. Identifying the most significant factors in determining a candidate's eligibility for an interview.
2. Developing machine learning models to predict and explain the likelihood of a candidate being called for an interview.
3. Analyzing the balance between functional competencies and behavioral skills required for a good fit in the organization.
4. Investigating the impact of different skill combinations on the overall competency scores.
We hope this dataset inspires researchers to explore new dimensions of the hiring process and contribute to building better and more transparent recruitment strategies.
Kaggle dataset identifier: hr-competency-scores-for-screening
<jupyter_script>import numpy as np # linear algebra
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)
import matplotlib.pyplot as plt
import seaborn as sns
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC, LinearSVC
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.ensemble import AdaBoostClassifier
from xgboost import XGBClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.metrics import accuracy_score
df = pd.read_csv("/kaggle/input/hr-competency-scores-for-screening/dataset.csv")
df.head()
df.shape
# # Checking for Class Imbalance
palette_color = sns.color_palette("pastel")
plt.pie(
x=df["call_for_interview"].value_counts(),
labels=df["call_for_interview"].value_counts().index,
autopct="%.0f%%",
shadow=True,
colors=palette_color,
)
def plots(df, t):
fig, axes = plt.subplots(1, 2, figsize=(10, 5))
sns.barplot(df, x="call_for_interview", y=t, palette="summer", ax=axes[0])
for container in axes[0].containers:
axes[0].bar_label(container, color="black", size=10, padding=10)
sns.histplot(df, x=t, ax=axes[1], kde=True, color="g")
plt.suptitle(t)
plt.show()
# # Barplots grouped by the call_for_interview feature
for i in df.columns[:-1]:
plots(df, i)
# # Broad overview of numerical data using pairplot
sns.pairplot(df, vars=df.columns[:-1], hue="call_for_interview")
# # Checking for outliers
for i in df.columns[:-1]:
sns.boxplot(df, x=i)
plt.show()
# # Removal of detected outliers
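# Values outside [Q1 - 1.5*IQR, Q3 + 1.5*IQR] (the usual boxplot fences) are clipped to the
# nearest fence rather than dropped, so no rows are lost (a winsorizing-style capping).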
def outliers_removal(df, x):
perc = np.percentile(df[x], [0, 25, 50, 75, 100])
iqr = perc[3] - perc[1]
mn = perc[1] - 1.5 * iqr
mx = perc[3] + 1.5 * iqr
df.loc[df[x] < mn, x] = mn
df.loc[df[x] > mx, x] = mx
return df
df = outliers_removal(df, "functional_competency_score")
sns.boxplot(df, x="functional_competency_score")
plt.show()
# # Correlation and splitting data for training and testing
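# method="kendall" computes Kendall's tau, a rank-based correlation that is less sensitive
# to outliers and non-normal distributions than the default Pearson correlation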
corr = df.corr(method="kendall")
sns.heatmap(corr, annot=True)
x = df.iloc[:, :-1].values
y = df.iloc[:, -1].values
x_train, x_test, y_train, y_test = train_test_split(
x, y, random_state=42, test_size=0.2
)
# # Modelling pipeline
def evaluate(model, name, _round=2):
y_pred = model.predict(x_test)
acc = accuracy_score(y_pred, y_test)
acc *= 100
    acc = round(acc, _round)
print("{}: {}%".format(name, acc))
def training(model, name):
model.fit(x_train, y_train)
evaluate(model, name, 2)
return model
# # Model definitions and hyperparameter tuning
lnr = LogisticRegression()
svc = SVC(C=0.5)
lvc = LinearSVC(C=0.5)
dtc = DecisionTreeClassifier(max_depth=20, criterion="entropy")
rfc = RandomForestClassifier(max_depth=20, n_estimators=100, criterion="entropy")
abc = AdaBoostClassifier(n_estimators=60, learning_rate=0.1)
xgb = XGBClassifier(
n_estimators=1000, max_depth=10, eta=0.1, subsample=0.7, colsample_bytree=0.8
)
knn = KNeighborsClassifier(n_neighbors=10)
gnb = GaussianNB()
models = [lnr, svc, lvc, dtc, rfc, abc, xgb, knn, gnb]
name = [
"Logistic Regression",
"SVC",
"LinearSVC",
"Decision Tree",
"Random Forest",
"Ada Boost",
"XGBClassifier",
"KNN",
"Naive Bayes",
]
# # Model training and performance assessment
trained = []
for i, j in zip(models, name):
trained += [training(i, j)]
print()
|
/fsx/loubna/kaggle_data/kaggle-code-data/data/0129/175/129175590.ipynb
|
hr-competency-scores-for-screening
|
muhammadjawwadismail
|
[{"Id": 129175590, "ScriptId": 38389338, "ParentScriptVersionId": NaN, "ScriptLanguageId": 9, "AuthorUserId": 11036701, "CreationDate": "05/11/2023 14:45:28", "VersionNumber": 1.0, "Title": "HR Competency Classifier", "EvaluationDate": "05/11/2023", "IsChange": true, "TotalLines": 124.0, "LinesInsertedFromPrevious": 124.0, "LinesChangedFromPrevious": 0.0, "LinesUnchangedFromPrevious": 0.0, "LinesInsertedFromFork": NaN, "LinesDeletedFromFork": NaN, "LinesChangedFromFork": NaN, "LinesUnchangedFromFork": NaN, "TotalVotes": 1}]
|
[{"Id": 184991050, "KernelVersionId": 129175590, "SourceDatasetVersionId": 5658852}]
|
[{"Id": 5658852, "DatasetId": 3252282, "DatasourceVersionId": 5734269, "CreatorUserId": 4429155, "LicenseName": "CC BY-SA 4.0", "CreationDate": "05/10/2023 21:38:25", "VersionNumber": 1.0, "Title": "HR Competency Scores for Screening", "Slug": "hr-competency-scores-for-screening", "Subtitle": "Anonymized HR Data for Evaluating Candidate Screening Processes", "Description": "##### Context\n\nRecruitment and candidate selection play a critical role in determining the success of an organization. An effective initial screening process can significantly improve the quality of the hiring pool and increase the chances of finding the right candidate for any given role. This dataset focuses on both behavioral and functional competency scores, which are essential aspects of a candidate's potential fit and contribution to the organization.\n\n##### Sources\n\nThe data in this dataset has been collected from an anonymous company's internal HR department and published in a normalized form. The dataset combines the scores from two key assessments:\n\n1. Functional competency test: Utilized to evaluate a candidate's hard skills and domain knowledge.\n2. HR behavior test: An assessment tool focused on evaluating soft or behavior skills, crucial for teamwork and adaptability within an organization.\n\n##### Young Researchers' Contribution\n\nWe were approached by a group of young researchers interested in the explainable AI (XAI) problem. They aimed to analyze HR data to understand why specific candidates were called for interviews while others were not. With their valuable input and help in preprocessing the data, we have made this dataset available for the wider research community.\n\n##### Inspiration\n\nThe inspiration behind sharing this dataset was the growing need for insights into the hiring process and the importance of selecting candidates who possess a balance of functional and behavioral competencies. With the added value of XAI research, we hope to encourage researchers and data scientists to analyze the initial screening process, build models to optimize candidate selection, explain their decisions, and uncover new insights that can enhance recruitment strategies.\n\nThe dataset can be employed for a wide range of applications, including:\n\n1. Identifying the most significant factors in determining a candidate's eligibility for an interview.\n2. Developing machine learning models to predict and explain the likelihood of a candidate being called for an interview.\n3. Analyzing the balance between functional competencies and behavioral skills required for a good fit in the organization.\n4. Investigating the impact of different skill combinations on the overall competency scores.\n\nWe hope this dataset inspires researchers to explore new dimensions of the hiring process and contribute to building better and more transparent recruitment strategies.", "VersionNotes": "Initial release", "TotalCompressedBytes": 0.0, "TotalUncompressedBytes": 0.0}]
|
[{"Id": 3252282, "CreatorUserId": 4429155, "OwnerUserId": 4429155.0, "OwnerOrganizationId": NaN, "CurrentDatasetVersionId": 5658852.0, "CurrentDatasourceVersionId": 5734269.0, "ForumId": 3317708, "Type": 2, "CreationDate": "05/10/2023 21:38:25", "LastActivityDate": "05/10/2023", "TotalViews": 5601, "TotalDownloads": 897, "TotalVotes": 37, "TotalKernels": 12}]
|
[{"Id": 4429155, "UserName": "muhammadjawwadismail", "DisplayName": "Muhammad Jawad", "RegisterDate": "02/03/2020", "PerformanceTier": 0}]
|
import numpy as np # linear algebra
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)
import matplotlib.pyplot as plt
import seaborn as sns
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC, LinearSVC
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.ensemble import AdaBoostClassifier
from xgboost import XGBClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.metrics import accuracy_score
df = pd.read_csv("/kaggle/input/hr-competency-scores-for-screening/dataset.csv")
df.head()
df.shape
# # Checking for Class Imbalance
palette_color = sns.color_palette("pastel")
plt.pie(
x=df["call_for_interview"].value_counts(),
labels=df["call_for_interview"].value_counts().index,
autopct="%.0f%%",
shadow=True,
colors=palette_color,
)
def plots(df, t):
fig, axes = plt.subplots(1, 2, figsize=(10, 5))
sns.barplot(df, x="call_for_interview", y=t, palette="summer", ax=axes[0])
for container in axes[0].containers:
axes[0].bar_label(container, color="black", size=10, padding=10)
sns.histplot(df, x=t, ax=axes[1], kde=True, color="g")
plt.suptitle(t)
plt.show()
# # Barplots grouped by the call_for_interview feature
for i in df.columns[:-1]:
plots(df, i)
# # Broad overview of numerical data using pairplot
sns.pairplot(df, vars=df.columns[:-1], hue="call_for_interview")
# # Checking for outliers
for i in df.columns[:-1]:
sns.boxplot(df, x=i)
plt.show()
# # Removal of detected outliers
def outliers_removal(df, x):
perc = np.percentile(df[x], [0, 25, 50, 75, 100])
iqr = perc[3] - perc[1]
mn = perc[1] - 1.5 * iqr
mx = perc[3] + 1.5 * iqr
df.loc[df[x] < mn, x] = mn
df.loc[df[x] > mx, x] = mx
return df
df = outliers_removal(df, "functional_competency_score")
sns.boxplot(df, x="functional_competency_score")
plt.show()
# # Correlation and splitting data for training and testing
corr = df.corr(method="kendall")
sns.heatmap(corr, annot=True)
x = df.iloc[:, :-1].values
y = df.iloc[:, -1].values
x_train, x_test, y_train, y_test = train_test_split(
x, y, random_state=42, test_size=0.2
)
# # Modelling pipeline
def evaluate(model, name, _round=2):
y_pred = model.predict(x_test)
acc = accuracy_score(y_pred, y_test)
acc *= 100
    acc = round(acc, _round)
print("{}: {}%".format(name, acc))
def training(model, name):
model.fit(x_train, y_train)
evaluate(model, name, 2)
return model
# # Model definitions and hyperparameter tuning
lnr = LogisticRegression()
svc = SVC(C=0.5)
lvc = LinearSVC(C=0.5)
dtc = DecisionTreeClassifier(max_depth=20, criterion="entropy")
rfc = RandomForestClassifier(max_depth=20, n_estimators=100, criterion="entropy")
abc = AdaBoostClassifier(n_estimators=60, learning_rate=0.1)
xgb = XGBClassifier(
n_estimators=1000, max_depth=10, eta=0.1, subsample=0.7, colsample_bytree=0.8
)
knn = KNeighborsClassifier(n_neighbors=10)
gnb = GaussianNB()
models = [lnr, svc, lvc, dtc, rfc, abc, xgb, knn, gnb]
name = [
"Logistic Regression",
"SVC",
"LinearSVC",
"Decision Tree",
"Random Forest",
"Ada Boost",
"XGBClassifier",
"KNN",
"Naive Bayes",
]
# # Model training and performance assessment
trained = []
for i, j in zip(models, name):
trained += [training(i, j)]
print()
| false | 1 | 1,213 | 1 | 1,741 | 1,213 |
||
129175190
|
import numpy as np # linear algebra
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)
# Input data files are available in the read-only "../input/" directory
# For example, running this (by clicking run or pressing Shift+Enter) will list all files under the input directory
import os
for dirname, _, filenames in os.walk("/kaggle/input"):
for filename in filenames:
print(os.path.join(dirname, filename))
# You can write up to 20GB to the current directory (/kaggle/working/) that gets preserved as output when you create a version using "Save & Run All"
# You can also write temporary files to /kaggle/temp/, but they won't be saved outside of the current session
df = pd.read_csv("/kaggle/input/cardanotvl/tvl.csv")
df.info()
df.tail(11)
import matplotlib.pyplot as plt
plt.style.use("ggplot")
# Assuming your data is stored in a DataFrame called 'df'
# Convert the 'Date' column to a datetime format
df["Date"] = pd.to_datetime(df["Date"])
# Filter the data to only include rows where the 'Date' column is greater than or equal to '2023-01-01'
df = df[df["Date"] >= "2023-01-01"]
# Plot the data
ax = df.plot(
x="Date",
y=[
"minswap",
"indigo",
"wingriders",
"djed-stablecoin",
"liqwid",
"muesliswap",
"sundaeswap",
"optim-finance",
"aada",
"fluidtokens",
],
figsize=(16, 6),
title="Cardano Total Value Locked - DeFiLlama",
xlabel="Date",
ylabel="USD",
)
# Move the legend above the x-axis label
ax.legend(bbox_to_anchor=(0.5, -0.25), loc="upper center", ncol=len(df.columns[1:]))
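# bbox_to_anchor=(0.5, -0.25) anchors the legend at the horizontal center of the axes,
# below the plot area (negative y values are outside the axes)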
# Format the y-axis tick labels to display values in millions
plt.gca().yaxis.set_major_formatter(plt.FuncFormatter(lambda x, loc: f"{int(x/1e6)}M"))
plt.show()
|
/fsx/loubna/kaggle_data/kaggle-code-data/data/0129/175/129175190.ipynb
| null | null |
[{"Id": 129175190, "ScriptId": 38401928, "ParentScriptVersionId": NaN, "ScriptLanguageId": 9, "AuthorUserId": 4732812, "CreationDate": "05/11/2023 14:42:54", "VersionNumber": 1.0, "Title": "Cardano DeFi - TVL", "EvaluationDate": "05/11/2023", "IsChange": true, "TotalLines": 46.0, "LinesInsertedFromPrevious": 46.0, "LinesChangedFromPrevious": 0.0, "LinesUnchangedFromPrevious": 0.0, "LinesInsertedFromFork": NaN, "LinesDeletedFromFork": NaN, "LinesChangedFromFork": NaN, "LinesUnchangedFromFork": NaN, "TotalVotes": 0}]
| null | null | null | null |
import numpy as np # linear algebra
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)
# Input data files are available in the read-only "../input/" directory
# For example, running this (by clicking run or pressing Shift+Enter) will list all files under the input directory
import os
for dirname, _, filenames in os.walk("/kaggle/input"):
for filename in filenames:
print(os.path.join(dirname, filename))
# You can write up to 20GB to the current directory (/kaggle/working/) that gets preserved as output when you create a version using "Save & Run All"
# You can also write temporary files to /kaggle/temp/, but they won't be saved outside of the current session
df = pd.read_csv("/kaggle/input/cardanotvl/tvl.csv")
df.info()
df.tail(11)
import matplotlib.pyplot as plt
plt.style.use("ggplot")
# Assuming your data is stored in a DataFrame called 'df'
# Convert the 'Date' column to a datetime format
df["Date"] = pd.to_datetime(df["Date"])
# Filter the data to only include rows where the 'Date' column is greater than or equal to '2023-01-01'
df = df[df["Date"] >= "2023-01-01"]
# Plot the data
ax = df.plot(
x="Date",
y=[
"minswap",
"indigo",
"wingriders",
"djed-stablecoin",
"liqwid",
"muesliswap",
"sundaeswap",
"optim-finance",
"aada",
"fluidtokens",
],
figsize=(16, 6),
title="Cardano Total Value Locked - DeFiLlama",
xlabel="Date",
ylabel="USD",
)
# Move the legend above the x-axis label
ax.legend(bbox_to_anchor=(0.5, -0.25), loc="upper center", ncol=len(df.columns[1:]))
# Format the y-axis tick labels to display values in millions
plt.gca().yaxis.set_major_formatter(plt.FuncFormatter(lambda x, loc: f"{int(x/1e6)}M"))
plt.show()
| false | 0 | 556 | 0 | 556 | 556 |
||
129175542
|
<jupyter_start><jupyter_text>benetech making graphs accessible csv
Kaggle dataset identifier: benetech-making-graphs-accessible-csv
<jupyter_script># # Introduction
# Hello guys, I hope you had a chance to check out my EasyOCR finetuning notebook. If you haven't, feel free to take a look and then come back to this notebook. In this notebook, we'll be comparing the performance of the EasyOCR public model with our finetuned model. Let's dive in!
# Imports
import os
import pandas as pd
import cv2
import easyocr
# On the EasyOCR GitHub page, they explain how to access our custom pretrained model. To use our finetuned model, we need three files: the model architecture file (a .py file), a .yaml file containing network parameters and other information, and the finetuned model file (a .pth file). We should place the .py and .yaml files in the `.EasyOCR/user_network` folder and the .pth file in the `.EasyOCR/model` folder. It's important to name all three files the same.
# Now, let's take a look at how we can access our finetuned model.
# I couldn't find the user_network folder in .EasyOCR, so I created one with that name.
# We need this folder to store the `.py` file and the `.yaml` file.
# copy-paste all three files (model architecture file, network parameter file, and finetuned model file) to the correct destination folders.
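# A minimal sketch of that copy step, assuming the finetuning run left best_accuracy.py,
# best_accuracy.yaml and best_accuracy.pth in /kaggle/working (hypothetical source paths --
# adjust them to wherever your training actually saved the files).
import os
import shutil

user_network_dir = os.path.expanduser("~/.EasyOCR/user_network")
model_dir = os.path.expanduser("~/.EasyOCR/model")
os.makedirs(user_network_dir, exist_ok=True)
os.makedirs(model_dir, exist_ok=True)
for src, dst in [
    ("/kaggle/working/best_accuracy.py", user_network_dir),
    ("/kaggle/working/best_accuracy.yaml", user_network_dir),
    ("/kaggle/working/best_accuracy.pth", model_dir),
]:
    if os.path.exists(src):  # copy only if the file is actually there
        shutil.copy(src, dst)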
public_model_reader = easyocr.Reader(["en"], gpu=True)
finetuned_model_reader = easyocr.Reader(
["en"], recog_network="best_accuracy", gpu=True
) # In this code snippet, a parameter named `recog_network` is utilized and assigned the value of `best_accuracy`. This parameter is used to read three files that are all named `best_accuracy`, which is why they specified that these three files must have identical names.
# Here we are fine-tuning only the recog_network of the OCR engine. An OCR engine typically consists of two models:
# a detection model and a recognition model. The detection model's job is to identify the bounding boxes of each word,
# while the recognition model is responsible for recognizing the text within those boxes.
# In this particular case, we fine-tuned only the recognition model.
import os
import pandas as pd
import cv2
import json
import matplotlib.pyplot as plt
BASE_DIR = "/kaggle/input/benetech-making-graphs-accessible"
files = os.listdir(f"{BASE_DIR}/train/images")
f = open(f"{BASE_DIR}/train/annotations/0000ae6cbdb1.json")
annotated_data = json.load(f)
# annotated_data['text'][10]
def hwlt2ltrb(coor):
left = coor[2]
top = coor[3]
right = left + coor[1]
bottom = top + coor[0]
return (left, top, right, bottom)
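# get_ltrb returns the axis-aligned bounding box (left, top, right, bottom) that encloses
# an annotated text polygon, together with its text label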
def get_ltrb(source, idx):
l, t, r, b = float("inf"), float("inf"), float("-inf"), float("-inf")
data = source[idx]["polygon"]
x_values = (data["x0"], data["x1"], data["x2"], data["x3"])
y_values = (data["y0"], data["y1"], data["y2"], data["y3"])
l = min(l, min(x_values))
t = min(t, min(y_values))
r = max(r, max(x_values))
b = max(b, max(y_values))
text = source[idx]["text"]
return (l, t, r, b), text
public_model_correct_count = 0
finetuned_model_correct_count = 0
for i, file in enumerate(files):
if i > 50000:
img = cv2.imread(f"{BASE_DIR}/train/images/{file}")
image = img.copy()
h, w, _ = img.shape
f = open(f"{BASE_DIR}/train/annotations/{file.replace('jpg','json')}")
annotated_data = json.load(f)
indicies = list(
map(lambda x: x["id"], annotated_data["axes"]["x-axis"]["ticks"])
)
indicies.extend(
list(map(lambda x: x["id"], annotated_data["axes"]["y-axis"]["ticks"]))
)
# plot_l,plot_t,plot_r,plot_b = hwlt2ltrb((plot_bb["height"],plot_bb["width"],plot_bb["x0"],plot_bb["y0"]))
# try:
for idx in indicies:
try:
coor, text = get_ltrb(annotated_data["text"], idx)
l, t, r, b = coor
canvas = cv2.rectangle(image, (l, t), (r, b), (0, 255, 0), 1)
text_crop = img[t:b, l:r]
img = text_crop
try:
public_model_res = public_model_reader.readtext(img)
if public_model_res[0][1] == text:
public_model_correct_count += 1
except:
pass
try:
finetuned_model_res = finetuned_model_reader.readtext(img)
if finetuned_model_res[0][1] == text:
finetuned_model_correct_count += 1
except:
pass
except:
pass
if i % 1000 == 0:
        print(f"Public model correct predictions: {public_model_correct_count}")
        print(f"Finetuned model correct predictions: {finetuned_model_correct_count}")
print("#" * 100)
|
/fsx/loubna/kaggle_data/kaggle-code-data/data/0129/175/129175542.ipynb
|
benetech-making-graphs-accessible-csv
|
seshurajup
|
[{"Id": 129175542, "ScriptId": 38385404, "ParentScriptVersionId": NaN, "ScriptLanguageId": 9, "AuthorUserId": 6015114, "CreationDate": "05/11/2023 14:45:10", "VersionNumber": 1.0, "Title": "Check The Performance Of our Finetuned OCR Model", "EvaluationDate": "05/11/2023", "IsChange": true, "TotalLines": 121.0, "LinesInsertedFromPrevious": 69.0, "LinesChangedFromPrevious": 0.0, "LinesUnchangedFromPrevious": 52.0, "LinesInsertedFromFork": 69.0, "LinesDeletedFromFork": 776.0, "LinesChangedFromFork": 0.0, "LinesUnchangedFromFork": 52.0, "TotalVotes": 0}]
|
[{"Id": 184990926, "KernelVersionId": 129175542, "SourceDatasetVersionId": 5214849}, {"Id": 184990927, "KernelVersionId": 129175542, "SourceDatasetVersionId": 5663450}]
|
[{"Id": 5214849, "DatasetId": 3033230, "DatasourceVersionId": 5287288, "CreatorUserId": 761268, "LicenseName": "Unknown", "CreationDate": "03/22/2023 14:11:20", "VersionNumber": 2.0, "Title": "benetech making graphs accessible csv", "Slug": "benetech-making-graphs-accessible-csv", "Subtitle": NaN, "Description": NaN, "VersionNotes": "Data Update 2023/03/22", "TotalCompressedBytes": 0.0, "TotalUncompressedBytes": 0.0}]
|
[{"Id": 3033230, "CreatorUserId": 761268, "OwnerUserId": 761268.0, "OwnerOrganizationId": NaN, "CurrentDatasetVersionId": 5214849.0, "CurrentDatasourceVersionId": 5287288.0, "ForumId": 3072643, "Type": 2, "CreationDate": "03/22/2023 12:26:06", "LastActivityDate": "03/22/2023", "TotalViews": 166, "TotalDownloads": 15, "TotalVotes": 2, "TotalKernels": 2}]
|
[{"Id": 761268, "UserName": "seshurajup", "DisplayName": "SeshuRajuP \ud83e\uddd8\u200d\u2642\ufe0f", "RegisterDate": "10/21/2016", "PerformanceTier": 2}]
|
# # Introduction
# Hello guys, I hope you had a chance to check out my EasyOCR finetuning notebook. If you haven't, feel free to take a look and then come back to this notebook. In this notebook, we'll be comparing the performance of the EasyOCR public model with our finetuned model. Let's dive in!
# Imports
import os
import pandas as pd
import cv2
import easyocr
# On the EasyOCR GitHub page, they explain how to access our custom pretrained model. To use our finetuned model, we need three files: the model architecture file (a .py file), a .yaml file containing network parameters and other information, and the finetuned model file (a .pth file). We should place the .py and .yaml files in the `.EasyOCR/user_network` folder and the .pth file in the `.EasyOCR/model` folder. It's important to name all three files the same.
# Now, let's take a look at how we can access our finetuned model.
# I couldn't find the user_network folder in .EasyOCR, so I created one with that name.
# We need this folder to store the `.py` file and the `.yaml` file.
# copy-paste all three files (model architecture file, network parameter file, and finetuned model file) to the correct destination folders.
public_model_reader = easyocr.Reader(["en"], gpu=True)
finetuned_model_reader = easyocr.Reader(
["en"], recog_network="best_accuracy", gpu=True
) # In this code snippet, a parameter named `recog_network` is utilized and assigned the value of `best_accuracy`. This parameter is used to read three files that are all named `best_accuracy`, which is why they specified that these three files must have identical names.
# Here we are fine-tuning only the recog_network of the OCR engine. An OCR engine typically consists of two models:
# a detection model and a recognition model. The detection model's job is to identify the bounding boxes of each word,
# while the recognition model is responsible for recognizing the text within those boxes.
# In this particular case, we fine-tuned only the recognition model.
import os
import pandas as pd
import cv2
import json
import matplotlib.pyplot as plt
BASE_DIR = "/kaggle/input/benetech-making-graphs-accessible"
files = os.listdir(f"{BASE_DIR}/train/images")
f = open(f"{BASE_DIR}/train/annotations/0000ae6cbdb1.json")
annotated_data = json.load(f)
# annotated_data['text'][10]
def hwlt2ltrb(coor):
left = coor[2]
top = coor[3]
right = left + coor[1]
bottom = top + coor[0]
return (left, top, right, bottom)
def get_ltrb(source, idx):
l, t, r, b = float("inf"), float("inf"), float("-inf"), float("-inf")
data = source[idx]["polygon"]
x_values = (data["x0"], data["x1"], data["x2"], data["x3"])
y_values = (data["y0"], data["y1"], data["y2"], data["y3"])
l = min(l, min(x_values))
t = min(t, min(y_values))
r = max(r, max(x_values))
b = max(b, max(y_values))
text = source[idx]["text"]
return (l, t, r, b), text
public_model_correct_count = 0
finetuned_model_correct_count = 0
for i, file in enumerate(files):
if i > 50000:
img = cv2.imread(f"{BASE_DIR}/train/images/{file}")
image = img.copy()
h, w, _ = img.shape
f = open(f"{BASE_DIR}/train/annotations/{file.replace('jpg','json')}")
annotated_data = json.load(f)
indicies = list(
map(lambda x: x["id"], annotated_data["axes"]["x-axis"]["ticks"])
)
indicies.extend(
list(map(lambda x: x["id"], annotated_data["axes"]["y-axis"]["ticks"]))
)
# plot_l,plot_t,plot_r,plot_b = hwlt2ltrb((plot_bb["height"],plot_bb["width"],plot_bb["x0"],plot_bb["y0"]))
# try:
for idx in indicies:
try:
coor, text = get_ltrb(annotated_data["text"], idx)
l, t, r, b = coor
canvas = cv2.rectangle(image, (l, t), (r, b), (0, 255, 0), 1)
text_crop = img[t:b, l:r]
img = text_crop
try:
public_model_res = public_model_reader.readtext(img)
if public_model_res[0][1] == text:
public_model_correct_count += 1
except:
pass
try:
finetuned_model_res = finetuned_model_reader.readtext(img)
if finetuned_model_res[0][1] == text:
finetuned_model_correct_count += 1
except:
pass
except:
pass
if i % 1000 == 0:
        print(f"Public model correct predictions: {public_model_correct_count}")
        print(f"Finetuned model correct predictions: {finetuned_model_correct_count}")
print("#" * 100)
| false | 0 | 1,370 | 0 | 1,401 | 1,370 |
||
129096914
|
import pandas as pd
a = pd.read_csv("/kaggle/input/countries-code/report.csv")
a
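# duplicated(subset=["REGION_ID"]) flags every row whose REGION_ID already appeared earlier as True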
mask = a.duplicated(subset=["REGION_ID"])
mask
a[mask]
a.drop_duplicates(subset=["REGION_ID"], ignore_index=True, inplace=True)
a
|
/fsx/loubna/kaggle_data/kaggle-code-data/data/0129/096/129096914.ipynb
| null | null |
[{"Id": 129096914, "ScriptId": 38377480, "ParentScriptVersionId": NaN, "ScriptLanguageId": 9, "AuthorUserId": 13282236, "CreationDate": "05/11/2023 01:54:50", "VersionNumber": 1.0, "Title": "OWN DATAFRAME", "EvaluationDate": "05/11/2023", "IsChange": true, "TotalLines": 13.0, "LinesInsertedFromPrevious": 13.0, "LinesChangedFromPrevious": 0.0, "LinesUnchangedFromPrevious": 0.0, "LinesInsertedFromFork": NaN, "LinesDeletedFromFork": NaN, "LinesChangedFromFork": NaN, "LinesUnchangedFromFork": NaN, "TotalVotes": 0}]
| null | null | null | null |
import pandas as pd
a = pd.read_csv("/kaggle/input/countries-code/report.csv")
a
mask = a.duplicated(subset=["REGION_ID"])
mask
a[mask]
a.drop_duplicates(subset=["REGION_ID"], ignore_index=True, inplace=True)
a
| false | 0 | 76 | 0 | 76 | 76 |
||
129096312
|
<jupyter_start><jupyter_text>Lung Cancer
### The effectiveness of cancer prediction system helps the people to know their cancer risk with low cost and it also helps the people to take the appropriate decision based on their cancer risk status. The data is collected from the website online lung cancer prediction system .
Total no. of attributes:16
No .of instances:284
Attribute information:
1. Gender: M(male), F(female)
2. Age: Age of the patient
3. Smoking: YES=2 , NO=1.
4. Yellow fingers: YES=2 , NO=1.
5. Anxiety: YES=2 , NO=1.
6. Peer_pressure: YES=2 , NO=1.
7. Chronic Disease: YES=2 , NO=1.
8. Fatigue: YES=2 , NO=1.
9. Allergy: YES=2 , NO=1.
10. Wheezing: YES=2 , NO=1.
11. Alcohol: YES=2 , NO=1.
12. Coughing: YES=2 , NO=1.
13. Shortness of Breath: YES=2 , NO=1.
14. Swallowing Difficulty: YES=2 , NO=1.
15. Chest pain: YES=2 , NO=1.
16. Lung Cancer: YES , NO.
Kaggle dataset identifier: lung-cancer
<jupyter_code>import pandas as pd
df = pd.read_csv('lung-cancer/survey lung cancer.csv')
df.info()
<jupyter_output><class 'pandas.core.frame.DataFrame'>
RangeIndex: 309 entries, 0 to 308
Data columns (total 16 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 GENDER 309 non-null object
1 AGE 309 non-null int64
2 SMOKING 309 non-null int64
3 YELLOW_FINGERS 309 non-null int64
4 ANXIETY 309 non-null int64
5 PEER_PRESSURE 309 non-null int64
6 CHRONIC DISEASE 309 non-null int64
7 FATIGUE 309 non-null int64
8 ALLERGY 309 non-null int64
9 WHEEZING 309 non-null int64
10 ALCOHOL CONSUMING 309 non-null int64
11 COUGHING 309 non-null int64
12 SHORTNESS OF BREATH 309 non-null int64
13 SWALLOWING DIFFICULTY 309 non-null int64
14 CHEST PAIN 309 non-null int64
15 LUNG_CANCER 309 non-null object
dtypes: int64(14), object(2)
memory usage: 38.8+ KB
<jupyter_text>Examples:
{
"GENDER": "M",
"AGE": 69,
"SMOKING": 1,
"YELLOW_FINGERS": 2,
"ANXIETY": 2,
"PEER_PRESSURE": 1,
"CHRONIC DISEASE": 1,
"FATIGUE ": 2,
"ALLERGY ": 1,
"WHEEZING": 2,
"ALCOHOL CONSUMING": 2,
"COUGHING": 2,
"SHORTNESS OF BREATH": 2,
"SWALLOWING DIFFICULTY": 2,
"CHEST PAIN": 2,
"LUNG_CANCER": "YES"
}
{
"GENDER": "M",
"AGE": 74,
"SMOKING": 2,
"YELLOW_FINGERS": 1,
"ANXIETY": 1,
"PEER_PRESSURE": 1,
"CHRONIC DISEASE": 2,
"FATIGUE ": 2,
"ALLERGY ": 2,
"WHEEZING": 1,
"ALCOHOL CONSUMING": 1,
"COUGHING": 1,
"SHORTNESS OF BREATH": 2,
"SWALLOWING DIFFICULTY": 2,
"CHEST PAIN": 2,
"LUNG_CANCER": "YES"
}
{
"GENDER": "F",
"AGE": 59,
"SMOKING": 1,
"YELLOW_FINGERS": 1,
"ANXIETY": 1,
"PEER_PRESSURE": 2,
"CHRONIC DISEASE": 1,
"FATIGUE ": 2,
"ALLERGY ": 1,
"WHEEZING": 2,
"ALCOHOL CONSUMING": 1,
"COUGHING": 2,
"SHORTNESS OF BREATH": 2,
"SWALLOWING DIFFICULTY": 1,
"CHEST PAIN": 2,
"LUNG_CANCER": "NO"
}
{
"GENDER": "M",
"AGE": 63,
"SMOKING": 2,
"YELLOW_FINGERS": 2,
"ANXIETY": 2,
"PEER_PRESSURE": 1,
"CHRONIC DISEASE": 1,
"FATIGUE ": 1,
"ALLERGY ": 1,
"WHEEZING": 1,
"ALCOHOL CONSUMING": 2,
"COUGHING": 1,
"SHORTNESS OF BREATH": 1,
"SWALLOWING DIFFICULTY": 2,
"CHEST PAIN": 2,
"LUNG_CANCER": "NO"
}
<jupyter_script>import pandas as pd
df = pd.read_csv("../input/lung-cancer/survey lung cancer.csv")
df.head()
df.shape
# Some info about our attributes and their datatypes
df.info()
# Some analysis on the numerical columns
df.describe()
# Check for null values
df.isnull().sum()
# Check for duplicates in the dataset
df.duplicated().sum()
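# drop_duplicates keeps only the first occurrence of each duplicated row; inplace=True modifies df directly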
df.drop_duplicates(inplace=True)
df.shape
|
/fsx/loubna/kaggle_data/kaggle-code-data/data/0129/096/129096312.ipynb
|
lung-cancer
|
mysarahmadbhat
|
[{"Id": 129096312, "ScriptId": 38366099, "ParentScriptVersionId": NaN, "ScriptLanguageId": 9, "AuthorUserId": 10692872, "CreationDate": "05/11/2023 01:45:56", "VersionNumber": 2.0, "Title": "Prediction Lung cancer", "EvaluationDate": "05/11/2023", "IsChange": true, "TotalLines": 22.0, "LinesInsertedFromPrevious": 0.0, "LinesChangedFromPrevious": 0.0, "LinesUnchangedFromPrevious": 22.0, "LinesInsertedFromFork": NaN, "LinesDeletedFromFork": NaN, "LinesChangedFromFork": NaN, "LinesUnchangedFromFork": NaN, "TotalVotes": 0}]
|
[{"Id": 184849576, "KernelVersionId": 129096312, "SourceDatasetVersionId": 2668247}]
|
[{"Id": 2668247, "DatasetId": 1623385, "DatasourceVersionId": 2712518, "CreatorUserId": 6990402, "LicenseName": "CC0: Public Domain", "CreationDate": "10/01/2021 13:39:48", "VersionNumber": 1.0, "Title": "Lung Cancer", "Slug": "lung-cancer", "Subtitle": "Does Smoking cause Lung Cancer.", "Description": "### The effectiveness of cancer prediction system helps the people to know their cancer risk with low cost and it also helps the people to take the appropriate decision based on their cancer risk status. The data is collected from the website online lung cancer prediction system .\nTotal no. of attributes:16\nNo .of instances:284\nAttribute information:\n1.\tGender: M(male), F(female)\n2.\tAge: Age of the patient\n3.\tSmoking: YES=2 , NO=1.\n4.\tYellow fingers: YES=2 , NO=1.\n5.\tAnxiety: YES=2 , NO=1.\n6.\tPeer_pressure: YES=2 , NO=1.\n7.\tChronic Disease: YES=2 , NO=1.\n8.\tFatigue: YES=2 , NO=1.\n9.\tAllergy: YES=2 , NO=1.\n10.\tWheezing: YES=2 , NO=1.\n11.\tAlcohol: YES=2 , NO=1.\n12.\tCoughing: YES=2 , NO=1.\n13.\tShortness of Breath: YES=2 , NO=1.\n14.\tSwallowing Difficulty: YES=2 , NO=1.\n15.\tChest pain: YES=2 , NO=1.\n16.\tLung Cancer: YES , NO.", "VersionNotes": "Initial release", "TotalCompressedBytes": 0.0, "TotalUncompressedBytes": 0.0}]
|
[{"Id": 1623385, "CreatorUserId": 6990402, "OwnerUserId": 6990402.0, "OwnerOrganizationId": NaN, "CurrentDatasetVersionId": 2668247.0, "CurrentDatasourceVersionId": 2712518.0, "ForumId": 1643979, "Type": 2, "CreationDate": "10/01/2021 13:39:48", "LastActivityDate": "10/01/2021", "TotalViews": 153177, "TotalDownloads": 23248, "TotalVotes": 272, "TotalKernels": 70}]
|
[{"Id": 6990402, "UserName": "mysarahmadbhat", "DisplayName": "mysar ahmad bhat", "RegisterDate": "03/21/2021", "PerformanceTier": 3}]
|
import pandas as pd
df = pd.read_csv("../input/lung-cancer/survey lung cancer.csv")
df.head()
df.shape
# Some info about our attributes and their datatypes
df.info()
# Some analysis on the numerical columns
df.describe()
# Check for null values
df.isnull().sum()
# Check for duplicates in the dataset
df.duplicated().sum()
df.drop_duplicates(inplace=True)
df.shape
|
[{"lung-cancer/survey lung cancer.csv": {"column_names": "[\"GENDER\", \"AGE\", \"SMOKING\", \"YELLOW_FINGERS\", \"ANXIETY\", \"PEER_PRESSURE\", \"CHRONIC DISEASE\", \"FATIGUE \", \"ALLERGY \", \"WHEEZING\", \"ALCOHOL CONSUMING\", \"COUGHING\", \"SHORTNESS OF BREATH\", \"SWALLOWING DIFFICULTY\", \"CHEST PAIN\", \"LUNG_CANCER\"]", "column_data_types": "{\"GENDER\": \"object\", \"AGE\": \"int64\", \"SMOKING\": \"int64\", \"YELLOW_FINGERS\": \"int64\", \"ANXIETY\": \"int64\", \"PEER_PRESSURE\": \"int64\", \"CHRONIC DISEASE\": \"int64\", \"FATIGUE \": \"int64\", \"ALLERGY \": \"int64\", \"WHEEZING\": \"int64\", \"ALCOHOL CONSUMING\": \"int64\", \"COUGHING\": \"int64\", \"SHORTNESS OF BREATH\": \"int64\", \"SWALLOWING DIFFICULTY\": \"int64\", \"CHEST PAIN\": \"int64\", \"LUNG_CANCER\": \"object\"}", "info": "<class 'pandas.core.frame.DataFrame'>\nRangeIndex: 309 entries, 0 to 308\nData columns (total 16 columns):\n # Column Non-Null Count Dtype \n--- ------ -------------- ----- \n 0 GENDER 309 non-null object\n 1 AGE 309 non-null int64 \n 2 SMOKING 309 non-null int64 \n 3 YELLOW_FINGERS 309 non-null int64 \n 4 ANXIETY 309 non-null int64 \n 5 PEER_PRESSURE 309 non-null int64 \n 6 CHRONIC DISEASE 309 non-null int64 \n 7 FATIGUE 309 non-null int64 \n 8 ALLERGY 309 non-null int64 \n 9 WHEEZING 309 non-null int64 \n 10 ALCOHOL CONSUMING 309 non-null int64 \n 11 COUGHING 309 non-null int64 \n 12 SHORTNESS OF BREATH 309 non-null int64 \n 13 SWALLOWING DIFFICULTY 309 non-null int64 \n 14 CHEST PAIN 309 non-null int64 \n 15 LUNG_CANCER 309 non-null object\ndtypes: int64(14), object(2)\nmemory usage: 38.8+ KB\n", "summary": "{\"AGE\": {\"count\": 309.0, \"mean\": 62.67313915857605, \"std\": 8.210301387885995, \"min\": 21.0, \"25%\": 57.0, \"50%\": 62.0, \"75%\": 69.0, \"max\": 87.0}, \"SMOKING\": {\"count\": 309.0, \"mean\": 1.5631067961165048, \"std\": 0.4968060894409518, \"min\": 1.0, \"25%\": 1.0, \"50%\": 2.0, \"75%\": 2.0, \"max\": 2.0}, \"YELLOW_FINGERS\": {\"count\": 309.0, \"mean\": 1.5695792880258899, \"std\": 0.49593819429101677, \"min\": 1.0, \"25%\": 1.0, \"50%\": 2.0, \"75%\": 2.0, \"max\": 2.0}, \"ANXIETY\": {\"count\": 309.0, \"mean\": 1.4983818770226538, \"std\": 0.5008084079652348, \"min\": 1.0, \"25%\": 1.0, \"50%\": 1.0, \"75%\": 2.0, \"max\": 2.0}, \"PEER_PRESSURE\": {\"count\": 309.0, \"mean\": 1.5016181229773462, \"std\": 0.5008084079652348, \"min\": 1.0, \"25%\": 1.0, \"50%\": 2.0, \"75%\": 2.0, \"max\": 2.0}, \"CHRONIC DISEASE\": {\"count\": 309.0, \"mean\": 1.5048543689320388, \"std\": 0.5007874268634864, \"min\": 1.0, \"25%\": 1.0, \"50%\": 2.0, \"75%\": 2.0, \"max\": 2.0}, \"FATIGUE \": {\"count\": 309.0, \"mean\": 1.6731391585760518, \"std\": 0.46982676766120723, \"min\": 1.0, \"25%\": 1.0, \"50%\": 2.0, \"75%\": 2.0, \"max\": 2.0}, \"ALLERGY \": {\"count\": 309.0, \"mean\": 1.5566343042071198, \"std\": 0.49758801243408385, \"min\": 1.0, \"25%\": 1.0, \"50%\": 2.0, \"75%\": 2.0, \"max\": 2.0}, \"WHEEZING\": {\"count\": 309.0, \"mean\": 1.5566343042071198, \"std\": 0.49758801243408385, \"min\": 1.0, \"25%\": 1.0, \"50%\": 2.0, \"75%\": 2.0, \"max\": 2.0}, \"ALCOHOL CONSUMING\": {\"count\": 309.0, \"mean\": 1.5566343042071198, \"std\": 0.4975880124340838, \"min\": 1.0, \"25%\": 1.0, \"50%\": 2.0, \"75%\": 2.0, \"max\": 2.0}, \"COUGHING\": {\"count\": 309.0, \"mean\": 1.5792880258899675, \"std\": 0.49447415124782723, \"min\": 1.0, \"25%\": 1.0, \"50%\": 2.0, \"75%\": 2.0, \"max\": 2.0}, \"SHORTNESS OF BREATH\": {\"count\": 309.0, \"mean\": 1.6407766990291262, \"std\": 
0.48055100136181955, \"min\": 1.0, \"25%\": 1.0, \"50%\": 2.0, \"75%\": 2.0, \"max\": 2.0}, \"SWALLOWING DIFFICULTY\": {\"count\": 309.0, \"mean\": 1.4692556634304208, \"std\": 0.49986338653997353, \"min\": 1.0, \"25%\": 1.0, \"50%\": 1.0, \"75%\": 2.0, \"max\": 2.0}, \"CHEST PAIN\": {\"count\": 309.0, \"mean\": 1.5566343042071198, \"std\": 0.4975880124340838, \"min\": 1.0, \"25%\": 1.0, \"50%\": 2.0, \"75%\": 2.0, \"max\": 2.0}}", "examples": "{\"GENDER\":{\"0\":\"M\",\"1\":\"M\",\"2\":\"F\",\"3\":\"M\"},\"AGE\":{\"0\":69,\"1\":74,\"2\":59,\"3\":63},\"SMOKING\":{\"0\":1,\"1\":2,\"2\":1,\"3\":2},\"YELLOW_FINGERS\":{\"0\":2,\"1\":1,\"2\":1,\"3\":2},\"ANXIETY\":{\"0\":2,\"1\":1,\"2\":1,\"3\":2},\"PEER_PRESSURE\":{\"0\":1,\"1\":1,\"2\":2,\"3\":1},\"CHRONIC DISEASE\":{\"0\":1,\"1\":2,\"2\":1,\"3\":1},\"FATIGUE \":{\"0\":2,\"1\":2,\"2\":2,\"3\":1},\"ALLERGY \":{\"0\":1,\"1\":2,\"2\":1,\"3\":1},\"WHEEZING\":{\"0\":2,\"1\":1,\"2\":2,\"3\":1},\"ALCOHOL CONSUMING\":{\"0\":2,\"1\":1,\"2\":1,\"3\":2},\"COUGHING\":{\"0\":2,\"1\":1,\"2\":2,\"3\":1},\"SHORTNESS OF BREATH\":{\"0\":2,\"1\":2,\"2\":2,\"3\":1},\"SWALLOWING DIFFICULTY\":{\"0\":2,\"1\":2,\"2\":1,\"3\":2},\"CHEST PAIN\":{\"0\":2,\"1\":2,\"2\":2,\"3\":2},\"LUNG_CANCER\":{\"0\":\"YES\",\"1\":\"YES\",\"2\":\"NO\",\"3\":\"NO\"}}"}}]
| true | 1 |
<start_data_description><data_path>lung-cancer/survey lung cancer.csv:
<column_names>
['GENDER', 'AGE', 'SMOKING', 'YELLOW_FINGERS', 'ANXIETY', 'PEER_PRESSURE', 'CHRONIC DISEASE', 'FATIGUE ', 'ALLERGY ', 'WHEEZING', 'ALCOHOL CONSUMING', 'COUGHING', 'SHORTNESS OF BREATH', 'SWALLOWING DIFFICULTY', 'CHEST PAIN', 'LUNG_CANCER']
<column_types>
{'GENDER': 'object', 'AGE': 'int64', 'SMOKING': 'int64', 'YELLOW_FINGERS': 'int64', 'ANXIETY': 'int64', 'PEER_PRESSURE': 'int64', 'CHRONIC DISEASE': 'int64', 'FATIGUE ': 'int64', 'ALLERGY ': 'int64', 'WHEEZING': 'int64', 'ALCOHOL CONSUMING': 'int64', 'COUGHING': 'int64', 'SHORTNESS OF BREATH': 'int64', 'SWALLOWING DIFFICULTY': 'int64', 'CHEST PAIN': 'int64', 'LUNG_CANCER': 'object'}
<dataframe_Summary>
{'AGE': {'count': 309.0, 'mean': 62.67313915857605, 'std': 8.210301387885995, 'min': 21.0, '25%': 57.0, '50%': 62.0, '75%': 69.0, 'max': 87.0}, 'SMOKING': {'count': 309.0, 'mean': 1.5631067961165048, 'std': 0.4968060894409518, 'min': 1.0, '25%': 1.0, '50%': 2.0, '75%': 2.0, 'max': 2.0}, 'YELLOW_FINGERS': {'count': 309.0, 'mean': 1.5695792880258899, 'std': 0.49593819429101677, 'min': 1.0, '25%': 1.0, '50%': 2.0, '75%': 2.0, 'max': 2.0}, 'ANXIETY': {'count': 309.0, 'mean': 1.4983818770226538, 'std': 0.5008084079652348, 'min': 1.0, '25%': 1.0, '50%': 1.0, '75%': 2.0, 'max': 2.0}, 'PEER_PRESSURE': {'count': 309.0, 'mean': 1.5016181229773462, 'std': 0.5008084079652348, 'min': 1.0, '25%': 1.0, '50%': 2.0, '75%': 2.0, 'max': 2.0}, 'CHRONIC DISEASE': {'count': 309.0, 'mean': 1.5048543689320388, 'std': 0.5007874268634864, 'min': 1.0, '25%': 1.0, '50%': 2.0, '75%': 2.0, 'max': 2.0}, 'FATIGUE ': {'count': 309.0, 'mean': 1.6731391585760518, 'std': 0.46982676766120723, 'min': 1.0, '25%': 1.0, '50%': 2.0, '75%': 2.0, 'max': 2.0}, 'ALLERGY ': {'count': 309.0, 'mean': 1.5566343042071198, 'std': 0.49758801243408385, 'min': 1.0, '25%': 1.0, '50%': 2.0, '75%': 2.0, 'max': 2.0}, 'WHEEZING': {'count': 309.0, 'mean': 1.5566343042071198, 'std': 0.49758801243408385, 'min': 1.0, '25%': 1.0, '50%': 2.0, '75%': 2.0, 'max': 2.0}, 'ALCOHOL CONSUMING': {'count': 309.0, 'mean': 1.5566343042071198, 'std': 0.4975880124340838, 'min': 1.0, '25%': 1.0, '50%': 2.0, '75%': 2.0, 'max': 2.0}, 'COUGHING': {'count': 309.0, 'mean': 1.5792880258899675, 'std': 0.49447415124782723, 'min': 1.0, '25%': 1.0, '50%': 2.0, '75%': 2.0, 'max': 2.0}, 'SHORTNESS OF BREATH': {'count': 309.0, 'mean': 1.6407766990291262, 'std': 0.48055100136181955, 'min': 1.0, '25%': 1.0, '50%': 2.0, '75%': 2.0, 'max': 2.0}, 'SWALLOWING DIFFICULTY': {'count': 309.0, 'mean': 1.4692556634304208, 'std': 0.49986338653997353, 'min': 1.0, '25%': 1.0, '50%': 1.0, '75%': 2.0, 'max': 2.0}, 'CHEST PAIN': {'count': 309.0, 'mean': 1.5566343042071198, 'std': 0.4975880124340838, 'min': 1.0, '25%': 1.0, '50%': 2.0, '75%': 2.0, 'max': 2.0}}
<dataframe_info>
RangeIndex: 309 entries, 0 to 308
Data columns (total 16 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 GENDER 309 non-null object
1 AGE 309 non-null int64
2 SMOKING 309 non-null int64
3 YELLOW_FINGERS 309 non-null int64
4 ANXIETY 309 non-null int64
5 PEER_PRESSURE 309 non-null int64
6 CHRONIC DISEASE 309 non-null int64
7 FATIGUE 309 non-null int64
8 ALLERGY 309 non-null int64
9 WHEEZING 309 non-null int64
10 ALCOHOL CONSUMING 309 non-null int64
11 COUGHING 309 non-null int64
12 SHORTNESS OF BREATH 309 non-null int64
13 SWALLOWING DIFFICULTY 309 non-null int64
14 CHEST PAIN 309 non-null int64
15 LUNG_CANCER 309 non-null object
dtypes: int64(14), object(2)
memory usage: 38.8+ KB
<some_examples>
{'GENDER': {'0': 'M', '1': 'M', '2': 'F', '3': 'M'}, 'AGE': {'0': 69, '1': 74, '2': 59, '3': 63}, 'SMOKING': {'0': 1, '1': 2, '2': 1, '3': 2}, 'YELLOW_FINGERS': {'0': 2, '1': 1, '2': 1, '3': 2}, 'ANXIETY': {'0': 2, '1': 1, '2': 1, '3': 2}, 'PEER_PRESSURE': {'0': 1, '1': 1, '2': 2, '3': 1}, 'CHRONIC DISEASE': {'0': 1, '1': 2, '2': 1, '3': 1}, 'FATIGUE ': {'0': 2, '1': 2, '2': 2, '3': 1}, 'ALLERGY ': {'0': 1, '1': 2, '2': 1, '3': 1}, 'WHEEZING': {'0': 2, '1': 1, '2': 2, '3': 1}, 'ALCOHOL CONSUMING': {'0': 2, '1': 1, '2': 1, '3': 2}, 'COUGHING': {'0': 2, '1': 1, '2': 2, '3': 1}, 'SHORTNESS OF BREATH': {'0': 2, '1': 2, '2': 2, '3': 1}, 'SWALLOWING DIFFICULTY': {'0': 2, '1': 2, '2': 1, '3': 2}, 'CHEST PAIN': {'0': 2, '1': 2, '2': 2, '3': 2}, 'LUNG_CANCER': {'0': 'YES', '1': 'YES', '2': 'NO', '3': 'NO'}}
<end_description>
| 112 | 0 | 1,566 | 112 |
129102433
|
import numpy as np # linear algebra
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)
# Input data files are available in the read-only "../input/" directory
# For example, running this (by clicking run or pressing Shift+Enter) will list all files under the input directory
import os
for dirname, _, filenames in os.walk("/kaggle/input"):
for filename in filenames:
print(os.path.join(dirname, filename))
# You can write up to 20GB to the current directory (/kaggle/working/) that gets preserved as output when you create a version using "Save & Run All"
# You can also write temporary files to /kaggle/temp/, but they won't be saved outside of the current session
import os
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
import zipfile
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.preprocessing import LabelEncoder
from sklearn.linear_model import LinearRegression
from sklearn.linear_model import Lasso, Ridge, ElasticNet
from sklearn.metrics import mean_absolute_error
from sklearn.metrics import mean_squared_error
from sklearn.metrics import r2_score
from sklearn.metrics import mean_squared_log_error
from sklearn.ensemble import RandomForestRegressor
from sklearn.datasets import make_regression
t = zipfile.ZipFile("/kaggle/input/sberbank-russian-housing-market/train.csv.zip")
t.extractall()
train = pd.read_csv("/kaggle/working/train.csv")
train.head()
num_cols = train.select_dtypes(include=["number"]).columns
cat_cols = train.select_dtypes(include=["object"]).columns
train[num_cols].describe()
train[cat_cols].describe()
# check for null values in the numeric columns
for x in num_cols:
if train[x].isna().mean() > 0:
print(x, " \t \t", train[x].isna().mean() * 100)
# check for null values in the categorical columns
print("\n\n")
for x in cat_cols:
if train[x].isna().mean() > 0:
print(x, " \t \t", train[x].isna().mean() * 100)
# drop numeric columns with more than 15% null values
train_1 = train
for x in num_cols:
if train[x].isna().mean() > 0.15:
train_1 = train_1.drop(x, axis=1)
num_cols2 = train_1.select_dtypes(include=["number"]).columns
for x in range(len(num_cols2)):
train_1[num_cols2[x]].fillna(train_1[num_cols2[x]].mean(), inplace=True)
# null values above were filled with each column's mean;
# now encode the categorical columns as integers, then split features and target
# (a separate loop variable is used so it does not overwrite the feature matrix x)
for col in cat_cols:
    train_1[col] = LabelEncoder().fit_transform(train_1[col].astype(str))
x = train_1.drop("price_doc", axis=1)
y = train_1["price_doc"]
train_1[num_cols2].head()
# the categorical values are now label-encoded integers
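# Small illustration of what LabelEncoder does to each object column: it maps the sorted
# unique strings to the integers 0..n-1 (the category names below are made up for the demo).
le_demo = LabelEncoder().fit(["Investment", "OwnerOccupier", "Investment"])
print(dict(zip(le_demo.classes_, le_demo.transform(le_demo.classes_))))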
"""plt.figure(figsize=(10, 8))
sns.pointplot(x='floor', y='price_doc', data=train_1)
plt.ylabel('Price', fontsize=12)
plt.xlabel('Floor', fontsize=12)
plt.xticks(rotation='vertical')
plt.show()"""
"""
train_1[(train_1['floor']) == 33]
train_1[(train_1['floor']) == 32]
train_1[(train_1['floor']) == 28]
train_1[(train_1['floor']) == 27]
train_1[(train_1['floor']) == 21]
"""
"""
train_1.drop(train_1.index[14263], inplace=True)
train_1.drop(train_1.index[14348], inplace=True)
train_1.drop(train_1.index[17260], inplace=True)
train_1.drop(train_1.index[18261], inplace=True)
"""
"""train_1.drop(train_1.index[14263], inplace=True)"""
"""train_2 = train_1[train_1.drop(columns=["id"]).duplicated(keep='first')]
train_2.info()"""
"""
x_train, x_test, y_train,y_test = train_test_split(x, y, test_size=0.5, random_state=52)
"""
"""
floresta = RandomForestRegressor(random_state=52, n_estimators=350,max_depth=10)
floresta.fit(x_train, y_train) """  # builds the random forest
"""y_pred = floresta.predict(x_test)  # makes the prediction and evaluates the forest
rmsle = mean_squared_log_error(y_test, y_pred) ** 0.5
print("RMSLE:", rmsle)"""
x_train, x_test, y_train, y_test = train_test_split(
x, y, test_size=0.5, random_state=52
)
scaler = StandardScaler()
scaler.fit(x_train)
x_train = scaler.transform(x_train)
x_test = scaler.transform(x_test)
modelo = ElasticNet(alpha=1000)
modelo.fit(x_train, y_train)
y_pred = modelo.predict(x_test)
# evaluate against the held-out labels; negative predictions are clipped so the log metric is defined
rmsle = mean_squared_log_error(y_test, np.clip(y_pred, 0, None)) ** 0.5
print("RMSLE:", rmsle)
import zipfile
z = zipfile.ZipFile("/kaggle/input/sberbank-russian-housing-market/test.csv.zip")
z.extractall()
test = pd.read_csv("/kaggle/working/test.csv")
num_cols = test.select_dtypes(include=["number"]).columns
cat_cols = test.select_dtypes(include=["object"]).columns
test_1 = test.copy()
# fill the numeric nulls in the test set with each column's mean
for col in test_1.select_dtypes(include=["number"]).columns:
    test_1[col].fillna(test_1[col].mean(), inplace=True)
# encode the test categorical columns as integers
# (refitting the encoders on the test values is a simplification; reusing the encoders
#  fitted on the training data would be more correct)
for col in cat_cols:
    test_1[col] = LabelEncoder().fit_transform(test_1[col].astype(str))
# align the test features with the training feature columns, apply the same scaler,
# and predict with the trained model
test_features = scaler.transform(test_1.reindex(columns=x.columns, fill_value=0))
y_pred = modelo.predict(test_features)
output = pd.DataFrame({"id": test_1.id, "price_doc": y_pred})
output.to_csv("submission.csv", index=False)
print("Your submission was successfully saved!")
output.head()
|
/fsx/loubna/kaggle_data/kaggle-code-data/data/0129/102/129102433.ipynb
| null | null |
[{"Id": 129102433, "ScriptId": 38281692, "ParentScriptVersionId": NaN, "ScriptLanguageId": 9, "AuthorUserId": 14612488, "CreationDate": "05/11/2023 03:11:06", "VersionNumber": 3.0, "Title": "Dicas para uso de Arvore de Decis\u00e3o", "EvaluationDate": "05/11/2023", "IsChange": true, "TotalLines": 170.0, "LinesInsertedFromPrevious": 71.0, "LinesChangedFromPrevious": 0.0, "LinesUnchangedFromPrevious": 99.0, "LinesInsertedFromFork": 156.0, "LinesDeletedFromFork": 159.0, "LinesChangedFromFork": 0.0, "LinesUnchangedFromFork": 14.0, "TotalVotes": 0}]
| null | null | null | null |
import numpy as np # linear algebra
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)
# Input data files are available in the read-only "../input/" directory
# For example, running this (by clicking run or pressing Shift+Enter) will list all files under the input directory
import os
for dirname, _, filenames in os.walk("/kaggle/input"):
for filename in filenames:
print(os.path.join(dirname, filename))
# You can write up to 20GB to the current directory (/kaggle/working/) that gets preserved as output when you create a version using "Save & Run All"
# You can also write temporary files to /kaggle/temp/, but they won't be saved outside of the current session
import os
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
import zipfile
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.preprocessing import LabelEncoder
from sklearn.linear_model import LinearRegression
from sklearn.linear_model import Lasso, Ridge, ElasticNet
from sklearn.metrics import mean_absolute_error
from sklearn.metrics import mean_squared_error
from sklearn.metrics import r2_score
from sklearn.metrics import mean_squared_log_error
from sklearn.ensemble import RandomForestRegressor
from sklearn.datasets import make_regression
t = zipfile.ZipFile("/kaggle/input/sberbank-russian-housing-market/train.csv.zip")
t.extractall()
train = pd.read_csv("/kaggle/working/train.csv")
train.head()
num_cols = train.select_dtypes(include=["number"]).columns
cat_cols = train.select_dtypes(include=["object"]).columns
train[num_cols].describe()
train[cat_cols].describe()
# check for null values in the numeric columns
for x in num_cols:
if train[x].isna().mean() > 0:
print(x, " \t \t", train[x].isna().mean() * 100)
# check for null values in the categorical columns
print("\n\n")
for x in cat_cols:
if train[x].isna().mean() > 0:
print(x, " \t \t", train[x].isna().mean() * 100)
# drop numeric columns with more than 15% null values
train_1 = train
for x in num_cols:
if train[x].isna().mean() > 0.15:
train_1 = train_1.drop(x, axis=1)
num_cols2 = train_1.select_dtypes(include=["number"]).columns
for x in range(len(num_cols2)):
train_1[num_cols2[x]].fillna(train_1[num_cols2[x]].mean(), inplace=True)
# null values above were filled with each column's mean;
# now encode the categorical columns as integers, then split features and target
# (a separate loop variable is used so it does not overwrite the feature matrix x)
for col in cat_cols:
    train_1[col] = LabelEncoder().fit_transform(train_1[col].astype(str))
x = train_1.drop("price_doc", axis=1)
y = train_1["price_doc"]
train_1[num_cols2].head()
# the categorical values are now label-encoded integers
"""plt.figure(figsize=(10, 8))
sns.pointplot(x='floor', y='price_doc', data=train_1)
plt.ylabel('Price', fontsize=12)
plt.xlabel('Floor', fontsize=12)
plt.xticks(rotation='vertical')
plt.show()"""
"""
train_1[(train_1['floor']) == 33]
train_1[(train_1['floor']) == 32]
train_1[(train_1['floor']) == 28]
train_1[(train_1['floor']) == 27]
train_1[(train_1['floor']) == 21]
"""
"""
train_1.drop(train_1.index[14263], inplace=True)
train_1.drop(train_1.index[14348], inplace=True)
train_1.drop(train_1.index[17260], inplace=True)
train_1.drop(train_1.index[18261], inplace=True)
"""
"""train_1.drop(train_1.index[14263], inplace=True)"""
"""train_2 = train_1[train_1.drop(columns=["id"]).duplicated(keep='first')]
train_2.info()"""
"""
x_train, x_test, y_train,y_test = train_test_split(x, y, test_size=0.5, random_state=52)
"""
"""
floresta = RandomForestRegressor(random_state=52, n_estimators=350,max_depth=10)
floresta.fit(x_train, y_train) """  # builds the random forest
"""y_pred = floresta.predict(x_test)  # makes the prediction and evaluates the forest
rmsle = mean_squared_log_error(y_test, y_pred) ** 0.5
print("RMSLE:", rmsle)"""
x_train, x_test, y_train, y_test = train_test_split(
x, y, test_size=0.5, random_state=52
)
scaler = StandardScaler()
scaler.fit(x_train)
x_train = scaler.transform(x_train)
x_test = scaler.transform(x_test)
modelo = ElasticNet(alpha=1000)
modelo.fit(x_train, y_train)
y_pred = modelo.predict(x_test)
# evaluate against the held-out labels; negative predictions are clipped so the log metric is defined
rmsle = mean_squared_log_error(y_test, np.clip(y_pred, 0, None)) ** 0.5
print("RMSLE:", rmsle)
import zipfile
z = zipfile.ZipFile("/kaggle/input/sberbank-russian-housing-market/test.csv.zip")
z.extractall()
test = pd.read_csv("/kaggle/working/test.csv")
num_cols = test.select_dtypes(include=["number"]).columns
cat_cols = test.select_dtypes(include=["object"]).columns
test_1 = test.copy()
# fill the numeric nulls in the test set with each column's mean
for col in test_1.select_dtypes(include=["number"]).columns:
    test_1[col].fillna(test_1[col].mean(), inplace=True)
# encode the test categorical columns as integers
# (refitting the encoders on the test values is a simplification; reusing the encoders
#  fitted on the training data would be more correct)
for col in cat_cols:
    test_1[col] = LabelEncoder().fit_transform(test_1[col].astype(str))
# align the test features with the training feature columns, apply the same scaler,
# and predict with the trained model
test_features = scaler.transform(test_1.reindex(columns=x.columns, fill_value=0))
y_pred = modelo.predict(test_features)
output = pd.DataFrame({"id": test_1.id, "price_doc": y_pred})
output.to_csv("submission.csv", index=False)
print("Your submission was successfully saved!")
output.head()
| false | 0 | 1,720 | 0 | 1,720 | 1,720 |
||
129078185
|
<jupyter_start><jupyter_text>7500 hotels from Airbnb, Booking and Hotels.com
This dataset was created while I was working on a SerpApi demo project to showcase [hotels-scraper-js](https://www.npmjs.com/package/hotels-scraper-js) NPM tool.
This dataset includes only hotel listings from Airbnb, Booking, and Hotels.com, and the main point was to explore prices in famous European capitals. 500 hotels from each website per city.
In total, there're 7500 hotel listings.
Kaggle dataset identifier: 500-hotels-from-airbnb-booking-and-hotelscom
<jupyter_script>import numpy as np # linear algebra
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)
# Input data files are available in the read-only "../input/" directory
# For example, running this (by clicking run or pressing Shift+Enter) will list all files under the input directory
import os
for dirname, _, filenames in os.walk("/kaggle/input"):
for filename in filenames:
print(os.path.join(dirname, filename))
# You can write up to 20GB to the current directory (/kaggle/working/) that gets preserved as output when you create a version using "Save & Run All"
# You can also write temporary files to /kaggle/temp/, but they won't be saved outside of the current session
import numpy as np
import pandas as pd
import json
files = [
"/kaggle/input/500-hotels-from-airbnb-booking-and-hotelscom/Berlin.json",
"/kaggle/input/500-hotels-from-airbnb-booking-and-hotelscom/Rome.json",
"/kaggle/input/500-hotels-from-airbnb-booking-and-hotelscom/Madrid.json",
"/kaggle/input/500-hotels-from-airbnb-booking-and-hotelscom/London.json",
"/kaggle/input/500-hotels-from-airbnb-booking-and-hotelscom/Paris.json",
]
list_cities = [file.split("/")[-1].split(".")[0] for file in files]
print(list_cities)
dict_city_filepath = dict(zip(list_cities, files)).items()
agg = ["airbnbHotels", "bookingHotels", "hotelsComHotels"]
airbnb_df = pd.DataFrame()
bookinghotel_df = pd.DataFrame()
hotelsdotcom_df = pd.DataFrame()
for city, filepath in dict_city_filepath:
with open(filepath, "r") as f:
data = json.load(f)
# create a temporary DataFrame to hold the data
temp_df = pd.DataFrame(data[agg[0]])
temp_df["City"] = city
# append the temporary dataframe to the airbnb dataframe
airbnb_df = pd.concat([airbnb_df, temp_df], ignore_index=True)
# repeat the process for bookinghotel and hotelsdotcom dataframes
temp_df = pd.DataFrame(data[agg[1]])
temp_df["City"] = city
bookinghotel_df = pd.concat([bookinghotel_df, temp_df], ignore_index=True)
temp_df = pd.DataFrame(data[agg[2]])
temp_df["City"] = city
hotelsdotcom_df = pd.concat([hotelsdotcom_df, temp_df], ignore_index=True)
print(airbnb_df.shape)
print(bookinghotel_df.shape)
hotelsdotcom_df.shape
airbnb_df.columns
airbnb_df.City.value_counts()
airbnb_df.head()
airbnb_df.groupby("City")
airbnb_df.info()
airbnb_df["rating"] = airbnb_df.rating.replace("No rating", np.nan).astype(
float, errors="ignore"
)
airbnb_df.groupby("City")["rating"].mean()
airbnb_df.info()
airbnb_df.thumbnail[15]
airbnb_df["price"].apply(lambda x: x["currency"])
airbnb_df["currency"] = airbnb_df.price["currency"]
airbnb_df[["currency", "value", "period"]] = airbnb_df["price"].apply(
lambda x: pd.Series([x.get("currency"), x.get("value"), x.get("period")])
)
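# Alternative sketch for the expansion above, assuming every entry in "price" is a dict
# with the same keys: pd.json_normalize flattens a list of dicts into columns in one call.
price_expanded = pd.json_normalize(airbnb_df["price"].tolist())
print(price_expanded.head())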
airbnb_df.value.unique()
airbnb_df.price
airbnb_df.head()
# dropping reason: the price column has been expanded into separate columns above, currency holds only
# a single value, and period could also be dropped since it has only one unique value
airbnb_df = airbnb_df.drop(["price", "currency"], axis=1).rename(
columns={"value": "price"}
)
airbnb_df.groupby("City").mean(numeric_only=True)[["rating", "price"]].reset_index()
import plotly.express as px
px.bar(
airbnb_df.groupby("City")
.mean(numeric_only=True)[["rating", "price"]]
.reset_index(),
x="City",
y="price",
color="rating",
)
# not much more EDA to do here
for i in range(5):
print(airbnb_df.thumbnail[i])
|
/fsx/loubna/kaggle_data/kaggle-code-data/data/0129/078/129078185.ipynb
|
500-hotels-from-airbnb-booking-and-hotelscom
|
mykhailozub
|
[{"Id": 129078185, "ScriptId": 38369292, "ParentScriptVersionId": NaN, "ScriptLanguageId": 9, "AuthorUserId": 12716271, "CreationDate": "05/10/2023 20:17:47", "VersionNumber": 2.0, "Title": "7500 hotels from Aggretor EDA", "EvaluationDate": "05/10/2023", "IsChange": false, "TotalLines": 112.0, "LinesInsertedFromPrevious": 0.0, "LinesChangedFromPrevious": 0.0, "LinesUnchangedFromPrevious": 112.0, "LinesInsertedFromFork": NaN, "LinesDeletedFromFork": NaN, "LinesChangedFromFork": NaN, "LinesUnchangedFromFork": NaN, "TotalVotes": 4}]
|
[{"Id": 184812975, "KernelVersionId": 129078185, "SourceDatasetVersionId": 5602416}]
|
[{"Id": 5602416, "DatasetId": 3222832, "DatasourceVersionId": 5677448, "CreatorUserId": 14943658, "LicenseName": "CC0: Public Domain", "CreationDate": "05/04/2023 15:37:17", "VersionNumber": 1.0, "Title": "7500 hotels from Airbnb, Booking and Hotels.com", "Slug": "500-hotels-from-airbnb-booking-and-hotelscom", "Subtitle": "Hotels in Berlin, London, Madrid, Paris, Rome", "Description": "This dataset was created while I was working on a SerpApi demo project to showcase [hotels-scraper-js](https://www.npmjs.com/package/hotels-scraper-js) NPM tool.\n\nThis dataset includes only hotel listings from Airbnb, Booking, and Hotels.com, and the main point was to explore prices in famous European capitals. 500 hotels from each website per city.\n\nIn total, there're 7500 hotel listings.", "VersionNotes": "Initial release", "TotalCompressedBytes": 0.0, "TotalUncompressedBytes": 0.0}]
|
[{"Id": 3222832, "CreatorUserId": 14943658, "OwnerUserId": 14943658.0, "OwnerOrganizationId": NaN, "CurrentDatasetVersionId": 5602416.0, "CurrentDatasourceVersionId": 5677448.0, "ForumId": 3287790, "Type": 2, "CreationDate": "05/04/2023 15:37:17", "LastActivityDate": "05/04/2023", "TotalViews": 5350, "TotalDownloads": 1172, "TotalVotes": 25, "TotalKernels": 2}]
|
[{"Id": 14943658, "UserName": "mykhailozub", "DisplayName": "Mykhailo Zub", "RegisterDate": "05/04/2023", "PerformanceTier": 0}]
|
import numpy as np # linear algebra
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)
# Input data files are available in the read-only "../input/" directory
# For example, running this (by clicking run or pressing Shift+Enter) will list all files under the input directory
import os
for dirname, _, filenames in os.walk("/kaggle/input"):
for filename in filenames:
print(os.path.join(dirname, filename))
# You can write up to 20GB to the current directory (/kaggle/working/) that gets preserved as output when you create a version using "Save & Run All"
# You can also write temporary files to /kaggle/temp/, but they won't be saved outside of the current session
import numpy as np
import pandas as pd
import json
files = [
"/kaggle/input/500-hotels-from-airbnb-booking-and-hotelscom/Berlin.json",
"/kaggle/input/500-hotels-from-airbnb-booking-and-hotelscom/Rome.json",
"/kaggle/input/500-hotels-from-airbnb-booking-and-hotelscom/Madrid.json",
"/kaggle/input/500-hotels-from-airbnb-booking-and-hotelscom/London.json",
"/kaggle/input/500-hotels-from-airbnb-booking-and-hotelscom/Paris.json",
]
list_cities = [file.split("/")[-1].split(".")[0] for file in files]
print(list_cities)
dict_city_filepath = dict(zip(list_cities, files)).items()
agg = ["airbnbHotels", "bookingHotels", "hotelsComHotels"]
airbnb_df = pd.DataFrame()
bookinghotel_df = pd.DataFrame()
hotelsdotcom_df = pd.DataFrame()
for city, filepath in dict_city_filepath:
with open(filepath, "r") as f:
data = json.load(f)
# create a temporary DataFrame to hold the data
temp_df = pd.DataFrame(data[agg[0]])
temp_df["City"] = city
# append the temporary dataframe to the airbnb dataframe
airbnb_df = pd.concat([airbnb_df, temp_df], ignore_index=True)
# repeat the process for bookinghotel and hotelsdotcom dataframes
temp_df = pd.DataFrame(data[agg[1]])
temp_df["City"] = city
bookinghotel_df = pd.concat([bookinghotel_df, temp_df], ignore_index=True)
temp_df = pd.DataFrame(data[agg[2]])
temp_df["City"] = city
hotelsdotcom_df = pd.concat([hotelsdotcom_df, temp_df], ignore_index=True)
print(airbnb_df.shape)
print(bookinghotel_df.shape)
hotelsdotcom_df.shape
airbnb_df.columns
airbnb_df.City.value_counts()
airbnb_df.head()
airbnb_df.groupby("City")
airbnb_df.info()
airbnb_df["rating"] = airbnb_df.rating.replace("No rating", np.nan).astype(
float, errors="ignore"
)
airbnb_df.groupby("City")["rating"].mean()
airbnb_df.info()
airbnb_df.thumbnail[15]
airbnb_df["price"].apply(lambda x: x["currency"])
airbnb_df["currency"] = airbnb_df.price["currency"]
airbnb_df[["currency", "value", "period"]] = airbnb_df["price"].apply(
lambda x: pd.Series([x.get("currency"), x.get("value"), x.get("period")])
)
airbnb_df.value.unique()
airbnb_df.price
airbnb_df.head()
# dropping reason: the price column has been expanded into separate columns above, currency holds only
# a single value, and period could also be dropped since it has only one unique value
airbnb_df = airbnb_df.drop(["price", "currency"], axis=1).rename(
columns={"value": "price"}
)
airbnb_df.groupby("City").mean(numeric_only=True)[["rating", "price"]].reset_index()
import plotly.express as px
px.bar(
airbnb_df.groupby("City")
.mean(numeric_only=True)[["rating", "price"]]
.reset_index(),
x="City",
y="price",
color="rating",
)
# not much more EDA to do here
for i in range(5):
print(airbnb_df.thumbnail[i])
| false | 0 | 1,124 | 4 | 1,290 | 1,124 |
||
129078489
|
<jupyter_start><jupyter_text>Books Dataset
### Context
Books read by users and ratings provided by them on Amazon
### Content
Online data for books from Amazon along with user ratings and users who bought them
Kaggle dataset identifier: books-dataset
<jupyter_script># # A BOOKISH DATASET
# **Context:**
# There are so many potential questions we could explore with this dataset, but the question that piqued my interest is: Are there any correlations between user demographics (age, gender, location) and book preferences? Do certain types of users tend to prefer certain types of books?
# **Description of the dataset:**
# This dataset has been compiled by Cai-Nicolas Ziegler (2004).
# Inside, there are three tables for users, books and ratings.
# *Let's get started!*
# First, we are gonna import all our libraries, and then proceed to evaluate the dataset.
# We are gonna be analyzing the user data (demographic information) alongside the ratings data, and looking for correlations between demographic factors and book preferences.
# For example, do younger users tend to prefer certain genres of books, or are there regional differences in book preferences?
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
from sklearn.preprocessing import LabelEncoder
# databases
books_df = pd.read_csv(
"../input/books-dataset/books_data/books.csv",
sep=";",
error_bad_lines=False,
encoding="latin-1",
)
ratings_df = pd.read_csv(
"../input/books-dataset/books_data/ratings.csv",
sep=";",
error_bad_lines=False,
encoding="latin-1",
)
users_df = pd.read_csv(
"../input/books-dataset/books_data/users.csv",
sep=";",
error_bad_lines=False,
encoding="latin-1",
)
# For anyone else using this dataset, there is a problem with the encoding, so try using latin-1.
# The default encoding used by pandas (utf-8) fails on these stubborn files, so latin-1 is used instead.
# We are also not gonna use the Image URL so we are dropping that at once
books_df.drop(["Image-URL-S", "Image-URL-M", "Image-URL-L"], axis=1, inplace=True)
books_df.head(5)
ratings_df.head(5)
users_df.head(5)
# we see some NA values in users' Age, so we are gonna take care of that
# (after fillna(0) there is nothing left for dropna to remove, so that step is skipped)
users_df = users_df.fillna(0)
users_df = users_df.replace({"%": ""}, regex=True)
print(users_df.head(5))
users_df.head(5)
users_df.info()
ratings_df.info()
books_df.info()
# new dataframe! We will be using ISBN as our common denominator
ratings_books_df = pd.merge(ratings_df, books_df, on="ISBN")
merged_df = pd.merge(ratings_books_df, users_df, on="User-ID")
# New dataframe! So lovely.
merged_df.head(21)
# Summary statistics for location, as we are trying to determine if demographics are
# correlated with book preferences
# First: LOCATION
location_mean = merged_df["Location"].value_counts().mean()
location_median = merged_df["Location"].value_counts().median()
location_mode = merged_df["Location"].value_counts().idxmax()
location_range = merged_df["Location"].nunique()
location_std = merged_df["Location"].value_counts().std()
# Then: AGE
age_mean = merged_df["Age"].mean()
age_median = merged_df["Age"].median()
age_mode = merged_df["Age"].mode()[0]
age_range = merged_df["Age"].max() - merged_df["Age"].min()
age_std = merged_df["Age"].std()
# And now, book preferences (Which translates to Book-Rating)
rating_count = merged_df["Book-Rating"].count()
rating_mean = merged_df["Book-Rating"].mean()
rating_median = merged_df["Book-Rating"].median()
rating_mode = merged_df["Book-Rating"].mode()[0]
rating_range = merged_df["Book-Rating"].max() - merged_df["Book-Rating"].min()
rating_std = merged_df["Book-Rating"].std()
# And we print the summary statistics
print("Summary statistics for demographic factors:")
print(f"Location mean: {location_mean}")
print(f"Location median: {location_median}")
print(f"Location mode: {location_mode}")
print(f"Location range: {location_range}")
print(f"Location std: {location_std}")
print(f"Age mean: {age_mean}")
print(f"Age median: {age_median}")
print(f"Age mode: {age_mode}")
print(f"Age range: {age_range}")
print(f"Age std: {age_std}")
print("\nSummary statistics for the book preferences:")
print(f"Rating count: {rating_count}")
print(f"Rating mean: {rating_mean}")
print(f"Rating median: {rating_median}")
print(f"Rating mode: {rating_mode}")
print(f"Rating range: {rating_range}")
print(f"Rating std: {rating_std}")
# We are interested in taking a closer look in ratings, so:
# Extract the ratings variable from the ratings DataFrame
ratings = ratings_df["Book-Rating"]
# Histogram
plt.hist(ratings, bins=11, range=(-0.5, 10.5), edgecolor="black", color="#AA98A9")
# Add labels and title to the plot
plt.xlabel("Book Rating")
plt.ylabel("Frequency")
plt.title("Histogram of Book Ratings")
# Display the plot
plt.show()
# The summary already suggested that 0 is the most common rating, and the histogram makes it visually obvious: implicit ratings of 0 dominate, ratings between 1 and 4 are rare, and the explicit ratings are spread fairly evenly between 5 and 10, with 8 the most frequent after 0.
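# A quick sketch backing up the comment above: separate the implicit 0 ratings from the explicit 1-10 ones.
explicit = ratings_df[ratings_df["Book-Rating"] > 0]["Book-Rating"]
print(f"Share of implicit (0) ratings: {(ratings_df['Book-Rating'] == 0).mean():.1%}")
print(f"Most common explicit rating: {explicit.mode()[0]}")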
# ___
# Now, we create some scatter plots of age and location against book ratings to see if there is a correlation between these demographic characteristics and book preferences:
# For that, we are gonna extract the columns of User-ID, ISBN and Book-Rating from the
# Ratings df and the User-ID, Location and Age from the Users df
ratings = ratings_df[["User-ID", "ISBN", "Book-Rating"]]
users = users_df[["User-ID", "Location", "Age"]]
# we are gonna merge them together:
merged = pd.merge(ratings, users, on="User-ID")
# and we plot for Age
plt.scatter(merged["Age"], merged["Book-Rating"], color="#D8BFD8", alpha=0.1)
plt.xlabel("Age")
plt.ylabel("Book Rating")
plt.title("Scatter Plot of Age vs. Book Ratings")
plt.show()
# we plot for Location
plt.scatter(merged["Location"], merged["Book-Rating"], color="#D8BFD8", alpha=0.1)
plt.xlabel("Location")
plt.ylabel("Book Rating")
plt.title("Scatter Plot of Location vs. Book Ratings")
plt.show()
# and we plot the ratings against the book ISBN
plt.scatter(merged["ISBN"], merged["Book-Rating"], color="#D8BFD8", alpha=0.1)
plt.xlabel("ISBN")
plt.ylabel("Book Rating")
plt.title("Scatter Plot of ISBN vs. Book Ratings")
plt.show()
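# Closing sketch for the original question: a single correlation number between age and rating.
# Implicit 0 ratings and the placeholder ages of 0 (from the fillna step) are excluded here,
# which is an assumption about how to treat them.
subset = merged[(merged["Age"] > 0) & (merged["Book-Rating"] > 0)]
print("Age vs. rating correlation:", round(subset["Age"].corr(subset["Book-Rating"]), 3))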
|
/fsx/loubna/kaggle_data/kaggle-code-data/data/0129/078/129078489.ipynb
|
books-dataset
|
saurabhbagchi
|
[{"Id": 129078489, "ScriptId": 38327634, "ParentScriptVersionId": NaN, "ScriptLanguageId": 9, "AuthorUserId": 14338641, "CreationDate": "05/10/2023 20:22:31", "VersionNumber": 3.0, "Title": "Book dataset", "EvaluationDate": "05/10/2023", "IsChange": true, "TotalLines": 161.0, "LinesInsertedFromPrevious": 109.0, "LinesChangedFromPrevious": 0.0, "LinesUnchangedFromPrevious": 52.0, "LinesInsertedFromFork": NaN, "LinesDeletedFromFork": NaN, "LinesChangedFromFork": NaN, "LinesUnchangedFromFork": NaN, "TotalVotes": 1}]
|
[{"Id": 184813710, "KernelVersionId": 129078489, "SourceDatasetVersionId": 1546766}]
|
[{"Id": 1546766, "DatasetId": 912577, "DatasourceVersionId": 1581517, "CreatorUserId": 168670, "LicenseName": "CC0: Public Domain", "CreationDate": "10/09/2020 05:14:41", "VersionNumber": 1.0, "Title": "Books Dataset", "Slug": "books-dataset", "Subtitle": "Subset of the books available in Amazon", "Description": "### Context\n\nBooks read by users and ratings provided by them on Amazon\n\n\n### Content\n\nOnline data for books from Amazon along with user ratings and users who bought them\n\n\n### Acknowledgements\n\nPrimarily for building recommender systems.\nThis dataset has been compiled by Cai-Nicolas Ziegler in 2004, and it comprises of three tables for users, books and ratings. \nExplicit ratings are expressed on a scale from 1-10 (higher values denoting higher appreciation) and implicit rating is expressed by 0\nhttp://www2.informatik.uni-freiburg.de/~cziegler/BX/\n\n\n### Inspiration\n\nCan we select and recommend the top 10 books for each user based on past purchase behavior?", "VersionNotes": "Initial release", "TotalCompressedBytes": 0.0, "TotalUncompressedBytes": 0.0}]
|
[{"Id": 912577, "CreatorUserId": 168670, "OwnerUserId": 168670.0, "OwnerOrganizationId": NaN, "CurrentDatasetVersionId": 1546766.0, "CurrentDatasourceVersionId": 1581517.0, "ForumId": 928361, "Type": 2, "CreationDate": "10/09/2020 05:14:41", "LastActivityDate": "10/09/2020", "TotalViews": 46601, "TotalDownloads": 6190, "TotalVotes": 53, "TotalKernels": 5}]
|
[{"Id": 168670, "UserName": "saurabhbagchi", "DisplayName": "Old Monk", "RegisterDate": "02/24/2014", "PerformanceTier": 3}]
|
# # A BOOKISH DATASET
# **Context:**
# There are so many potential questions we could explore with this dataset, but the question that piqued my interest is: Are there any correlations between user demographics (age, gender, location) and book preferences? Do certain types of users tend to prefer certain types of books?
# **Description of the dataset:**
# This dataset has been compiled by Cai-Nicolas Ziegler (2004).
# Inside, there are three tables for users, books and ratings.
# *Let's get started!*
# First, we are gonna import all our libraries, and then proceed to evaluate the dataset.
# We are gonna be analyzing the user data (demographic information) alongside the ratings data, and looking for correlations between demographic factors and book preferences.
# For example, do younger users tend to prefer certain genres of books, or are there regional differences in book preferences?
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
from sklearn.preprocessing import LabelEncoder
# databases
books_df = pd.read_csv(
"../input/books-dataset/books_data/books.csv",
sep=";",
error_bad_lines=False,
encoding="latin-1",
)
ratings_df = pd.read_csv(
"../input/books-dataset/books_data/ratings.csv",
sep=";",
error_bad_lines=False,
encoding="latin-1",
)
users_df = pd.read_csv(
"../input/books-dataset/books_data/users.csv",
sep=";",
error_bad_lines=False,
encoding="latin-1",
)
# For anyone else using this dataset, there is a problem with the encoding, so try using latin-1.
# The default encoding used by pandas (utf-8) fails on these stubborn files, so latin-1 is used instead.
# We are also not gonna use the Image URL so we are dropping that at once
books_df.drop(["Image-URL-S", "Image-URL-M", "Image-URL-L"], axis=1, inplace=True)
books_df.head(5)
ratings_df.head(5)
users_df.head(5)
# we see some NA values in users' Age, so we are gonna take care of that
# (after fillna(0) there is nothing left for dropna to remove, so that step is skipped)
users_df = users_df.fillna(0)
users_df = users_df.replace({"%": ""}, regex=True)
print(users_df.head(5))
users_df.head(5)
users_df.info()
ratings_df.info()
books_df.info()
# new dataframe! We will be using ISBN as our common denominator
ratings_books_df = pd.merge(ratings_df, books_df, on="ISBN")
merged_df = pd.merge(ratings_books_df, users_df, on="User-ID")
# New dataframe! So lovely.
merged_df.head(21)
# Summary statistics for location, as we are trying to determine if demographics are
# correlated with book preferences
# First: LOCATION
location_mean = merged_df["Location"].value_counts().mean()
location_median = merged_df["Location"].value_counts().median()
location_mode = merged_df["Location"].value_counts().idxmax()
location_range = merged_df["Location"].nunique()
location_std = merged_df["Location"].value_counts().std()
# Then: AGE
age_mean = merged_df["Age"].mean()
age_median = merged_df["Age"].median()
age_mode = merged_df["Age"].mode()[0]
age_range = merged_df["Age"].max() - merged_df["Age"].min()
age_std = merged_df["Age"].std()
# And now, book preferences (Which translates to Book-Rating)
rating_count = merged_df["Book-Rating"].count()
rating_mean = merged_df["Book-Rating"].mean()
rating_median = merged_df["Book-Rating"].median()
rating_mode = merged_df["Book-Rating"].mode()[0]
rating_range = merged_df["Book-Rating"].max() - merged_df["Book-Rating"].min()
rating_std = merged_df["Book-Rating"].std()
# And we print the summary statistics
print("Summary statistics for demographic factors:")
print(f"Location mean: {location_mean}")
print(f"Location median: {location_median}")
print(f"Location mode: {location_mode}")
print(f"Location range: {location_range}")
print(f"Location std: {location_std}")
print(f"Age mean: {age_mean}")
print(f"Age median: {age_median}")
print(f"Age mode: {age_mode}")
print(f"Age range: {age_range}")
print(f"Age std: {age_std}")
print("\nSummary statistics for the book preferences:")
print(f"Rating count: {rating_count}")
print(f"Rating mean: {rating_mean}")
print(f"Rating median: {rating_median}")
print(f"Rating mode: {rating_mode}")
print(f"Rating range: {rating_range}")
print(f"Rating std: {rating_std}")
# We are interested in taking a closer look in ratings, so:
# Extract the ratings variable from the ratings DataFrame
ratings = ratings_df["Book-Rating"]
# Histogram
plt.hist(ratings, bins=11, range=(-0.5, 10.5), edgecolor="black", color="#AA98A9")
# Add labels and title to the plot
plt.xlabel("Book Rating")
plt.ylabel("Frequency")
plt.title("Histogram of Book Ratings")
# Display the plot
plt.show()
# The summary already suggested that 0 is the most common rating, and the histogram makes it visually obvious: implicit ratings of 0 dominate, ratings between 1 and 4 are rare, and the explicit ratings are spread fairly evenly between 5 and 10, with 8 the most frequent after 0.
# ___
# Now, we create some scatter plots of age and location against book ratings to see if there is a correlation between these demographic characteristics and book preferences:
# For that, we are gonna extract the columns of User-ID, ISBN and Book-Rating from the
# Ratings df and the User-ID, Location and Age from the Users df
ratings = ratings_df[["User-ID", "ISBN", "Book-Rating"]]
users = users_df[["User-ID", "Location", "Age"]]
# we are gonna merge them together:
merged = pd.merge(ratings, users, on="User-ID")
# and we plot for Age
plt.scatter(merged["Age"], merged["Book-Rating"], color="#D8BFD8", alpha=0.1)
plt.xlabel("Age")
plt.ylabel("Book Rating")
plt.title("Scatter Plot of Age vs. Book Ratings")
plt.show()
# we plot for Location
plt.scatter(merged["Location"], merged["Book-Rating"], color="#D8BFD8", alpha=0.1)
plt.xlabel("Location")
plt.ylabel("Book Rating")
plt.title("Scatter Plot of Location vs. Book Ratings")
plt.show()
# and we plot the ratings against the book ISBN
plt.scatter(merged["ISBN"], merged["Book-Rating"], color="#D8BFD8", alpha=0.1)
plt.xlabel("ISBN")
plt.ylabel("Book Rating")
plt.title("Scatter Plot of ISBN vs. Book Ratings")
plt.show()
| false | 3 | 1,840 | 1 | 1,896 | 1,840 |
||
129078719
|
<jupyter_start><jupyter_text>Video Game Sales
This dataset contains a list of video games with sales greater than 100,000 copies. It was generated by a scrape of [vgchartz.com][1].
Fields include
* Rank - Ranking of overall sales
* Name - The games name
* Platform - Platform of the games release (i.e. PC,PS4, etc.)
* Year - Year of the game's release
* Genre - Genre of the game
* Publisher - Publisher of the game
* NA_Sales - Sales in North America (in millions)
* EU_Sales - Sales in Europe (in millions)
* JP_Sales - Sales in Japan (in millions)
* Other_Sales - Sales in the rest of the world (in millions)
* Global_Sales - Total worldwide sales.
The script to scrape the data is available at https://github.com/GregorUT/vgchartzScrape.
It is based on BeautifulSoup using Python.
There are 16,598 records. 2 records were dropped due to incomplete information.
[1]: http://www.vgchartz.com/
Kaggle dataset identifier: videogamesales
<jupyter_code>import pandas as pd
df = pd.read_csv('videogamesales/vgsales.csv')
df.info()
<jupyter_output><class 'pandas.core.frame.DataFrame'>
RangeIndex: 16598 entries, 0 to 16597
Data columns (total 11 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 Rank 16598 non-null int64
1 Name 16598 non-null object
2 Platform 16598 non-null object
3 Year 16327 non-null float64
4 Genre 16598 non-null object
5 Publisher 16540 non-null object
6 NA_Sales 16598 non-null float64
7 EU_Sales 16598 non-null float64
8 JP_Sales 16598 non-null float64
9 Other_Sales 16598 non-null float64
10 Global_Sales 16598 non-null float64
dtypes: float64(6), int64(1), object(4)
memory usage: 1.4+ MB
<jupyter_text>Examples:
{
"Rank": 1,
"Name": "Wii Sports",
"Platform": "Wii",
"Year": 2006,
"Genre": "Sports",
"Publisher": "Nintendo",
"NA_Sales": 41.49,
"EU_Sales": 29.02,
"JP_Sales": 3.77,
"Other_Sales": 8.46,
"Global_Sales": 82.74
}
{
"Rank": 2,
"Name": "Super Mario Bros.",
"Platform": "NES",
"Year": 1985,
"Genre": "Platform",
"Publisher": "Nintendo",
"NA_Sales": 29.08,
"EU_Sales": 3.58,
"JP_Sales": 6.8100000000000005,
"Other_Sales": 0.77,
"Global_Sales": 40.24
}
{
"Rank": 3,
"Name": "Mario Kart Wii",
"Platform": "Wii",
"Year": 2008,
"Genre": "Racing",
"Publisher": "Nintendo",
"NA_Sales": 15.85,
"EU_Sales": 12.88,
"JP_Sales": 3.79,
"Other_Sales": 3.31,
"Global_Sales": 35.82
}
{
"Rank": 4,
"Name": "Wii Sports Resort",
"Platform": "Wii",
"Year": 2009,
"Genre": "Sports",
"Publisher": "Nintendo",
"NA_Sales": 15.75,
"EU_Sales": 11.01,
"JP_Sales": 3.2800000000000002,
"Other_Sales": 2.96,
"Global_Sales": 33.0
}
<jupyter_script>import numpy as np # linear algebra
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)
# Input data files are available in the read-only "../input/" directory
# For example, running this (by clicking run or pressing Shift+Enter) will list all files under the input directory
import os
for dirname, _, filenames in os.walk("/kaggle/input"):
for filename in filenames:
print(os.path.join(dirname, filename))
# You can write up to 20GB to the current directory (/kaggle/working/) that gets preserved as output when you create a version using "Save & Run All"
# You can also write temporary files to /kaggle/temp/, but they won't be saved outside of the current session
# # Lab Class 12: vg-stats
# ## Mohammad Shahin
df = pd.read_csv("/kaggle/input/videogamesales/vgsales.csv")
# df
# type(df)
df.describe()
df.info()
df
type(df)
"""
Q1: Which company is the most common video game publisher?
This examines the distribution of values in the Publisher column.
By default value_counts() arranges them from most to least frequent.
The to_frame() method puts the result in a DataFrame to make it more readable.
"""
df["Publisher"].value_counts().to_frame()
# df[df["Publisher"].count
"""
Q1: Which company is the most common video game publisher?
To return only the most common value of the Publisher column we can use .idxmax().
value_counts() counts how many times each value appears.
idxmax() returns the index label of the most common value.
"""
df["Publisher"].value_counts().idxmax()
"""
Q2: What's the most common platform?
I used the same methods as before.
"""
# df['Platform'].value_counts().to_frame() shows the column arranged from most common to least
df["Platform"].value_counts().idxmax()  # to only get the most common value
"""
Q3: What about the most common genre?
I used the same methods as before, but for genre.
"""
# df['Genre'].value_counts().to_frame() shows the column arranged from most common to least
df["Genre"].value_counts().idxmax()  # to only get the most common value
"""
Q4: What are the top 20 highest grossing games?
The data is already ranked by Global_Sales, so sorting by global sales and taking the
first 20 rows returns the top 20 highest-grossing games.
"""
df.sort_values("Global_Sales", ascending=False).head(20)[["Name", "Platform", "Global_Sales"]]
"""
Q5: For North American video game sales, what’s the median?
The median() method returns the middle value of NA_Sales; the cells below then pull the
five games just above and the five games at or just below that median for context.
"""
median = df["NA_Sales"].median()
median
more_than_median = df[df["NA_Sales"] > median].sort_values("NA_Sales").iloc[:5]
less_than_median = df[df["NA_Sales"] <= median].sort_values("NA_Sales").iloc[-5:]
more_than_median
less_than_median
M_L_median = pd.concat([more_than_median, less_than_median])
M_L_median
# # 6-For the top-selling game of all time, how many standard deviations above/below the mean are its sales for North America?
# use the North American sales of the top-ranked game, since the question asks about NA sales
top_na_sales = df["NA_Sales"][0]
print(top_na_sales)
mean = df["NA_Sales"].mean()
SD = df["NA_Sales"].std()
answer = (top_na_sales - mean) / SD
answer
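# In words: the z-score above is (value - mean) / std, i.e. how many NA_Sales standard
# deviations the top-ranked game sits above the column mean.
print(f"The top game's NA sales are about {answer:.1f} standard deviations above the mean.")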
# # 7-The Nintendo Wii seems to have outdone itself with games. How does its average number of sales compare with all of the other platforms?
"""
Wii_sales: selects only the rows where the 'Platform' column has the value 'Wii',
then takes the 'Global_Sales' column from those rows and calculates its average.
Other_sales: does the same, but for rows where the 'Platform' column is not 'Wii'.
ratio: the ratio between the average global sales of Wii games and the average global
sales of games on all other platforms.
"""
Wii_sales = df[df["Platform"] == "Wii"]["Global_Sales"].mean()
print(Wii_sales)
Other_sales = df[df["Platform"] != "Wii"]["Global_Sales"].mean()
print(Other_sales)
ratio = Wii_sales / Other_sales
ratio
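# Hedged follow-up sketch: instead of pooling "everything that is not Wii", compare the Wii's
# average global sales against each platform individually.
platform_means = df.groupby("Platform")["Global_Sales"].mean().sort_values(ascending=False)
print(platform_means.head(10))
print("Wii rank among platforms by mean global sales:", list(platform_means.index).index("Wii") + 1)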
# # 8- find the sum of all NA_Sales?
na_sales_sum = df["NA_Sales"].sum()
na_sales_sum
|
/fsx/loubna/kaggle_data/kaggle-code-data/data/0129/078/129078719.ipynb
|
videogamesales
|
gregorut
|
[{"Id": 129078719, "ScriptId": 38342011, "ParentScriptVersionId": NaN, "ScriptLanguageId": 9, "AuthorUserId": 15004168, "CreationDate": "05/10/2023 20:26:27", "VersionNumber": 1.0, "Title": "vg-stats", "EvaluationDate": "05/10/2023", "IsChange": true, "TotalLines": 130.0, "LinesInsertedFromPrevious": 130.0, "LinesChangedFromPrevious": 0.0, "LinesUnchangedFromPrevious": 0.0, "LinesInsertedFromFork": NaN, "LinesDeletedFromFork": NaN, "LinesChangedFromFork": NaN, "LinesUnchangedFromFork": NaN, "TotalVotes": 0}]
|
[{"Id": 184814088, "KernelVersionId": 129078719, "SourceDatasetVersionId": 618}]
|
[{"Id": 618, "DatasetId": 284, "DatasourceVersionId": 618, "CreatorUserId": 462330, "LicenseName": "Unknown", "CreationDate": "10/26/2016 09:10:49", "VersionNumber": 2.0, "Title": "Video Game Sales", "Slug": "videogamesales", "Subtitle": "Analyze sales data from more than 16,500 games.", "Description": "This dataset contains a list of video games with sales greater than 100,000 copies. It was generated by a scrape of [vgchartz.com][1].\n\nFields include\n\n* Rank - Ranking of overall sales\n\n* Name - The games name\n\n* Platform - Platform of the games release (i.e. PC,PS4, etc.)\n\n* Year - Year of the game's release\n\n* Genre - Genre of the game\n\n* Publisher - Publisher of the game\n\n* NA_Sales - Sales in North America (in millions)\n\n* EU_Sales - Sales in Europe (in millions)\n\n* JP_Sales - Sales in Japan (in millions)\n\n* Other_Sales - Sales in the rest of the world (in millions)\n\n* Global_Sales - Total worldwide sales.\n\nThe script to scrape the data is available at https://github.com/GregorUT/vgchartzScrape.\nIt is based on BeautifulSoup using Python.\nThere are 16,598 records. 2 records were dropped due to incomplete information.\n\n\n [1]: http://www.vgchartz.com/", "VersionNotes": "Cleaned up formating", "TotalCompressedBytes": 1355781.0, "TotalUncompressedBytes": 1355781.0}]
|
[{"Id": 284, "CreatorUserId": 462330, "OwnerUserId": 462330.0, "OwnerOrganizationId": NaN, "CurrentDatasetVersionId": 618.0, "CurrentDatasourceVersionId": 618.0, "ForumId": 1788, "Type": 2, "CreationDate": "10/26/2016 08:17:30", "LastActivityDate": "02/06/2018", "TotalViews": 1798828, "TotalDownloads": 471172, "TotalVotes": 5485, "TotalKernels": 1480}]
|
[{"Id": 462330, "UserName": "gregorut", "DisplayName": "GregorySmith", "RegisterDate": "11/09/2015", "PerformanceTier": 1}]
|
import numpy as np # linear algebra
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)
# Input data files are available in the read-only "../input/" directory
# For example, running this (by clicking run or pressing Shift+Enter) will list all files under the input directory
import os
for dirname, _, filenames in os.walk("/kaggle/input"):
for filename in filenames:
print(os.path.join(dirname, filename))
# You can write up to 20GB to the current directory (/kaggle/working/) that gets preserved as output when you create a version using "Save & Run All"
# You can also write temporary files to /kaggle/temp/, but they won't be saved outside of the current session
# # Lab Class 12: vg-stats
# ## Mohammad Shahin
df = pd.read_csv("/kaggle/input/videogamesales/vgsales.csv")
# df
# type(df)
df.describe()
df.info()
df
type(df)
"""
Q1: Which company is the most common video game publisher?
This examines the distribution of values in the Publisher column.
By default value_counts() arranges them from most to least frequent.
The to_frame() method puts the result in a DataFrame to make it more readable.
"""
df["Publisher"].value_counts().to_frame()
# df[df["Publisher"].count
"""
Q1: Which company is the most common video game publisher?
To return only the most common value of the Publisher column we can use .idxmax().
value_counts() counts how many times each value appears.
idxmax() returns the index label of the most common value.
"""
df["Publisher"].value_counts().idxmax()
"""
Q2: What's the most common platform?
I used the same methods as before.
"""
# df['Platform'].value_counts().to_frame() shows the column arranged from most common to least
df["Platform"].value_counts().idxmax()  # to only get the most common value
"""
Q3: What about the most common genre?
I used the same methods as before, but for genre.
"""
# df['Genre'].value_counts().to_frame() shows the column arranged from most common to least
df["Genre"].value_counts().idxmax()  # to only get the most common value
"""
Q4: What are the top 20 highest grossing games?
The data is already ranked by Global_Sales, so sorting by global sales and taking the
first 20 rows returns the top 20 highest-grossing games.
"""
df.sort_values("Global_Sales", ascending=False).head(20)[["Name", "Platform", "Global_Sales"]]
"""
Q5: For North American video game sales, what’s the median?
The median() method returns the middle value of NA_Sales; the cells below then pull the
five games just above and the five games at or just below that median for context.
"""
median = df["NA_Sales"].median()
median
more_than_median = df[df["NA_Sales"] > median].sort_values("NA_Sales").iloc[:5]
less_than_median = df[df["NA_Sales"] <= median].sort_values("NA_Sales").iloc[-5:]
more_than_median
less_than_median
M_L_median = pd.concat([more_than_median, less_than_median])
M_L_median
# # 6-For the top-selling game of all time, how many standard deviations above/below the mean are its sales for North America?
# use the North American sales of the top-ranked game, since the question asks about NA sales
top_na_sales = df["NA_Sales"][0]
print(top_na_sales)
mean = df["NA_Sales"].mean()
SD = df["NA_Sales"].std()
answer = (top_na_sales - mean) / SD
answer
# # 7-The Nintendo Wii seems to have outdone itself with games. How does its average number of sales compare with all of the other platforms?
"""
Wii_sales: selects only the rows where the 'Platform' column has the value 'Wii',
then takes the 'Global_Sales' column from those rows and calculates its average.
Other_sales: does the same, but for rows where the 'Platform' column is not 'Wii'.
ratio: the ratio between the average global sales of Wii games and the average global
sales of games on all other platforms.
"""
Wii_sales = df[df["Platform"] == "Wii"]["Global_Sales"].mean()
print(Wii_sales)
Other_sales = df[df["Platform"] != "Wii"]["Global_Sales"].mean()
print(Other_sales)
ratio = Wii_sales / Other_sales
ratio
# # 8- find the sum of all NA_Sales?
na_sales_sum = df["NA_Sales"].sum()
na_sales_sum
|
[{"videogamesales/vgsales.csv": {"column_names": "[\"Rank\", \"Name\", \"Platform\", \"Year\", \"Genre\", \"Publisher\", \"NA_Sales\", \"EU_Sales\", \"JP_Sales\", \"Other_Sales\", \"Global_Sales\"]", "column_data_types": "{\"Rank\": \"int64\", \"Name\": \"object\", \"Platform\": \"object\", \"Year\": \"float64\", \"Genre\": \"object\", \"Publisher\": \"object\", \"NA_Sales\": \"float64\", \"EU_Sales\": \"float64\", \"JP_Sales\": \"float64\", \"Other_Sales\": \"float64\", \"Global_Sales\": \"float64\"}", "info": "<class 'pandas.core.frame.DataFrame'>\nRangeIndex: 16598 entries, 0 to 16597\nData columns (total 11 columns):\n # Column Non-Null Count Dtype \n--- ------ -------------- ----- \n 0 Rank 16598 non-null int64 \n 1 Name 16598 non-null object \n 2 Platform 16598 non-null object \n 3 Year 16327 non-null float64\n 4 Genre 16598 non-null object \n 5 Publisher 16540 non-null object \n 6 NA_Sales 16598 non-null float64\n 7 EU_Sales 16598 non-null float64\n 8 JP_Sales 16598 non-null float64\n 9 Other_Sales 16598 non-null float64\n 10 Global_Sales 16598 non-null float64\ndtypes: float64(6), int64(1), object(4)\nmemory usage: 1.4+ MB\n", "summary": "{\"Rank\": {\"count\": 16598.0, \"mean\": 8300.605253645017, \"std\": 4791.853932896403, \"min\": 1.0, \"25%\": 4151.25, \"50%\": 8300.5, \"75%\": 12449.75, \"max\": 16600.0}, \"Year\": {\"count\": 16327.0, \"mean\": 2006.4064433147546, \"std\": 5.828981114712805, \"min\": 1980.0, \"25%\": 2003.0, \"50%\": 2007.0, \"75%\": 2010.0, \"max\": 2020.0}, \"NA_Sales\": {\"count\": 16598.0, \"mean\": 0.26466742981082064, \"std\": 0.8166830292988796, \"min\": 0.0, \"25%\": 0.0, \"50%\": 0.08, \"75%\": 0.24, \"max\": 41.49}, \"EU_Sales\": {\"count\": 16598.0, \"mean\": 0.14665200626581515, \"std\": 0.5053512312869116, \"min\": 0.0, \"25%\": 0.0, \"50%\": 0.02, \"75%\": 0.11, \"max\": 29.02}, \"JP_Sales\": {\"count\": 16598.0, \"mean\": 0.077781660441017, \"std\": 0.30929064808220297, \"min\": 0.0, \"25%\": 0.0, \"50%\": 0.0, \"75%\": 0.04, \"max\": 10.22}, \"Other_Sales\": {\"count\": 16598.0, \"mean\": 0.0480630196409206, \"std\": 0.18858840291271461, \"min\": 0.0, \"25%\": 0.0, \"50%\": 0.01, \"75%\": 0.04, \"max\": 10.57}, \"Global_Sales\": {\"count\": 16598.0, \"mean\": 0.5374406555006628, \"std\": 1.5550279355699124, \"min\": 0.01, \"25%\": 0.06, \"50%\": 0.17, \"75%\": 0.47, \"max\": 82.74}}", "examples": "{\"Rank\":{\"0\":1,\"1\":2,\"2\":3,\"3\":4},\"Name\":{\"0\":\"Wii Sports\",\"1\":\"Super Mario Bros.\",\"2\":\"Mario Kart Wii\",\"3\":\"Wii Sports Resort\"},\"Platform\":{\"0\":\"Wii\",\"1\":\"NES\",\"2\":\"Wii\",\"3\":\"Wii\"},\"Year\":{\"0\":2006.0,\"1\":1985.0,\"2\":2008.0,\"3\":2009.0},\"Genre\":{\"0\":\"Sports\",\"1\":\"Platform\",\"2\":\"Racing\",\"3\":\"Sports\"},\"Publisher\":{\"0\":\"Nintendo\",\"1\":\"Nintendo\",\"2\":\"Nintendo\",\"3\":\"Nintendo\"},\"NA_Sales\":{\"0\":41.49,\"1\":29.08,\"2\":15.85,\"3\":15.75},\"EU_Sales\":{\"0\":29.02,\"1\":3.58,\"2\":12.88,\"3\":11.01},\"JP_Sales\":{\"0\":3.77,\"1\":6.81,\"2\":3.79,\"3\":3.28},\"Other_Sales\":{\"0\":8.46,\"1\":0.77,\"2\":3.31,\"3\":2.96},\"Global_Sales\":{\"0\":82.74,\"1\":40.24,\"2\":35.82,\"3\":33.0}}"}}]
| true | 1 |
<start_data_description><data_path>videogamesales/vgsales.csv:
<column_names>
['Rank', 'Name', 'Platform', 'Year', 'Genre', 'Publisher', 'NA_Sales', 'EU_Sales', 'JP_Sales', 'Other_Sales', 'Global_Sales']
<column_types>
{'Rank': 'int64', 'Name': 'object', 'Platform': 'object', 'Year': 'float64', 'Genre': 'object', 'Publisher': 'object', 'NA_Sales': 'float64', 'EU_Sales': 'float64', 'JP_Sales': 'float64', 'Other_Sales': 'float64', 'Global_Sales': 'float64'}
<dataframe_Summary>
{'Rank': {'count': 16598.0, 'mean': 8300.605253645017, 'std': 4791.853932896403, 'min': 1.0, '25%': 4151.25, '50%': 8300.5, '75%': 12449.75, 'max': 16600.0}, 'Year': {'count': 16327.0, 'mean': 2006.4064433147546, 'std': 5.828981114712805, 'min': 1980.0, '25%': 2003.0, '50%': 2007.0, '75%': 2010.0, 'max': 2020.0}, 'NA_Sales': {'count': 16598.0, 'mean': 0.26466742981082064, 'std': 0.8166830292988796, 'min': 0.0, '25%': 0.0, '50%': 0.08, '75%': 0.24, 'max': 41.49}, 'EU_Sales': {'count': 16598.0, 'mean': 0.14665200626581515, 'std': 0.5053512312869116, 'min': 0.0, '25%': 0.0, '50%': 0.02, '75%': 0.11, 'max': 29.02}, 'JP_Sales': {'count': 16598.0, 'mean': 0.077781660441017, 'std': 0.30929064808220297, 'min': 0.0, '25%': 0.0, '50%': 0.0, '75%': 0.04, 'max': 10.22}, 'Other_Sales': {'count': 16598.0, 'mean': 0.0480630196409206, 'std': 0.18858840291271461, 'min': 0.0, '25%': 0.0, '50%': 0.01, '75%': 0.04, 'max': 10.57}, 'Global_Sales': {'count': 16598.0, 'mean': 0.5374406555006628, 'std': 1.5550279355699124, 'min': 0.01, '25%': 0.06, '50%': 0.17, '75%': 0.47, 'max': 82.74}}
<dataframe_info>
RangeIndex: 16598 entries, 0 to 16597
Data columns (total 11 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 Rank 16598 non-null int64
1 Name 16598 non-null object
2 Platform 16598 non-null object
3 Year 16327 non-null float64
4 Genre 16598 non-null object
5 Publisher 16540 non-null object
6 NA_Sales 16598 non-null float64
7 EU_Sales 16598 non-null float64
8 JP_Sales 16598 non-null float64
9 Other_Sales 16598 non-null float64
10 Global_Sales 16598 non-null float64
dtypes: float64(6), int64(1), object(4)
memory usage: 1.4+ MB
<some_examples>
{'Rank': {'0': 1, '1': 2, '2': 3, '3': 4}, 'Name': {'0': 'Wii Sports', '1': 'Super Mario Bros.', '2': 'Mario Kart Wii', '3': 'Wii Sports Resort'}, 'Platform': {'0': 'Wii', '1': 'NES', '2': 'Wii', '3': 'Wii'}, 'Year': {'0': 2006.0, '1': 1985.0, '2': 2008.0, '3': 2009.0}, 'Genre': {'0': 'Sports', '1': 'Platform', '2': 'Racing', '3': 'Sports'}, 'Publisher': {'0': 'Nintendo', '1': 'Nintendo', '2': 'Nintendo', '3': 'Nintendo'}, 'NA_Sales': {'0': 41.49, '1': 29.08, '2': 15.85, '3': 15.75}, 'EU_Sales': {'0': 29.02, '1': 3.58, '2': 12.88, '3': 11.01}, 'JP_Sales': {'0': 3.77, '1': 6.81, '2': 3.79, '3': 3.28}, 'Other_Sales': {'0': 8.46, '1': 0.77, '2': 3.31, '3': 2.96}, 'Global_Sales': {'0': 82.74, '1': 40.24, '2': 35.82, '3': 33.0}}
<end_description>
| 1,213 | 0 | 2,327 | 1,213 |
129108265
|
import numpy as np # linear algebra
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)
# Input data files are available in the read-only "../input/" directory
# For example, running this (by clicking run or pressing Shift+Enter) will list all files under the input directory
import os
for dirname, _, filenames in os.walk("/kaggle/input"):
for filename in filenames:
print(os.path.join(dirname, filename))
# You can write up to 20GB to the current directory (/kaggle/working/) that gets preserved as output when you create a version using "Save & Run All"
# You can also write temporary files to /kaggle/temp/, but they won't be saved outside of the current session
df = pd.read_csv("/kaggle/input/quora-question-pairs/train.csv.zip")
new_df = df.sample(30000, random_state=2)
new_df.shape
import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
import warnings
warnings.filterwarnings("ignore")
new_df.isnull().sum()
new_df = new_df.dropna(axis=0)
new_df.isnull().sum()
# distribution of duplicate and non-duplicate data
print(new_df["is_duplicate"].value_counts())
print((new_df["is_duplicate"].value_counts() / new_df["is_duplicate"].count()) * 100)
new_df["is_duplicate"].value_counts().plot(kind="bar")
qid = pd.Series(new_df["qid1"].tolist() + new_df["qid2"].tolist())
print("Number of Uniques questions", np.unique(qid).shape[0])
x = qid.value_counts() > 1
print("Number of question repeated", x[x].shape[0])
# Repeated questions using histogram
plt.hist(qid.value_counts().values, bins=160)
plt.yscale("log")
plt.show()
# 1) length of both questions
new_df["q1_len"] = new_df["question1"].str.len()
new_df["q2_len"] = new_df["question2"].str.len()
new_df.head()
# 2) Add number of words columns
new_df["q1_words"] = new_df["question1"].apply(lambda row: len(row.split(" ")))
new_df["q2_words"] = new_df["question2"].apply(lambda row: len(row.split(" ")))
new_df.head()
# 3) Number of same words in the pair of questions
def common_words(row):
w1 = set(map(lambda word: word.lower().strip(), row["question1"].split(" ")))
w2 = set(map(lambda word: word.lower().strip(), row["question2"].split(" ")))
return len(w1 & w2)
new_df["Common_words"] = new_df.apply(common_words, axis=1)
new_df.head()
def word_total(row):
q1 = set(map(lambda word: word.lower().strip(), row["question1"].split(" ")))
q2 = set(map(lambda word: word.lower().strip(), row["question2"].split(" ")))
return len(q1) + len(q2)
new_df["total_words"] = new_df.apply(word_total, axis=1)
new_df.head()
# for word share
def word_share(row):
return round(row["Common_words"] / row["total_words"], 2)
new_df["word_share"] = new_df.apply(word_share, axis=1)
new_df.head()
# for q1 length
sns.displot(new_df["q1_len"])
print("Minimum length:", new_df["q1_len"].min())
print("Maximum length:", new_df["q1_len"].max())
print("Average length:", int(new_df["q1_len"].mean()))
# for q2 length
sns.displot(new_df["q2_len"])
print("Minimum length:", new_df["q2_len"].min())
print("Maximum length:", new_df["q2_len"].max())
print("Average length:", int(new_df["q2_len"].mean()))
# for q1 words
sns.displot(new_df["q1_words"])
print("Minimum length:", new_df["q1_words"].min())
print("Maximum length:", new_df["q1_words"].max())
print("Average length:", int(new_df["q1_words"].mean()))
# for q2 words
sns.displot(new_df["q2_words"])
print("Minimum length:", new_df["q2_words"].min())
print("Maximum length:", new_df["q2_words"].max())
print("Average length:", int(new_df["q2_words"].mean()))
# now for the common words
sns.distplot(new_df[new_df["is_duplicate"] == 0]["Common_words"], label="Non_duplicate")
sns.distplot(new_df[new_df["is_duplicate"] == 1]["Common_words"], label="Duplicate")
plt.legend()
plt.show()
# now for the total words
sns.distplot(new_df[new_df["is_duplicate"] == 0]["total_words"], label="Non_duplicate")
sns.distplot(new_df[new_df["is_duplicate"] == 1]["total_words"], label="Duplicate")
plt.legend()
plt.show()
# now for the word share
sns.distplot(new_df[new_df["is_duplicate"] == 0]["word_share"], label="Non_duplicate")
sns.distplot(new_df[new_df["is_duplicate"] == 1]["word_share"], label="Duplicate")
plt.legend()
plt.show()
# now it is clear from the visualizations that these columns are useful
ques_df = new_df[["question1", "question2"]]
final_df = new_df.drop(columns=["qid1", "qid2", "id", "question1", "question2"])
final_df.head()
# now we apply bag of words to ques_df and then concatenate it with final_df
from sklearn.feature_extraction.text import CountVectorizer
ques = list(ques_df["question1"]) + list(ques_df["question2"])
cv = CountVectorizer(max_features=5000)
q1_arr, q2_arr = np.vsplit(cv.fit_transform(ques).toarray(), 2)
temp_df1 = pd.DataFrame(q1_arr, index=ques_df.index)
temp_df2 = pd.DataFrame(q2_arr, index=ques_df.index)
temp_df = pd.concat([temp_df1, temp_df2], axis=1)
temp_df.shape
final_df = pd.concat([final_df, temp_df], axis=1)
print(final_df.shape)
final_df.head()
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(
final_df.iloc[:, 1:].values,
final_df.iloc[:, 0].values,
test_size=0.2,
random_state=2,
)
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
rf = RandomForestClassifier()
rf.fit(X_train, y_train)
y_pred = rf.predict(X_test)
accuracy_score(y_test, y_pred)
X_train.shape
y_train.shape
from xgboost import XGBClassifier
xgb = XGBClassifier()
xgb.fit(X_train, y_train)
y_pred = xgb.predict(X_test)
accuracy_score(y_test, y_pred)
# ### Optimization
# There are some features which can help improve the model's performance.
# Let's analyze the questions and derive some new features.
print(df["question1"][5])
print(df["question2"][5])
print("verdict -", df["is_duplicate"][5])
print("_________________")
print(df["question1"][8])
print(df["question2"][8])
print("verdict -", df["is_duplicate"][8])
# 1) the longest common substring between the two questions might be a helpful feature; a sketch follows below
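# A minimal illustrative sketch of that idea (assuming difflib's SequenceMatcher is
# acceptable here): the longest common substring, normalized by the shorter question length.
from difflib import SequenceMatcher
def longest_substr_ratio(row):
    q1, q2 = row["question1"], row["question2"]
    match = SequenceMatcher(None, q1, q2).find_longest_match(0, len(q1), 0, len(q2))
    shorter = min(len(q1), len(q2))
    return round(match.size / shorter, 2) if shorter > 0 else 0.0
new_df["longest_substr_ratio"] = new_df.apply(longest_substr_ratio, axis=1)
new_df[["longest_substr_ratio", "is_duplicate"]].head()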
|
/fsx/loubna/kaggle_data/kaggle-code-data/data/0129/108/129108265.ipynb
| null | null |
[{"Id": 129108265, "ScriptId": 38157989, "ParentScriptVersionId": NaN, "ScriptLanguageId": 9, "AuthorUserId": 9024155, "CreationDate": "05/11/2023 04:27:58", "VersionNumber": 1.0, "Title": "Basic Pre-processing Quora Question pairs", "EvaluationDate": "05/11/2023", "IsChange": true, "TotalLines": 239.0, "LinesInsertedFromPrevious": 239.0, "LinesChangedFromPrevious": 0.0, "LinesUnchangedFromPrevious": 0.0, "LinesInsertedFromFork": NaN, "LinesDeletedFromFork": NaN, "LinesChangedFromFork": NaN, "LinesUnchangedFromFork": NaN, "TotalVotes": 1}]
| null | null | null | null |
import numpy as np # linear algebra
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)
# Input data files are available in the read-only "../input/" directory
# For example, running this (by clicking run or pressing Shift+Enter) will list all files under the input directory
import os
for dirname, _, filenames in os.walk("/kaggle/input"):
for filename in filenames:
print(os.path.join(dirname, filename))
# You can write up to 20GB to the current directory (/kaggle/working/) that gets preserved as output when you create a version using "Save & Run All"
# You can also write temporary files to /kaggle/temp/, but they won't be saved outside of the current session
df = pd.read_csv("/kaggle/input/quora-question-pairs/train.csv.zip")
new_df = df.sample(30000, random_state=2)
new_df.shape
import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
import warnings
warnings.filterwarnings("ignore")
new_df.isnull().sum()
new_df = new_df.dropna(axis=0)
new_df.isnull().sum()
# distribution of duplicate and non-duplicate data
print(new_df["is_duplicate"].value_counts())
print((new_df["is_duplicate"].value_counts() / new_df["is_duplicate"].count()) * 100)
new_df["is_duplicate"].value_counts().plot(kind="bar")
qid = pd.Series(new_df["qid1"].tolist() + new_df["qid2"].tolist())
print("Number of Uniques questions", np.unique(qid).shape[0])
x = qid.value_counts() > 1
print("Number of question repeated", x[x].shape[0])
# Repeated questions using histogram
plt.hist(qid.value_counts().values, bins=160)
plt.yscale("log")
plt.show()
# 1) length of both questions
new_df["q1_len"] = new_df["question1"].str.len()
new_df["q2_len"] = new_df["question2"].str.len()
new_df.head()
# 2) Add number of words columns
new_df["q1_words"] = new_df["question1"].apply(lambda row: len(row.split(" ")))
new_df["q2_words"] = new_df["question2"].apply(lambda row: len(row.split(" ")))
new_df.head()
# 3) Number of same words in the pair of questions
def common_words(row):
w1 = set(map(lambda word: word.lower().strip(), row["question1"].split(" ")))
w2 = set(map(lambda word: word.lower().strip(), row["question2"].split(" ")))
return len(w1 & w2)
new_df["Common_words"] = new_df.apply(common_words, axis=1)
new_df.head()
def word_total(row):
q1 = set(map(lambda word: word.lower().strip(), row["question1"].split(" ")))
q2 = set(map(lambda word: word.lower().strip(), row["question2"].split(" ")))
return len(q1) + len(q2)
new_df["total_words"] = new_df.apply(word_total, axis=1)
new_df.head()
# for word share
def word_share(row):
return round(row["Common_words"] / row["total_words"], 2)
new_df["word_share"] = new_df.apply(word_share, axis=1)
new_df.head()
# for q1 length
sns.displot(new_df["q1_len"])
print("Minimum length:", new_df["q1_len"].min())
print("Maximum length:", new_df["q1_len"].max())
print("Average length:", int(new_df["q1_len"].mean()))
# for q2 length
sns.displot(new_df["q2_len"])
print("Minimum length:", new_df["q2_len"].min())
print("Maximum length:", new_df["q2_len"].max())
print("Average length:", int(new_df["q2_len"].mean()))
# for q1 words
sns.displot(new_df["q1_words"])
print("Minimum length:", new_df["q1_words"].min())
print("Maximum length:", new_df["q1_words"].max())
print("Average length:", int(new_df["q1_words"].mean()))
# for q2 words
sns.displot(new_df["q2_words"])
print("Minimum length:", new_df["q2_words"].min())
print("Maximum length:", new_df["q2_words"].max())
print("Average length:", int(new_df["q2_words"].mean()))
# now for the common words
sns.distplot(new_df[new_df["is_duplicate"] == 0]["Common_words"], label="Non_duplicate")
sns.distplot(new_df[new_df["is_duplicate"] == 1]["Common_words"], label="Duplicate")
plt.legend()
plt.show()
# now for the total words
sns.distplot(new_df[new_df["is_duplicate"] == 0]["total_words"], label="Non_duplicate")
sns.distplot(new_df[new_df["is_duplicate"] == 1]["total_words"], label="Duplicate")
plt.legend()
plt.show()
# now for the word share
sns.distplot(new_df[new_df["is_duplicate"] == 0]["word_share"], label="Non_duplicate")
sns.distplot(new_df[new_df["is_duplicate"] == 1]["word_share"], label="Duplicate")
plt.legend()
plt.show()
# now it is clear from the visualizations that these columns are useful
ques_df = new_df[["question1", "question2"]]
final_df = new_df.drop(columns=["qid1", "qid2", "id", "question1", "question2"])
final_df.head()
# now we apply bag of words to ques_df and then concatenate it with final_df
from sklearn.feature_extraction.text import CountVectorizer
ques = list(ques_df["question1"]) + list(ques_df["question2"])
cv = CountVectorizer(max_features=5000)
q1_arr, q2_arr = np.vsplit(cv.fit_transform(ques).toarray(), 2)
temp_df1 = pd.DataFrame(q1_arr, index=ques_df.index)
temp_df2 = pd.DataFrame(q2_arr, index=ques_df.index)
temp_df = pd.concat([temp_df1, temp_df2], axis=1)
temp_df.shape
final_df = pd.concat([final_df, temp_df], axis=1)
print(final_df.shape)
final_df.head()
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(
final_df.iloc[:, 1:].values,
final_df.iloc[:, 0].values,
test_size=0.2,
random_state=2,
)
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
rf = RandomForestClassifier()
rf.fit(X_train, y_train)
y_pred = rf.predict(X_test)
accuracy_score(y_test, y_pred)
X_train.shape
y_train.shape
from xgboost import XGBClassifier
xgb = XGBClassifier()
xgb.fit(X_train, y_train)
y_pred = xgb.predict(X_test)
accuracy_score(y_test, y_pred)
# ### Optimization
# There are some features which can help improve the model's performance.
# Let's analyze the questions and derive some new features.
print(df["question1"][5])
print(df["question2"][5])
print("verdict -", df["is_duplicate"][5])
print("_________________")
print(df["question1"][8])
print(df["question2"][8])
print("verdict -", df["is_duplicate"][8])
# 1) the longest common substring between the two questions might be a helpful feature; a sketch follows below
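# A minimal illustrative sketch of that idea (assuming difflib's SequenceMatcher is
# acceptable here): the longest common substring, normalized by the shorter question length.
from difflib import SequenceMatcher
def longest_substr_ratio(row):
    q1, q2 = row["question1"], row["question2"]
    match = SequenceMatcher(None, q1, q2).find_longest_match(0, len(q1), 0, len(q2))
    shorter = min(len(q1), len(q2))
    return round(match.size / shorter, 2) if shorter > 0 else 0.0
new_df["longest_substr_ratio"] = new_df.apply(longest_substr_ratio, axis=1)
new_df[["longest_substr_ratio", "is_duplicate"]].head()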
| false | 0 | 2,085 | 1 | 2,085 | 2,085 |
||
129005932
|
<jupyter_start><jupyter_text>Diamonds
### Context
This classic dataset contains the prices and other attributes of almost 54,000 diamonds. It's a great dataset for beginners learning to work with data analysis and visualization.
### Content
**price** price in US dollars (\$326--\$18,823)
**carat** weight of the diamond (0.2--5.01)
**cut** quality of the cut (Fair, Good, Very Good, Premium, Ideal)
**color** diamond colour, from J (worst) to D (best)
**clarity** a measurement of how clear the diamond is (I1 (worst), SI2, SI1, VS2, VS1, VVS2, VVS1, IF (best))
**x** length in mm (0--10.74)
**y** width in mm (0--58.9)
**z** depth in mm (0--31.8)
**depth** total depth percentage = z / mean(x, y) = 2 * z / (x + y) (43--79)
**table** width of top of diamond relative to widest point (43--95)
Kaggle dataset identifier: diamonds
<jupyter_code>import pandas as pd
df = pd.read_csv('diamonds/diamonds.csv')
df.info()
<jupyter_output><class 'pandas.core.frame.DataFrame'>
RangeIndex: 53940 entries, 0 to 53939
Data columns (total 11 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 Unnamed: 0 53940 non-null int64
1 carat 53940 non-null float64
2 cut 53940 non-null object
3 color 53940 non-null object
4 clarity 53940 non-null object
5 depth 53940 non-null float64
6 table 53940 non-null float64
7 price 53940 non-null int64
8 x 53940 non-null float64
9 y 53940 non-null float64
10 z 53940 non-null float64
dtypes: float64(6), int64(2), object(3)
memory usage: 4.5+ MB
<jupyter_text>Examples:
{
"Unnamed: 0": 1,
"carat": 0.23,
"cut": "Ideal",
"color": "E",
"clarity": "SI2",
"depth": 61.5,
"table": 55,
"price": 326,
"x": 3.95,
"y": 3.98,
"z": 2.43
}
{
"Unnamed: 0": 2,
"carat": 0.21,
"cut": "Premium",
"color": "E",
"clarity": "SI1",
"depth": 59.8,
"table": 61,
"price": 326,
"x": 3.89,
"y": 3.84,
"z": 2.31
}
{
"Unnamed: 0": 3,
"carat": 0.23,
"cut": "Good",
"color": "E",
"clarity": "VS1",
"depth": 56.9,
"table": 65,
"price": 327,
"x": 4.05,
"y": 4.07,
"z": 2.31
}
{
"Unnamed: 0": 4,
"carat": 0.29,
"cut": "Premium",
"color": "I",
"clarity": "VS2",
"depth": 62.4,
"table": 58,
"price": 334,
"x": 4.2,
"y": 4.23,
"z": 2.63
}
<jupyter_script>import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
train = pd.read_csv("/kaggle/input/diamonds/diamonds.csv")
train
# price - price in US dollars ($326--$18,823)
# carat - weight of the diamond (0.2--5.01)
# cut - quality of the cut (Fair, Good, Very Good, Premium, Ideal)
# color - diamond colour, from J (worst) to D (best)
# clarity - a measurement of how clear the diamond is (I1 (worst), SI2, SI1, VS2, VS1, VVS2, VVS1, IF (best))
# x - length in mm (0--10.74)
# y - width in mm (0--58.9)
# z - depth in mm (0--31.8)
# depth - total depth percentage = z / mean(x, y) = 2 * z / (x + y) (43--79); see the quick check below
# table - width of the top of the diamond relative to the widest point (43--95)
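# A quick sanity check of the depth formula above (illustrative; rows where x + y == 0
# are skipped to avoid division by zero):
valid = train[(train["x"] + train["y"]) > 0]
computed_depth = 2 * valid["z"] / (valid["x"] + valid["y"]) * 100
print(
    "Mean absolute difference from the depth column:",
    (computed_depth - valid["depth"]).abs().mean(),
)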
train.describe()
train.isnull().sum()
# Histogram of carat (weight)
plt.hist(train["carat"])
plt.hist(train["price"])
import matplotlib.pyplot as plt
import numpy as np
import seaborn as sns
data = train
sns.boxplot(x=data["carat"])  # box-and-whisker plot of carat
# The outliers need to be removed
Q1 = train["carat"].quantile(0.25)
Q3 = train["carat"].quantile(0.75)
IQR = Q3 - Q1  # interquartile range
train = train[
    (train["carat"] >= Q1 - 1.5 * IQR) & (train["carat"] <= Q3 + 1.5 * IQR)
]  # keep only rows within 1.5 * IQR of the quartiles
import matplotlib.pyplot as plt
import numpy as np
import seaborn as sns
data = train
sns.boxplot(x=data["carat"])
train.info()
from sklearn.preprocessing import LabelEncoder
labelencoder_cut = LabelEncoder()
train["cut"] = labelencoder_cut.fit_transform(train["cut"])
labelencoder_color = LabelEncoder()
train["color"] = labelencoder_color.fit_transform(train["color"])
labelencoder_clarity = LabelEncoder()
train["clarity"] = labelencoder_clarity.fit_transform(train["clarity"])
train.info()
sns.barplot(data=train, x="color", y="price")
plt.figure(figsize=(20, 15))
correlations = train.corr()
sns.heatmap(correlations, cmap="coolwarm", annot=True)
plt.show()
# Prepare the feature matrix X and the target y (price) from the diamonds data; the random_state parameter below fixes the train/test split.
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
sns.set()
from sklearn.datasets import make_regression
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error, r2_score
from sklearn.model_selection import train_test_split
X = train.drop("price", axis=1)
y = train["price"]
X_train, X_test, y_train, y_test = train_test_split(
X, y, test_size=0.3, random_state=57
)
line_model = LinearRegression()
line_model.fit(X_train, y_train)
y_pred = line_model.predict(X_test)
# Plot the relationship between a single feature and the target. X has many columns,
# so we pick one feature (carat) for the 2D scatter plot.
sns.scatterplot(x=X["carat"], y=y)
# Split the data into training (70%) and test (30%) sets.
# Create a linear regression model and fit it on the training data.
X_train, X_test, y_train, y_test = train_test_split(
X, y, test_size=0.3, random_state=57
)
line_model = LinearRegression()
line_model.fit(X_train, y_train)
y_pred = line_model.predict(X_test)
# Plot two charts (using the carat feature on the x-axis):
# - the split of points into training (orange) and test (blue) sets
# - the result of applying the linear regression model to the test data
sns.scatterplot(x=X_test["carat"], y=y_test)
sns.scatterplot(x=X_train["carat"], y=y_train, color="orange")
sns.scatterplot(x=X_test["carat"], y=y_test)
sns.scatterplot(x=X_train["carat"], y=y_train, color="orange")
sns.lineplot(x=X_test["carat"], y=y_pred, color="purple")
# Evaluate the result numerically.
line_model.score(X_train, y_train), line_model.score(X_test, y_test)
r2_score(y_test, y_pred)
# MSE
mean_squared_error(y_test, y_pred)
# POLYNOMIAL REGRESSION (demonstrated on synthetic data from make_regression)
X, y = make_regression(n_samples=200, n_features=1, noise=15, random_state=18)
y = np.log(y + 250)
sns.scatterplot(x=X.ravel(), y=y)
model = LinearRegression()
model.fit(X, y)
y_l = model.predict(X)
sns.scatterplot(x=X.ravel(), y=y)
sns.lineplot(x=X.ravel(), y=y_l, color="orange")
r2_score(y, y_l), mean_squared_error(y, y_l)
from sklearn.preprocessing import PolynomialFeatures
poly_features = PolynomialFeatures(degree=3)
X_poly = poly_features.fit_transform(X)
poly_model = LinearRegression()
poly_model.fit(X_poly, y)
y_p = poly_model.predict(X_poly)
sns.scatterplot(x=X.ravel(), y=y)
sns.lineplot(x=X.ravel(), y=y_p, color="orange")
r2_score(y, y_p), mean_squared_error(y, y_p)
# SGDRegressor
import pandas as pd
from sklearn.linear_model import SGDRegressor
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
# Load the dataset and scale the features (the target is price; everything else is a feature)
data = train
X, y = data.drop("price", axis=1).values, data["price"].values
scaler = StandardScaler()
X = scaler.fit_transform(X)
# Split into training and test sets
X_train, X_test, y_train, y_test = train_test_split(
X, y, test_size=0.3, random_state=42
)
# Create and train the model
regressor = SGDRegressor(max_iter=1000, tol=1e-3)
regressor.fit(X_train, y_train)
# Predict on the test set and compute the mean squared error
y_pred = regressor.predict(X_test)
mse = mean_squared_error(y_test, y_pred)
print(y_pred, mse)
|
/fsx/loubna/kaggle_data/kaggle-code-data/data/0129/005/129005932.ipynb
|
diamonds
|
shivam2503
|
[{"Id": 129005932, "ScriptId": 38346287, "ParentScriptVersionId": NaN, "ScriptLanguageId": 9, "AuthorUserId": 6692497, "CreationDate": "05/10/2023 09:02:12", "VersionNumber": 1.0, "Title": "Task2_Diamonds", "EvaluationDate": "05/10/2023", "IsChange": true, "TotalLines": 191.0, "LinesInsertedFromPrevious": 191.0, "LinesChangedFromPrevious": 0.0, "LinesUnchangedFromPrevious": 0.0, "LinesInsertedFromFork": NaN, "LinesDeletedFromFork": NaN, "LinesChangedFromFork": NaN, "LinesUnchangedFromFork": NaN, "TotalVotes": 0}]
|
[{"Id": 184682364, "KernelVersionId": 129005932, "SourceDatasetVersionId": 2368}]
|
[{"Id": 2368, "DatasetId": 1312, "DatasourceVersionId": 2368, "CreatorUserId": 945829, "LicenseName": "Unknown", "CreationDate": "05/25/2017 03:06:57", "VersionNumber": 1.0, "Title": "Diamonds", "Slug": "diamonds", "Subtitle": "Analyze diamonds by their cut, color, clarity, price, and other attributes", "Description": "### Context \n\nThis classic dataset contains the prices and other attributes of almost 54,000 diamonds. It's a great dataset for beginners learning to work with data analysis and visualization.\n\n### Content\n\n**price** price in US dollars (\\$326--\\$18,823)\n\n**carat** weight of the diamond (0.2--5.01)\n\n**cut** quality of the cut (Fair, Good, Very Good, Premium, Ideal)\n\n**color** diamond colour, from J (worst) to D (best)\n\n**clarity** a measurement of how clear the diamond is (I1 (worst), SI2, SI1, VS2, VS1, VVS2, VVS1, IF (best))\n\n**x** length in mm (0--10.74)\n\n**y** width in mm (0--58.9)\n\n**z** depth in mm (0--31.8)\n\n**depth** total depth percentage = z / mean(x, y) = 2 * z / (x + y) (43--79)\n\n**table** width of top of diamond relative to widest point (43--95)", "VersionNotes": "Initial release", "TotalCompressedBytes": 3192560.0, "TotalUncompressedBytes": 3192560.0}]
|
[{"Id": 1312, "CreatorUserId": 945829, "OwnerUserId": 945829.0, "OwnerOrganizationId": NaN, "CurrentDatasetVersionId": 2368.0, "CurrentDatasourceVersionId": 2368.0, "ForumId": 3701, "Type": 2, "CreationDate": "05/25/2017 03:06:57", "LastActivityDate": "02/06/2018", "TotalViews": 434479, "TotalDownloads": 74575, "TotalVotes": 952, "TotalKernels": 444}]
|
[{"Id": 945829, "UserName": "shivam2503", "DisplayName": "Shivam Agrawal", "RegisterDate": "03/07/2017", "PerformanceTier": 1}]
|
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
train = pd.read_csv("/kaggle/input/diamonds/diamonds.csv")
train
# price - price in US dollars ($326--$18,823)
# carat - weight of the diamond (0.2--5.01)
# cut - quality of the cut (Fair, Good, Very Good, Premium, Ideal)
# color - diamond colour, from J (worst) to D (best)
# clarity - a measurement of how clear the diamond is (I1 (worst), SI2, SI1, VS2, VS1, VVS2, VVS1, IF (best))
# x - length in mm (0--10.74)
# y - width in mm (0--58.9)
# z - depth in mm (0--31.8)
# depth - total depth percentage = z / mean(x, y) = 2 * z / (x + y) (43--79); see the quick check below
# table - width of the top of the diamond relative to the widest point (43--95)
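# A quick sanity check of the depth formula above (illustrative; rows where x + y == 0
# are skipped to avoid division by zero):
valid = train[(train["x"] + train["y"]) > 0]
computed_depth = 2 * valid["z"] / (valid["x"] + valid["y"]) * 100
print(
    "Mean absolute difference from the depth column:",
    (computed_depth - valid["depth"]).abs().mean(),
)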
train.describe()
train.isnull().sum()
# Histogram of carat (weight)
plt.hist(train["carat"])
plt.hist(train["price"])
import matplotlib.pyplot as plt
import numpy as np
import seaborn as sns
data = train
sns.boxplot(x=data["carat"])  # box-and-whisker plot of carat
# The outliers need to be removed
Q1 = train["carat"].quantile(0.25)
Q3 = train["carat"].quantile(0.75)
IQR = Q3 - Q1  # interquartile range
train = train[
    (train["carat"] >= Q1 - 1.5 * IQR) & (train["carat"] <= Q3 + 1.5 * IQR)
]  # keep only rows within 1.5 * IQR of the quartiles
import matplotlib.pyplot as plt
import numpy as np
import seaborn as sns
data = train
sns.boxplot(x=data["carat"])
train.info()
from sklearn.preprocessing import LabelEncoder
labelencoder_cut = LabelEncoder()
train["cut"] = labelencoder_cut.fit_transform(train["cut"])
labelencoder_color = LabelEncoder()
train["color"] = labelencoder_color.fit_transform(train["color"])
labelencoder_clarity = LabelEncoder()
train["clarity"] = labelencoder_clarity.fit_transform(train["clarity"])
train.info()
sns.barplot(data=train, x="color", y="price")
plt.figure(figsize=(20, 15))
correlations = train.corr()
sns.heatmap(correlations, cmap="coolwarm", annot=True)
plt.show()
# Prepare the feature matrix X and the target y (price) from the diamonds data; the random_state parameter below fixes the train/test split.
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
sns.set()
from sklearn.datasets import make_regression
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error, r2_score
from sklearn.model_selection import train_test_split
X = train.drop("price", axis=1)
y = train["price"]
X_train, X_test, y_train, y_test = train_test_split(
X, y, test_size=0.3, random_state=57
)
line_model = LinearRegression()
line_model.fit(X_train, y_train)
y_pred = line_model.predict(X_test)
# Plot the relationship between a single feature and the target. X has many columns,
# so we pick one feature (carat) for the 2D scatter plot.
sns.scatterplot(x=X["carat"], y=y)
# Split the data into training (70%) and test (30%) sets.
# Create a linear regression model and fit it on the training data.
X_train, X_test, y_train, y_test = train_test_split(
X, y, test_size=0.3, random_state=57
)
line_model = LinearRegression()
line_model.fit(X_train, y_train)
y_pred = line_model.predict(X_test)
# Plot two charts (using the carat feature on the x-axis):
# - the split of points into training (orange) and test (blue) sets
# - the result of applying the linear regression model to the test data
sns.scatterplot(x=X_test["carat"], y=y_test)
sns.scatterplot(x=X_train["carat"], y=y_train, color="orange")
sns.scatterplot(x=X_test["carat"], y=y_test)
sns.scatterplot(x=X_train["carat"], y=y_train, color="orange")
sns.lineplot(x=X_test["carat"], y=y_pred, color="purple")
# Evaluate the result numerically.
line_model.score(X_train, y_train), line_model.score(X_test, y_test)
r2_score(y_test, y_pred)
# MSE
mean_squared_error(y_test, y_pred)
# POLYNOMIAL REGRESSION (demonstrated on synthetic data from make_regression)
X, y = make_regression(n_samples=200, n_features=1, noise=15, random_state=18)
y = np.log(y + 250)
sns.scatterplot(x=X.ravel(), y=y)
model = LinearRegression()
model.fit(X, y)
y_l = model.predict(X)
sns.scatterplot(x=X.ravel(), y=y)
sns.lineplot(x=X.ravel(), y=y_l, color="orange")
r2_score(y, y_l), mean_squared_error(y, y_l)
from sklearn.preprocessing import PolynomialFeatures
poly_features = PolynomialFeatures(degree=3)
X_poly = poly_features.fit_transform(X)
poly_model = LinearRegression()
poly_model.fit(X_poly, y)
y_p = poly_model.predict(X_poly)
sns.scatterplot(x=X.ravel(), y=y)
sns.lineplot(x=X.ravel(), y=y_p, color="orange")
r2_score(y, y_p), mean_squared_error(y, y_p)
# SGDRegressor
import pandas as pd
from sklearn.linear_model import SGDRegressor
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
# Load the dataset and scale the features (the target is price; everything else is a feature)
data = train
X, y = data.drop("price", axis=1).values, data["price"].values
scaler = StandardScaler()
X = scaler.fit_transform(X)
# Split into training and test sets
X_train, X_test, y_train, y_test = train_test_split(
X, y, test_size=0.3, random_state=42
)
# Create and train the model
regressor = SGDRegressor(max_iter=1000, tol=1e-3)
regressor.fit(X_train, y_train)
# Predict on the test set and compute the mean squared error
y_pred = regressor.predict(X_test)
mse = mean_squared_error(y_test, y_pred)
print(y_pred, mse)
|
[{"diamonds/diamonds.csv": {"column_names": "[\"Unnamed: 0\", \"carat\", \"cut\", \"color\", \"clarity\", \"depth\", \"table\", \"price\", \"x\", \"y\", \"z\"]", "column_data_types": "{\"Unnamed: 0\": \"int64\", \"carat\": \"float64\", \"cut\": \"object\", \"color\": \"object\", \"clarity\": \"object\", \"depth\": \"float64\", \"table\": \"float64\", \"price\": \"int64\", \"x\": \"float64\", \"y\": \"float64\", \"z\": \"float64\"}", "info": "<class 'pandas.core.frame.DataFrame'>\nRangeIndex: 53940 entries, 0 to 53939\nData columns (total 11 columns):\n # Column Non-Null Count Dtype \n--- ------ -------------- ----- \n 0 Unnamed: 0 53940 non-null int64 \n 1 carat 53940 non-null float64\n 2 cut 53940 non-null object \n 3 color 53940 non-null object \n 4 clarity 53940 non-null object \n 5 depth 53940 non-null float64\n 6 table 53940 non-null float64\n 7 price 53940 non-null int64 \n 8 x 53940 non-null float64\n 9 y 53940 non-null float64\n 10 z 53940 non-null float64\ndtypes: float64(6), int64(2), object(3)\nmemory usage: 4.5+ MB\n", "summary": "{\"Unnamed: 0\": {\"count\": 53940.0, \"mean\": 26970.5, \"std\": 15571.281096942537, \"min\": 1.0, \"25%\": 13485.75, \"50%\": 26970.5, \"75%\": 40455.25, \"max\": 53940.0}, \"carat\": {\"count\": 53940.0, \"mean\": 0.7979397478680014, \"std\": 0.4740112444054184, \"min\": 0.2, \"25%\": 0.4, \"50%\": 0.7, \"75%\": 1.04, \"max\": 5.01}, \"depth\": {\"count\": 53940.0, \"mean\": 61.749404894327036, \"std\": 1.432621318833661, \"min\": 43.0, \"25%\": 61.0, \"50%\": 61.8, \"75%\": 62.5, \"max\": 79.0}, \"table\": {\"count\": 53940.0, \"mean\": 57.45718390804598, \"std\": 2.2344905628213225, \"min\": 43.0, \"25%\": 56.0, \"50%\": 57.0, \"75%\": 59.0, \"max\": 95.0}, \"price\": {\"count\": 53940.0, \"mean\": 3932.799721913237, \"std\": 3989.439738146379, \"min\": 326.0, \"25%\": 950.0, \"50%\": 2401.0, \"75%\": 5324.25, \"max\": 18823.0}, \"x\": {\"count\": 53940.0, \"mean\": 5.731157211716722, \"std\": 1.1217607467924928, \"min\": 0.0, \"25%\": 4.71, \"50%\": 5.7, \"75%\": 6.54, \"max\": 10.74}, \"y\": {\"count\": 53940.0, \"mean\": 5.734525954764553, \"std\": 1.1421346741235552, \"min\": 0.0, \"25%\": 4.72, \"50%\": 5.71, \"75%\": 6.54, \"max\": 58.9}, \"z\": {\"count\": 53940.0, \"mean\": 3.5387337782721544, \"std\": 0.7056988469499941, \"min\": 0.0, \"25%\": 2.91, \"50%\": 3.53, \"75%\": 4.04, \"max\": 31.8}}", "examples": "{\"Unnamed: 0\":{\"0\":1,\"1\":2,\"2\":3,\"3\":4},\"carat\":{\"0\":0.23,\"1\":0.21,\"2\":0.23,\"3\":0.29},\"cut\":{\"0\":\"Ideal\",\"1\":\"Premium\",\"2\":\"Good\",\"3\":\"Premium\"},\"color\":{\"0\":\"E\",\"1\":\"E\",\"2\":\"E\",\"3\":\"I\"},\"clarity\":{\"0\":\"SI2\",\"1\":\"SI1\",\"2\":\"VS1\",\"3\":\"VS2\"},\"depth\":{\"0\":61.5,\"1\":59.8,\"2\":56.9,\"3\":62.4},\"table\":{\"0\":55.0,\"1\":61.0,\"2\":65.0,\"3\":58.0},\"price\":{\"0\":326,\"1\":326,\"2\":327,\"3\":334},\"x\":{\"0\":3.95,\"1\":3.89,\"2\":4.05,\"3\":4.2},\"y\":{\"0\":3.98,\"1\":3.84,\"2\":4.07,\"3\":4.23},\"z\":{\"0\":2.43,\"1\":2.31,\"2\":2.31,\"3\":2.63}}"}}]
| true | 1 |
<start_data_description><data_path>diamonds/diamonds.csv:
<column_names>
['Unnamed: 0', 'carat', 'cut', 'color', 'clarity', 'depth', 'table', 'price', 'x', 'y', 'z']
<column_types>
{'Unnamed: 0': 'int64', 'carat': 'float64', 'cut': 'object', 'color': 'object', 'clarity': 'object', 'depth': 'float64', 'table': 'float64', 'price': 'int64', 'x': 'float64', 'y': 'float64', 'z': 'float64'}
<dataframe_Summary>
{'Unnamed: 0': {'count': 53940.0, 'mean': 26970.5, 'std': 15571.281096942537, 'min': 1.0, '25%': 13485.75, '50%': 26970.5, '75%': 40455.25, 'max': 53940.0}, 'carat': {'count': 53940.0, 'mean': 0.7979397478680014, 'std': 0.4740112444054184, 'min': 0.2, '25%': 0.4, '50%': 0.7, '75%': 1.04, 'max': 5.01}, 'depth': {'count': 53940.0, 'mean': 61.749404894327036, 'std': 1.432621318833661, 'min': 43.0, '25%': 61.0, '50%': 61.8, '75%': 62.5, 'max': 79.0}, 'table': {'count': 53940.0, 'mean': 57.45718390804598, 'std': 2.2344905628213225, 'min': 43.0, '25%': 56.0, '50%': 57.0, '75%': 59.0, 'max': 95.0}, 'price': {'count': 53940.0, 'mean': 3932.799721913237, 'std': 3989.439738146379, 'min': 326.0, '25%': 950.0, '50%': 2401.0, '75%': 5324.25, 'max': 18823.0}, 'x': {'count': 53940.0, 'mean': 5.731157211716722, 'std': 1.1217607467924928, 'min': 0.0, '25%': 4.71, '50%': 5.7, '75%': 6.54, 'max': 10.74}, 'y': {'count': 53940.0, 'mean': 5.734525954764553, 'std': 1.1421346741235552, 'min': 0.0, '25%': 4.72, '50%': 5.71, '75%': 6.54, 'max': 58.9}, 'z': {'count': 53940.0, 'mean': 3.5387337782721544, 'std': 0.7056988469499941, 'min': 0.0, '25%': 2.91, '50%': 3.53, '75%': 4.04, 'max': 31.8}}
<dataframe_info>
RangeIndex: 53940 entries, 0 to 53939
Data columns (total 11 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 Unnamed: 0 53940 non-null int64
1 carat 53940 non-null float64
2 cut 53940 non-null object
3 color 53940 non-null object
4 clarity 53940 non-null object
5 depth 53940 non-null float64
6 table 53940 non-null float64
7 price 53940 non-null int64
8 x 53940 non-null float64
9 y 53940 non-null float64
10 z 53940 non-null float64
dtypes: float64(6), int64(2), object(3)
memory usage: 4.5+ MB
<some_examples>
{'Unnamed: 0': {'0': 1, '1': 2, '2': 3, '3': 4}, 'carat': {'0': 0.23, '1': 0.21, '2': 0.23, '3': 0.29}, 'cut': {'0': 'Ideal', '1': 'Premium', '2': 'Good', '3': 'Premium'}, 'color': {'0': 'E', '1': 'E', '2': 'E', '3': 'I'}, 'clarity': {'0': 'SI2', '1': 'SI1', '2': 'VS1', '3': 'VS2'}, 'depth': {'0': 61.5, '1': 59.8, '2': 56.9, '3': 62.4}, 'table': {'0': 55.0, '1': 61.0, '2': 65.0, '3': 58.0}, 'price': {'0': 326, '1': 326, '2': 327, '3': 334}, 'x': {'0': 3.95, '1': 3.89, '2': 4.05, '3': 4.2}, 'y': {'0': 3.98, '1': 3.84, '2': 4.07, '3': 4.23}, 'z': {'0': 2.43, '1': 2.31, '2': 2.31, '3': 2.63}}
<end_description>
| 2,182 | 0 | 3,226 | 2,182 |
129005548
|
<jupyter_start><jupyter_text>Diabetes prediction dataset
The **Diabetes prediction dataset** is a collection of medical and demographic data from patients, along with their diabetes status (positive or negative). The data includes features such as age, gender, body mass index (BMI), hypertension, heart disease, smoking history, HbA1c level, and blood glucose level. This dataset can be used to build machine learning models to predict diabetes in patients based on their medical history and demographic information. This can be useful for healthcare professionals in identifying patients who may be at risk of developing diabetes and in developing personalized treatment plans. Additionally, the dataset can be used by researchers to explore the relationships between various medical and demographic factors and the likelihood of developing diabetes.
Kaggle dataset identifier: diabetes-prediction-dataset
<jupyter_script>import numpy as np # linear algebra
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)
# Input data files are available in the read-only "../input/" directory
# For example, running this (by clicking run or pressing Shift+Enter) will list all files under the input directory
import os
for dirname, _, filenames in os.walk("/kaggle/input"):
for filename in filenames:
print(os.path.join(dirname, filename))
# You can write up to 20GB to the current directory (/kaggle/working/) that gets preserved as output when you create a version using "Save & Run All"
# You can also write temporary files to /kaggle/temp/, but they won't be saved outside of the current session
import matplotlib.pyplot as plt
import seaborn as sns
df = pd.read_csv(
"/kaggle/input/diabetes-prediction-dataset/diabetes_prediction_dataset.csv"
)
df.head()
df.info()
df.describe()
# - the average age is 42
# - the average BMI is 27.3, so most patients are overweight or obese
# - the average blood glucose level is 138 mg/dL, a level consistent with diabetes (see the quick check below)
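# A quick check of the averages quoted above, pulled straight from the data:
print(df[["age", "bmi", "blood_glucose_level"]].mean().round(1))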
gender = pd.pivot_table(
df[df["diabetes"] == 1], index="gender", values="age", aggfunc=len
).reset_index()
gender.rename(columns={"age": "amount"}, inplace=True)
plt.title("Diabetes patients per gender", size=30)
sns.barplot(data=gender, x="gender", y="amount")
diabetes = df[df["diabetes"] == 1]
plt.figure(figsize=(15, 10))
sns.stripplot(diabetes, x="smoking_history", y="bmi", hue="gender")
# - Most female diabetes patients have never smoked, but they have a BMI above 25.
# - Male diabetes patients have a lower BMI, but most of them are current, former, or ever smokers.
# - So a very high BMI (obesity) is clearly a factor associated with diabetes in both genders.
# - However, in males, smoking is a major factor as well.
sns.scatterplot(diabetes, x="HbA1c_level", y="blood_glucose_level", hue="gender")
# - Most diabetes patients have HbA1c and blood glucose levels higher than average in both genders (see the group means below)
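# A quick group comparison to support the observation above (illustrative):
print(df.groupby("diabetes")[["HbA1c_level", "blood_glucose_level"]].mean().round(2))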
sns.stripplot(diabetes, y="age", x="heart_disease", hue="gender")
# - Diabetes patients with heart disease are above 30 years old, and male patients make up the majority.
sns.stripplot(diabetes, y="age", x="hypertension", hue="gender")
plt.figure(figsize=(15, 10))
sns.stripplot(diabetes, y="age", x="smoking_history", hue="heart_disease")
|
/fsx/loubna/kaggle_data/kaggle-code-data/data/0129/005/129005548.ipynb
|
diabetes-prediction-dataset
|
iammustafatz
|
[{"Id": 129005548, "ScriptId": 38345363, "ParentScriptVersionId": NaN, "ScriptLanguageId": 9, "AuthorUserId": 14708690, "CreationDate": "05/10/2023 08:59:39", "VersionNumber": 1.0, "Title": "another look in diabetes", "EvaluationDate": "05/10/2023", "IsChange": true, "TotalLines": 64.0, "LinesInsertedFromPrevious": 64.0, "LinesChangedFromPrevious": 0.0, "LinesUnchangedFromPrevious": 0.0, "LinesInsertedFromFork": NaN, "LinesDeletedFromFork": NaN, "LinesChangedFromFork": NaN, "LinesUnchangedFromFork": NaN, "TotalVotes": 3}]
|
[{"Id": 184681843, "KernelVersionId": 129005548, "SourceDatasetVersionId": 5344155}]
|
[{"Id": 5344155, "DatasetId": 3102947, "DatasourceVersionId": 5417553, "CreatorUserId": 11427441, "LicenseName": "Data files \u00a9 Original Authors", "CreationDate": "04/08/2023 06:11:45", "VersionNumber": 1.0, "Title": "Diabetes prediction dataset", "Slug": "diabetes-prediction-dataset", "Subtitle": "A Comprehensive Dataset for Predicting Diabetes with Medical & Demographic Data", "Description": "The **Diabetes prediction dataset** is a collection of medical and demographic data from patients, along with their diabetes status (positive or negative). The data includes features such as age, gender, body mass index (BMI), hypertension, heart disease, smoking history, HbA1c level, and blood glucose level. This dataset can be used to build machine learning models to predict diabetes in patients based on their medical history and demographic information. This can be useful for healthcare professionals in identifying patients who may be at risk of developing diabetes and in developing personalized treatment plans. Additionally, the dataset can be used by researchers to explore the relationships between various medical and demographic factors and the likelihood of developing diabetes.", "VersionNotes": "Initial release", "TotalCompressedBytes": 0.0, "TotalUncompressedBytes": 0.0}]
|
[{"Id": 3102947, "CreatorUserId": 11427441, "OwnerUserId": 11427441.0, "OwnerOrganizationId": NaN, "CurrentDatasetVersionId": 5344155.0, "CurrentDatasourceVersionId": 5417553.0, "ForumId": 3166206, "Type": 2, "CreationDate": "04/08/2023 06:11:45", "LastActivityDate": "04/08/2023", "TotalViews": 127619, "TotalDownloads": 24886, "TotalVotes": 309, "TotalKernels": 120}]
|
[{"Id": 11427441, "UserName": "iammustafatz", "DisplayName": "Mohammed Mustafa", "RegisterDate": "08/29/2022", "PerformanceTier": 0}]
|
import numpy as np # linear algebra
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)
# Input data files are available in the read-only "../input/" directory
# For example, running this (by clicking run or pressing Shift+Enter) will list all files under the input directory
import os
for dirname, _, filenames in os.walk("/kaggle/input"):
for filename in filenames:
print(os.path.join(dirname, filename))
# You can write up to 20GB to the current directory (/kaggle/working/) that gets preserved as output when you create a version using "Save & Run All"
# You can also write temporary files to /kaggle/temp/, but they won't be saved outside of the current session
import matplotlib.pyplot as plt
import seaborn as sns
df = pd.read_csv(
"/kaggle/input/diabetes-prediction-dataset/diabetes_prediction_dataset.csv"
)
df.head()
df.info()
df.describe()
# - the average age is 42
# - the average BMI is 27.3, so most patients are overweight or obese
# - the average blood glucose level is 138 mg/dL, a level consistent with diabetes (see the quick check below)
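# A quick check of the averages quoted above, pulled straight from the data:
print(df[["age", "bmi", "blood_glucose_level"]].mean().round(1))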
gender = pd.pivot_table(
df[df["diabetes"] == 1], index="gender", values="age", aggfunc=len
).reset_index()
gender.rename(columns={"age": "amount"}, inplace=True)
plt.title("Diabetes patients per gender", size=30)
sns.barplot(data=gender, x="gender", y="amount")
diabetes = df[df["diabetes"] == 1]
plt.figure(figsize=(15, 10))
sns.stripplot(diabetes, x="smoking_history", y="bmi", hue="gender")
# - Most female diabetes patients have never smoked, but they have a BMI above 25.
# - Male diabetes patients have a lower BMI, but most of them are current, former, or ever smokers.
# - So a very high BMI (obesity) is clearly a factor associated with diabetes in both genders.
# - However, in males, smoking is a major factor as well.
sns.scatterplot(diabetes, x="HbA1c_level", y="blood_glucose_level", hue="gender")
# - Most diabetes patients have HbA1c and blood glucose levels higher than average in both genders (see the group means below)
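# A quick group comparison to support the observation above (illustrative):
print(df.groupby("diabetes")[["HbA1c_level", "blood_glucose_level"]].mean().round(2))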
sns.stripplot(diabetes, y="age", x="heart_disease", hue="gender")
# - Diabetes patients with heart disease are above 30 years old, and male patients make up the majority.
sns.stripplot(diabetes, y="age", x="hypertension", hue="gender")
plt.figure(figsize=(15, 10))
sns.stripplot(diabetes, y="age", x="smoking_history", hue="heart_disease")
| false | 1 | 737 | 3 | 928 | 737 |
||
129191514
|
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
from tqdm.notebook import tqdm
from colorama import Style, Fore
blk = Style.BRIGHT + Fore.BLACK
red = Style.BRIGHT + Fore.RED
blu = Style.BRIGHT + Fore.BLUE
cyan = Style.BRIGHT + Fore.CYAN
res = Style.RESET_ALL
base_dir = "/kaggle/input/asl-fingerspelling"
train_csv = f"{base_dir}/train.csv"
supplemental_csv = f"{base_dir}/supplemental_metadata.csv"
# # Train Dataset
train = pd.read_csv(train_csv)
train.head()
# ## Top 10 Phrase in Training Dataset
sns.set(style="whitegrid")
fig, ax = plt.subplots(figsize=(8, 8))
sns.barplot(
y=train["phrase"].value_counts().head(10).sort_values(ascending=False).index,
x=train["phrase"].value_counts().head(10).sort_values(ascending=False),
ax=ax,
)
ax.set_title("Top 10 Phrase in Training Dataset")
ax.set_xlabel("Number of Training Examples")
ax.set_ylabel("Phrase")
plt.show()
# ## Parquet File of Top 1 Phrase in Train Dataset
select_train = "surprise az"
train_example = train.query("phrase == @select_train")["path"].values[0]
select_landmark_train = pd.read_parquet(f"{base_dir}/{train_example}")
select_landmark_train
# One parquet file contains more than one sequence_id. Here I want to show the landmarks for `surprise az`, but a single parquet file holds many sequence_ids, which means one parquet file can contain different labels; a quick check is shown below.
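# A quick illustrative check: count how many sequences are stored in this parquet file
# and look up their phrases in train.csv.
seq_ids_in_file = select_landmark_train.index.unique()
print(
    f"{blu}[+]{blk} This parquet file contains {red}{len(seq_ids_in_file)}{blk} sequence_ids"
)
train.query("path == @train_example")[["sequence_id", "phrase"]].head()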
seq_target_train = train[
(train["path"] == train_example) & (train["phrase"] == select_train)
]["sequence_id"].values[0]
print(f"{blu}[+]{blk} Sequence ID : {blu}{seq_target_train}")
seq_df_train = select_landmark_train[select_landmark_train.index == seq_target_train]
seq_df_train.head()
def x_y_z(column_names):
    # split landmark column names into x, y, and z coordinate columns
    x = [col for col in column_names if col.startswith("x")]
    y = [col for col in column_names if col.startswith("y")]
    z = [col for col in column_names if col.startswith("z")]
    return x, y, z
def type_of_landmark(example_landmark):
    # collect the landmark types (face, pose, right_hand, left_hand) present in the columns
    body_parts = set()
    for column in example_landmark.columns:
        parts = column.split("_")
        if len(parts) >= 2:
            if parts[1] == "right":
                body_parts.add("right_hand")
            elif parts[1] == "left":
                body_parts.add("left_hand")
            else:
                body_parts.add(parts[1])
    return body_parts
# ## Check Landmark, Frames, and (X, Y, Z) points
unique_frames = seq_df_train["frame"].nunique()
type_landmark_train = type_of_landmark(seq_df_train)
face_train = [col for col in seq_df_train.columns if "face" in col]
right_hand_train = [col for col in seq_df_train.columns if "right_hand" in col]
left_hand_train = [col for col in seq_df_train.columns if "left_hand" in col]
pose_train = [col for col in seq_df_train.columns if "pose" in col]
x_face_train, y_face_train, z_face_train = x_y_z(face_train)
x_right_hand_train, y_right_hand_train, z_right_hand_train = x_y_z(right_hand_train)
x_left_hand_train, y_left_hand_train, z_left_hand_train = x_y_z(left_hand_train)
x_pose_train, y_pose_train, z_pose_train = x_y_z(pose_train)
print(f"{cyan}{'='*20} ( Train Dataset) {'='*20}")
print(
f"{blk}Landmark file for sequence_id {red}{seq_target_train}{blk} has {red}{unique_frames}{blk} frames "
)
print(
f"{blk}This landmark has {red}{len(type_landmark_train)} {blk}types of landmarks and consists of {red}{type_landmark_train}"
)
print(
f"\n{blu}[+]{blk} {blk}Face landmark has {red}{len(face_train)} {blk}points in x : {red}{len(x_face_train)} points, {blk}y : {red}{len(y_face_train)} points, {blk}z : {red}{len(z_face_train)} points"
)
print(
f"{blu}[+]{blk} {blk}Right hand landmark has {red}{len(right_hand_train)} {blk}points in x : {red}{len(x_right_hand_train)} points, {blk}y : {red}{len(y_right_hand_train)} points, {blk}z : {red}{len(z_right_hand_train)} points"
)
print(
f"{blu}[+]{blk} {blk}Left hand landmark has {red}{len(left_hand_train)} {blk}points in x : {red}{len(x_left_hand_train)} points, {blk}y : {red}{len(y_left_hand_train)} points, {blk}z : {red}{len(z_left_hand_train)} points"
)
print(
f"{blu}[+]{blk} {blk}Pose landmark has {red}{len(pose_train)} {blk}points in x : {red}{len(x_pose_train)} points, {blk}y : {red}{len(y_pose_train)} points, {blk}z : {red}{len(z_pose_train)} points"
)
# # Supplemental Dataset
supplemental = pd.read_csv(supplemental_csv)
supplemental.head()
# ## Top 10 Phrase in Supplemental Dataset
sns.set(style="whitegrid")
fig, ax = plt.subplots(figsize=(8, 8))
sns.barplot(
y=supplemental["phrase"].value_counts().head(10).sort_values(ascending=False).index,
x=supplemental["phrase"].value_counts().head(10).sort_values(ascending=False),
ax=ax,
)
ax.set_title("Top 10 Phrase in Supplemental Dataset")
ax.set_xlabel("Number of Examples")
ax.set_ylabel("Phrase")
plt.show()
# ## Parquet File of Top 1 Phrase in Supplemental Dataset
select_supp = "why do you ask silly questions"
supp_example = supplemental.query("phrase == @select_supp")["path"].values[0]
supp_landmark = pd.read_parquet(f"{base_dir}/{supp_example}")
supp_landmark
# As with the **Train Dataset**, one parquet file also holds more than one sequence_id
seq_target_supp = supplemental[
(supplemental["path"] == supp_example) & (supplemental["phrase"] == select_supp)
]["sequence_id"].values[0]
print(f"{blu}[+]{blk} Sequence ID : {blu}{seq_target_supp}")
seq_df_supp = supp_landmark[supp_landmark.index == seq_target_supp]
seq_df_supp.head()
# ## Check Landmark, Frames, and (X, Y, Z) points
unique_frames = seq_df_supp["frame"].nunique()
type_landmark_supp = type_of_landmark(seq_df_supp)
face_supp = [col for col in seq_df_supp.columns if "face" in col]
right_hand_supp = [col for col in seq_df_supp.columns if "right_hand" in col]
left_hand_supp = [col for col in seq_df_supp.columns if "left_hand" in col]
pose_supp = [col for col in seq_df_supp.columns if "pose" in col]
x_face_supp, y_face_supp, z_face_supp = x_y_z(face_supp)
x_right_hand_supp, y_right_hand_supp, z_right_hand_supp = x_y_z(right_hand_supp)
x_left_hand_supp, y_left_hand_supp, z_left_hand_supp = x_y_z(left_hand_supp)
x_pose_supp, y_pose_supp, z_pose_supp = x_y_z(pose_supp)
print(f"{cyan}{'='*20} ( Supplemental Dataset) {'='*20}")
print(
f"{blk}Landmark file for sequence_id {red}{seq_target_supp}{blk} has {red}{unique_frames}{blk} frames "
)
print(
f"{blk}This landmark has {red}{len(type_landmark_supp)} {blk}types of landmarks and consists of {red}{type_landmark_supp}"
)
print(
f"\n{blu}[+]{blk} Face landmark has {red}{len(face_supp)} {blk}points in x : {red}{len(x_face_supp)} points, {blk}y : {red}{len(y_face_supp)} points, {blk}z : {red}{len(z_face_supp)} points"
)
print(
f"{blu}[+]{blk} Right hand landmark has {red}{len(right_hand_supp)} {blk}points in x : {red}{len(x_right_hand_supp)} points, {blk}y : {red}{len(y_right_hand_supp)} points, {blk}z : {red}{len(z_right_hand_supp)} points"
)
print(
f"{blu}[+]{blk} Left hand landmark has {red}{len(left_hand_supp)} {blk}points in x : {red}{len(x_left_hand_supp)} points, {blk}y : {red}{len(y_left_hand_supp)} points, {blk}z : {red}{len(z_left_hand_supp)} points"
)
print(
f"{blu}[+]{blk} Pose landmark has {red}{len(pose_supp)} {blk}points in x : {red}{len(x_pose_supp)} points, {blk}y : {red}{len(y_pose_supp)} points, {blk}z : {red}{len(z_pose_supp)} points"
)
# # Plot in 2D
import mediapipe as mp
mp_hands = mp.solutions.hands
def data_plot(seq, frame, x_col, y_col, df):
x = df.query("sequence_id == @seq and frame == @frame")[x_col].iloc[0].values
y = df.query("sequence_id == @seq and frame == @frame")[y_col].iloc[0].values
landmark_idx = [
int(col.split("_")[-1])
for col in df.query("sequence_id == @seq and frame == @frame")[x_col].columns
]
dataframe = pd.DataFrame({"x": x, "y": y, "landmark_idx": landmark_idx})
return dataframe
# ## Training Data Plot Hands of "surprise az" Phrase
frame = 21
left_hand_train = data_plot(
seq_target_train, frame, x_left_hand_train, y_left_hand_train, seq_df_train
)
right_hand_train = data_plot(
seq_target_train, frame, x_right_hand_train, y_right_hand_train, seq_df_train
)
fig, ax = plt.subplots(figsize=(5, 5))
ax.scatter(right_hand_train["x"], right_hand_train["y"])
ax.scatter(left_hand_train["x"], left_hand_train["y"])
for connection in mp_hands.HAND_CONNECTIONS:
point_a = connection[0]
point_b = connection[1]
x1, y1 = right_hand_train.query("landmark_idx == @point_a")[["x", "y"]].values[0]
x2, y2 = right_hand_train.query("landmark_idx == @point_b")[["x", "y"]].values[0]
plt.plot([x1, x2], [y1, y2], color="red")
x3, y3 = left_hand_train.query("landmark_idx == @point_a")[["x", "y"]].values[0]
x4, y4 = left_hand_train.query("landmark_idx == @point_b")[["x", "y"]].values[0]
plt.plot([x3, x4], [y3, y4], color="red")
ax.set_title("why do you ask silly questions")
plt.show()
# ## Supplemental Data Plot Hands of "why do you ask silly questions" Phrase
frame = 130
left_hand_supp = data_plot(
seq_target_supp, frame, x_left_hand_supp, y_left_hand_supp, seq_df_supp
)
right_hand_supp = data_plot(
seq_target_supp, frame, x_right_hand_supp, y_right_hand_supp, seq_df_supp
)
fig, ax = plt.subplots(figsize=(5, 5))
ax.scatter(right_hand_supp["x"], right_hand_supp["y"])
ax.scatter(left_hand_supp["x"], left_hand_supp["y"])
for connection in mp_hands.HAND_CONNECTIONS:
point_a = connection[0]
point_b = connection[1]
x1, y1 = right_hand_supp.query("landmark_idx == @point_a")[["x", "y"]].values[0]
x2, y2 = right_hand_supp.query("landmark_idx == @point_b")[["x", "y"]].values[0]
plt.plot([x1, x2], [y1, y2], color="red")
x3, y3 = left_hand_supp.query("landmark_idx == @point_a")[["x", "y"]].values[0]
x4, y4 = left_hand_supp.query("landmark_idx == @point_b")[["x", "y"]].values[0]
plt.plot([x3, x4], [y3, y4], color="red")
ax.set_title("why do you ask silly questions")
plt.show()
|
/fsx/loubna/kaggle_data/kaggle-code-data/data/0129/191/129191514.ipynb
| null | null |
[{"Id": 129191514, "ScriptId": 38377708, "ParentScriptVersionId": NaN, "ScriptLanguageId": 9, "AuthorUserId": 11120740, "CreationDate": "05/11/2023 17:18:24", "VersionNumber": 4.0, "Title": "[EDA]\ud83e\udd1e\ud83c\udffcASLFR", "EvaluationDate": "05/11/2023", "IsChange": true, "TotalLines": 219.0, "LinesInsertedFromPrevious": 168.0, "LinesChangedFromPrevious": 0.0, "LinesUnchangedFromPrevious": 51.0, "LinesInsertedFromFork": NaN, "LinesDeletedFromFork": NaN, "LinesChangedFromFork": NaN, "LinesUnchangedFromFork": NaN, "TotalVotes": 0}]
| null | null | null | null |
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
from tqdm.notebook import tqdm
from colorama import Style, Fore
blk = Style.BRIGHT + Fore.BLACK
red = Style.BRIGHT + Fore.RED
blu = Style.BRIGHT + Fore.BLUE
cyan = Style.BRIGHT + Fore.CYAN
res = Style.RESET_ALL
base_dir = "/kaggle/input/asl-fingerspelling"
train_csv = f"{base_dir}/train.csv"
supplemental_csv = f"{base_dir}/supplemental_metadata.csv"
# # Train Dataset
train = pd.read_csv(train_csv)
train.head()
# ## Top 10 Phrase in Training Dataset
sns.set(style="whitegrid")
fig, ax = plt.subplots(figsize=(8, 8))
sns.barplot(
y=train["phrase"].value_counts().head(10).sort_values(ascending=False).index,
x=train["phrase"].value_counts().head(10).sort_values(ascending=False),
ax=ax,
)
ax.set_title("Top 10 Phrase in Training Dataset")
ax.set_xlabel("Number of Training Examples")
ax.set_ylabel("Phrase")
plt.show()
# ## Parquet File of Top 1 Phrase in Train Dataset
select_train = "surprise az"
train_example = train.query("phrase == @select_train")["path"].values[0]
select_landmark_train = pd.read_parquet(f"{base_dir}/{train_example}")
select_landmark_train
# One parquet file contains more than one sequence_id. Here I want to show the landmarks for `surprise az`, but a single parquet file holds many sequence_ids, which means one parquet file can contain different labels; a quick check is shown below.
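# A quick illustrative check: count how many sequences are stored in this parquet file
# and look up their phrases in train.csv.
seq_ids_in_file = select_landmark_train.index.unique()
print(
    f"{blu}[+]{blk} This parquet file contains {red}{len(seq_ids_in_file)}{blk} sequence_ids"
)
train.query("path == @train_example")[["sequence_id", "phrase"]].head()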
seq_target_train = train[
(train["path"] == train_example) & (train["phrase"] == select_train)
]["sequence_id"].values[0]
print(f"{blu}[+]{blk} Sequence ID : {blu}{seq_target_train}")
seq_df_train = select_landmark_train[select_landmark_train.index == seq_target_train]
seq_df_train.head()
def x_y_z(column_names):
    # split landmark column names into x, y, and z coordinate columns
    x = [col for col in column_names if col.startswith("x")]
    y = [col for col in column_names if col.startswith("y")]
    z = [col for col in column_names if col.startswith("z")]
    return x, y, z
def type_of_landmark(example_landmark):
    # collect the landmark types (face, pose, right_hand, left_hand) present in the columns
    body_parts = set()
    for column in example_landmark.columns:
        parts = column.split("_")
        if len(parts) >= 2:
            if parts[1] == "right":
                body_parts.add("right_hand")
            elif parts[1] == "left":
                body_parts.add("left_hand")
            else:
                body_parts.add(parts[1])
    return body_parts
# ## Check Landmark, Frames, and (X, Y, Z) points
unique_frames = seq_df_train["frame"].nunique()
type_landmark_train = type_of_landmark(seq_df_train)
face_train = [col for col in seq_df_train.columns if "face" in col]
right_hand_train = [col for col in seq_df_train.columns if "right_hand" in col]
left_hand_train = [col for col in seq_df_train.columns if "left_hand" in col]
pose_train = [col for col in seq_df_train.columns if "pose" in col]
x_face_train, y_face_train, z_face_train = x_y_z(face_train)
x_right_hand_train, y_right_hand_train, z_right_hand_train = x_y_z(right_hand_train)
x_left_hand_train, y_left_hand_train, z_left_hand_train = x_y_z(left_hand_train)
x_pose_train, y_pose_train, z_pose_train = x_y_z(pose_train)
print(f"{cyan}{'='*20} ( Train Dataset) {'='*20}")
print(
f"{blk}Landmark file for sequence_id {red}{seq_target_train}{blk} has {red}{unique_frames}{blk} frames "
)
print(
f"{blk}This landmark has {red}{len(type_landmark_train)} {blk}types of landmarks and consists of {red}{type_landmark_train}"
)
print(
f"\n{blu}[+]{blk} {blk}Face landmark has {red}{len(face_train)} {blk}points in x : {red}{len(x_face_train)} points, {blk}y : {red}{len(y_face_train)} points, {blk}z : {red}{len(z_face_train)} points"
)
print(
f"{blu}[+]{blk} {blk}Right hand landmark has {red}{len(right_hand_train)} {blk}points in x : {red}{len(x_right_hand_train)} points, {blk}y : {red}{len(y_right_hand_train)} points, {blk}z : {red}{len(z_right_hand_train)} points"
)
print(
f"{blu}[+]{blk} {blk}Left hand landmark has {red}{len(left_hand_train)} {blk}points in x : {red}{len(x_left_hand_train)} points, {blk}y : {red}{len(y_left_hand_train)} points, {blk}z : {red}{len(z_left_hand_train)} points"
)
print(
f"{blu}[+]{blk} {blk}Pose landmark has {red}{len(pose_train)} {blk}points in x : {red}{len(x_pose_train)} points, {blk}y : {red}{len(y_pose_train)} points, {blk}z : {red}{len(z_pose_train)} points"
)
# # Supplemental Dataset
supplemental = pd.read_csv(supplemental_csv)
supplemental.head()
# ## Top 10 Phrase in Supplemental Dataset
sns.set(style="whitegrid")
fig, ax = plt.subplots(figsize=(8, 8))
sns.barplot(
y=supplemental["phrase"].value_counts().head(10).sort_values(ascending=False).index,
x=supplemental["phrase"].value_counts().head(10).sort_values(ascending=False),
ax=ax,
)
ax.set_title("Top 10 Phrase in Supplemental Dataset")
ax.set_xlabel("Number of Examples")
ax.set_ylabel("Phrase")
plt.show()
# ## Parquet File of Top 1 Phrase in Supplemental Dataset
select_supp = "why do you ask silly questions"
supp_example = supplemental.query("phrase == @select_supp")["path"].values[0]
supp_landmark = pd.read_parquet(f"{base_dir}/{supp_example}")
supp_landmark
# As with the **Train Dataset**, one parquet file also holds more than one sequence_id
seq_target_supp = supplemental[
(supplemental["path"] == supp_example) & (supplemental["phrase"] == select_supp)
]["sequence_id"].values[0]
print(f"{blu}[+]{blk} Sequence ID : {blu}{seq_target_supp}")
seq_df_supp = supp_landmark[supp_landmark.index == seq_target_supp]
seq_df_supp.head()
# ## Check Landmark, Frames, and (X, Y, Z) points
unique_frames = seq_df_supp["frame"].nunique()
type_landmark_supp = type_of_landmark(seq_df_supp)
face_supp = [col for col in seq_df_supp.columns if "face" in col]
right_hand_supp = [col for col in seq_df_supp.columns if "right_hand" in col]
left_hand_supp = [col for col in seq_df_supp.columns if "left_hand" in col]
pose_supp = [col for col in seq_df_supp.columns if "pose" in col]
x_face_supp, y_face_supp, z_face_supp = x_y_z(face_supp)
x_right_hand_supp, y_right_hand_supp, z_right_hand_supp = x_y_z(right_hand_supp)
x_left_hand_supp, y_left_hand_supp, z_left_hand_supp = x_y_z(left_hand_supp)
x_pose_supp, y_pose_supp, z_pose_supp = x_y_z(pose_supp)
print(f"{cyan}{'='*20} ( Supplemental Dataset) {'='*20}")
print(
f"{blk}Landmark file for sequence_id {red}{seq_target_supp}{blk} has {red}{unique_frames}{blk} frames "
)
print(
f"{blk}This landmark has {red}{len(type_landmark_supp)} {blk}types of landmarks and consists of {red}{type_landmark_supp}"
)
print(
f"\n{blu}[+]{blk} Face landmark has {red}{len(face_supp)} {blk}points in x : {red}{len(x_face_supp)} points, {blk}y : {red}{len(y_face_supp)} points, {blk}z : {red}{len(z_face_supp)} points"
)
print(
f"{blu}[+]{blk} Right hand landmark has {red}{len(right_hand_supp)} {blk}points in x : {red}{len(x_right_hand_supp)} points, {blk}y : {red}{len(y_right_hand_supp)} points, {blk}z : {red}{len(z_right_hand_supp)} points"
)
print(
f"{blu}[+]{blk} Left hand landmark has {red}{len(left_hand_supp)} {blk}points in x : {red}{len(x_left_hand_supp)} points, {blk}y : {red}{len(y_left_hand_supp)} points, {blk}z : {red}{len(z_left_hand_supp)} points"
)
print(
f"{blu}[+]{blk} Pose landmark has {red}{len(pose_supp)} {blk}points in x : {red}{len(x_pose_supp)} points, {blk}y : {red}{len(y_pose_supp)} points, {blk}z : {red}{len(z_pose_supp)} points"
)
# # Plot in 2D
import mediapipe as mp
mp_hands = mp.solutions.hands
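# MediaPipe describes the hand skeleton as a set of (start, end) landmark-index
# pairs; the plotting loops below draw one line segment per pair. A quick peek
# at the structure (sketch; the set has no guaranteed ordering):
print(f"Number of hand connections: {len(mp_hands.HAND_CONNECTIONS)}")
print(list(mp_hands.HAND_CONNECTIONS)[:5])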
def data_plot(seq, frame, x_col, y_col, df):
x = df.query("sequence_id == @seq and frame == @frame")[x_col].iloc[0].values
y = df.query("sequence_id == @seq and frame == @frame")[y_col].iloc[0].values
landmark_idx = [
int(col.split("_")[-1])
for col in df.query("sequence_id == @seq and frame == @frame")[x_col].columns
]
dataframe = pd.DataFrame({"x": x, "y": y, "landmark_idx": landmark_idx})
return dataframe
# ## Training Data Plot Hands of "surprise az" Phrase
frame = 21
left_hand_train = data_plot(
seq_target_train, frame, x_left_hand_train, y_left_hand_train, seq_df_train
)
right_hand_train = data_plot(
seq_target_train, frame, x_right_hand_train, y_right_hand_train, seq_df_train
)
fig, ax = plt.subplots(figsize=(5, 5))
ax.scatter(right_hand_train["x"], right_hand_train["y"])
ax.scatter(left_hand_train["x"], left_hand_train["y"])
for connection in mp_hands.HAND_CONNECTIONS:
point_a = connection[0]
point_b = connection[1]
x1, y1 = right_hand_train.query("landmark_idx == @point_a")[["x", "y"]].values[0]
x2, y2 = right_hand_train.query("landmark_idx == @point_b")[["x", "y"]].values[0]
plt.plot([x1, x2], [y1, y2], color="red")
x3, y3 = left_hand_train.query("landmark_idx == @point_a")[["x", "y"]].values[0]
x4, y4 = left_hand_train.query("landmark_idx == @point_b")[["x", "y"]].values[0]
plt.plot([x3, x4], [y3, y4], color="red")
ax.set_title("why do you ask silly questions")
plt.show()
# ## Supplemental Data Plot of Hands for the "why do you ask silly questions" Phrase
frame = 130
left_hand_supp = data_plot(
seq_target_supp, frame, x_left_hand_supp, y_left_hand_supp, seq_df_supp
)
right_hand_supp = data_plot(
seq_target_supp, frame, x_right_hand_supp, y_right_hand_supp, seq_df_supp
)
fig, ax = plt.subplots(figsize=(5, 5))
ax.scatter(right_hand_supp["x"], right_hand_supp["y"])
ax.scatter(left_hand_supp["x"], left_hand_supp["y"])
for connection in mp_hands.HAND_CONNECTIONS:
point_a = connection[0]
point_b = connection[1]
x1, y1 = right_hand_supp.query("landmark_idx == @point_a")[["x", "y"]].values[0]
x2, y2 = right_hand_supp.query("landmark_idx == @point_b")[["x", "y"]].values[0]
plt.plot([x1, x2], [y1, y2], color="red")
x3, y3 = left_hand_supp.query("landmark_idx == @point_a")[["x", "y"]].values[0]
x4, y4 = left_hand_supp.query("landmark_idx == @point_b")[["x", "y"]].values[0]
plt.plot([x3, x4], [y3, y4], color="red")
ax.set_title("why do you ask silly questions")
plt.show()
| false | 0 | 3,667 | 0 | 3,667 | 3,667 |
||
129191308
|
import cv2
import os
import pandas as pd
import numpy as np
# from python_utils import meanHSV
from sklearn.neighbors import KNeighborsClassifier
import matplotlib.pyplot as plt
import seaborn as sns
# function that computes the mean HSV values of an image region
def meanHSV(img_roi):
    # convert the color space from BGR to RGB
rgbImg = cv2.cvtColor(img_roi, cv2.COLOR_BGR2RGB)
    # convert the color space from RGB to HSV
hsvImg = cv2.cvtColor(rgbImg, cv2.COLOR_RGB2HSV)
    # extract the H, S and V channels
H = hsvImg[:, :, 0]
S = hsvImg[:, :, 1]
V = hsvImg[:, :, 2]
    # compute the mean of each channel
meanH = np.mean(H)
meanS = np.mean(S)
meanV = np.mean(V)
return meanH, meanS, meanV
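# Example usage of meanHSV (a minimal sketch on a synthetic all-black 10x10
# patch, just to show the return format; the real images are processed below):
dummy_patch = np.zeros((10, 10, 3), dtype=np.uint8)
print(meanHSV(dummy_patch))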
train_path = "/kaggle/input/knn-1380/knn/dataset_kopi_kuning/train"
images = []
for category in os.listdir(train_path):
path = os.path.join(train_path, category)
for img in os.listdir(path):
img_path = os.path.join(path, img)
image = cv2.imread(img_path)
image_resize = cv2.resize(image, (200, 200))
meanH, meanS, meanV = meanHSV(image_resize)
images.append([meanH, meanS, meanV, category])
# convert the image feature list into a NumPy array
images = np.array(images)
# split the array columns into separate variables
meanH, meanS, meanV, category = images.T
# build the data dictionary for the DataFrame
data = {
"meanH": meanH,
"meanS": meanS,
"meanV": meanV,
"category": category,
}
# create the DataFrame from the data dictionary
dataFrame = pd.DataFrame(data)
# save the DataFrame to a CSV file
dataFrame.to_csv("dataset.csv")
print(dataFrame)
# import dataset
dataFrame = pd.read_csv("./dataset.csv")
# select the mean HSV features from the dataset
X = dataFrame[["meanH", "meanS", "meanV"]]
# take the ripeness labels from the dataset
Y = dataFrame["category"]
# set the number of neighbors to 3
knn = KNeighborsClassifier(n_neighbors=3)
# fit the KNN model on X and Y
knn.fit(X.values, Y)
# path to the images whose ripeness will be tested
test_path = "/kaggle/input/knn-1380/knn/dataset_kopi_kuning/test_model"
def showImg(display1, predict):
fig, ax1 = plt.subplots()
display1 = cv2.cvtColor(display1, cv2.COLOR_BGR2RGB)
title = "Predict :" + predict
print(predict)
ax1.set_title(title)
ax1.imshow(display1)
# plt.show()
n = acc = 0
t_matang = t_setengah_matang = t_mentah = 0
for category in os.listdir(test_path):
path = os.path.join(test_path, category)
for img in os.listdir(path):
img_path = os.path.join(path, img)
image = cv2.imread(img_path)
image_resize = cv2.resize(image, (200, 200))
        # compute the mean HSV of the test image
meanH, meanS, meanV = meanHSV(image_resize)
        # put the mean HSV values into a feature array
data = [meanH, meanS, meanV]
        # predict the ripeness class with KNN
predict = knn.predict([data])
preds = predict[0]
showImg(image_resize, preds)
if preds == category:
acc += 1
if preds == "matang":
t_matang += 1
elif preds == "setengah_matang":
t_setengah_matang += 1
else:
t_mentah += 1
n += 1
print(img)
print("Citra ke-", n, "Label => ", category, "| Result => ", preds)
acc = acc / n
print("\nAccuracy ==> ", acc)
total = [t_matang, t_setengah_matang, t_mentah]
cat = ["matang", "setengah matang", "mentah"]
datas = pd.DataFrame({"category": cat, "total": total})
figure, ax = plt.subplots(figsize=(20, 5))
sns.barplot(x="category", y="total", data=datas)
plt.show()
|
/fsx/loubna/kaggle_data/kaggle-code-data/data/0129/191/129191308.ipynb
| null | null |
[{"Id": 129191308, "ScriptId": 37131707, "ParentScriptVersionId": NaN, "ScriptLanguageId": 9, "AuthorUserId": 11348040, "CreationDate": "05/11/2023 17:16:27", "VersionNumber": 1.0, "Title": "knn-fix-1380", "EvaluationDate": "05/11/2023", "IsChange": true, "TotalLines": 122.0, "LinesInsertedFromPrevious": 122.0, "LinesChangedFromPrevious": 0.0, "LinesUnchangedFromPrevious": 0.0, "LinesInsertedFromFork": NaN, "LinesDeletedFromFork": NaN, "LinesChangedFromFork": NaN, "LinesUnchangedFromFork": NaN, "TotalVotes": 0}]
| null | null | null | null |
| false | 0 | 1,215 | 0 | 1,215 | 1,215 |
||
129191902
|
<jupyter_start><jupyter_text>Spotify and Youtube
Dataset of songs of various artist in the world and for each song is present:
- Several statistics of the music version on spotify, including the number of streams;
- Number of views of the official music video of the song on youtube.
# **Content**
It includes 26 variables for each of the songs collected from spotify. These variables are briefly described next:
- **Track**: name of the song, as visible on the Spotify platform.
- **Artist**: name of the artist.
- **Url_spotify**: the Url of the artist.
- **Album**: the album in which the song is contained on Spotify.
- **Album_type**: indicates if the song is released on Spotify as a single or contained in an album.
- **Uri**: a spotify link used to find the song through the API.
- **Danceability**: describes how suitable a track is for dancing based on a combination of musical elements including tempo, rhythm stability, beat strength, and overall regularity. A value of 0.0 is least danceable and 1.0 is most danceable.
- **Energy**: is a measure from 0.0 to 1.0 and represents a perceptual measure of intensity and activity. Typically, energetic tracks feel fast, loud, and noisy. For example, death metal has high energy, while a Bach prelude scores low on the scale. Perceptual features contributing to this attribute include dynamic range, perceived loudness, timbre, onset rate, and general entropy.
- **Key**: the key the track is in. Integers map to pitches using standard Pitch Class notation. E.g. 0 = C, 1 = C♯/D♭, 2 = D, and so on. If no key was detected, the value is -1.
- **Loudness**: the overall loudness of a track in decibels (dB). Loudness values are averaged across the entire track and are useful for comparing relative loudness of tracks. Loudness is the quality of a sound that is the primary psychological correlate of physical strength (amplitude). Values typically range between -60 and 0 db.
- **Speechiness**: detects the presence of spoken words in a track. The more exclusively speech-like the recording (e.g. talk show, audio book, poetry), the closer to 1.0 the attribute value. Values above 0.66 describe tracks that are probably made entirely of spoken words. Values between 0.33 and 0.66 describe tracks that may contain both music and speech, either in sections or layered, including such cases as rap music. Values below 0.33 most likely represent music and other non-speech-like tracks.
- **Acousticness**: a confidence measure from 0.0 to 1.0 of whether the track is acoustic. 1.0 represents high confidence the track is acoustic.
- **Instrumentalness**: predicts whether a track contains no vocals. "Ooh" and "aah" sounds are treated as instrumental in this context. Rap or spoken word tracks are clearly "vocal". The closer the instrumentalness value is to 1.0, the greater likelihood the track contains no vocal content. Values above 0.5 are intended to represent instrumental tracks, but confidence is higher as the value approaches 1.0.
- **Liveness**: detects the presence of an audience in the recording. Higher liveness values represent an increased probability that the track was performed live. A value above 0.8 provides strong likelihood that the track is live.
- **Valence**: a measure from 0.0 to 1.0 describing the musical positiveness conveyed by a track. Tracks with high valence sound more positive (e.g. happy, cheerful, euphoric), while tracks with low valence sound more negative (e.g. sad, depressed, angry).
- **Tempo**: the overall estimated tempo of a track in beats per minute (BPM). In musical terminology, tempo is the speed or pace of a given piece and derives directly from the average beat duration.
- **Duration_ms**: the duration of the track in milliseconds.
- **Stream**: number of streams of the song on Spotify.
- **Url_youtube**: url of the video linked to the song on Youtube, if it has any.
- **Title**: title of the videoclip on youtube.
- **Channel**: name of the channel that has published the video.
- **Views**: number of views.
- **Likes**: number of likes.
- **Comments**: number of comments.
- **Description**: description of the video on Youtube.
- **Licensed**: Indicates whether the video represents licensed content, which means that the content was uploaded to a channel linked to a YouTube content partner and then claimed by that partner.
- **official_video**: boolean value that indicates if the video found is the official video of the song.
# **Notes**
These data are heavily dependent on the time they were collected, which in this case is the 7th of February, 2023.
Kaggle dataset identifier: spotify-and-youtube
<jupyter_script># # **Exploratory Data Analysis of Spotify & YouTube Songs**
# ## **Table of Contents**
# #### * [Introduction](#section-1)
# #### * [Viewing the Data and Using Value Counts](#section-2)
# #### * [Bar Charts](#section-3)
# #### * [Comparison of Multiple Categories](#section-4)
# #### * [Conclusion](#section-5)
# ## **Introduction**
# In this notebook, I will perform exploratory data analysis (EDA) to find trends in the data between Spotify and YouTube songs.
import pandas as pd
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
sns.set_style("darkgrid")
# The data is loaded into a data frame and viewed.
df = pd.read_csv("/kaggle/input/spotify-and-youtube/Spotify_Youtube.csv")
df.head()
#
# ## **Viewing the Data and Using Value Counts**
# After loading the data and getting a basic view of the data table, a few adjustments and basic calculations can be performed. There are a couple of columns that will be irrelevant for performing EDA. These columns are removed from the data frame.
df = df.drop(["Unnamed: 0", "Url_spotify", "Uri", "Url_youtube", "Description"], axis=1)
# After removing a few columns, the info and sum commands are used to view the data types and the amount of null values. In addition, the duplicated function is run to see if there are any duplicates in the data frame.
df.info()
df.isnull().sum()
df.duplicated().sum()
# Most of the missing values seem to be from the YouTube data. This is something to keep in mind when completing the EDA.
# A basic form of exploratory data analysis is using value counts to see how the data is distributed. The value counts function is used to see the value counts for artists and album type.
df["Artist"].value_counts()
df["Album_type"].value_counts()
# Another effective method for viewing value counts is a pie chart, which is optimal for categories with 6 or fewer variables. In this case, a pie chart can be used for album type.
sum_album_type = df["Album_type"].value_counts()
# Adds color to pie chart
colors = sns.color_palette()[0:3]
# Plots pie chart
plt.pie(
sum_album_type.values,
labels=sum_album_type.index,
colors=colors,
autopct="%.0f%%",
shadow=True,
)
plt.title("Percentage of Count Values for Each Album Type")
# Edit font size
plt.rcParams["font.size"] = 12
# From the pie chart, the majority of the songs are from albums, followed by singles and compilations.
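# The same proportions can be printed in numeric form (a quick sketch using
# normalized value counts on the same column):
print(df["Album_type"].value_counts(normalize=True).round(2))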
# ## **Bar Charts**
# Bar charts are an effective method to view the distribution of the data or, in this case, the top values of a category and their proportion to one another.
# First, a bar chart is created to view the top 10 artists by total views on YouTube.
# Group in descending order of total views by artist.
artist_views = (
df.groupby("Artist", as_index=False).sum().sort_values(by="Views", ascending=False)
)
fig, ax = plt.subplots(figsize=(8, 10))
# Create barplot of top 10 artists by views.
sns.barplot(x=artist_views["Views"][:10], y=artist_views["Artist"][:10])
plt.xticks(rotation=90)
ax.set_title("Top 10 Artists by Views")
ax.set_xlabel("Total Views (Billions)")
# A bar chart can also be created to see the top 10 songs by total views on YouTube.
# Group in descending order of total views by song.
song_views = (
df.groupby("Track", as_index=False).sum().sort_values(by="Views", ascending=False)
)
fig, ax = plt.subplots(figsize=(8, 10))
# Create barplot of top 10 artists by views.
sns.barplot(x=song_views["Views"][:10], y=song_views["Track"][:10])
plt.xticks(rotation=90)
ax.set_title("Top 10 Songs by Views")
ax.set_xlabel("Total Views (Billions)")
# Next, a bar chart is created to display the top 10 artists by the number of likes on YouTube.
# Group in descending order of total likes by artists.
artist_likes = (
df.groupby("Artist", as_index=False).sum().sort_values(by="Likes", ascending=False)
)
fig, ax = plt.subplots(figsize=(8, 10))
# Create barplot of top 10 artists by likes.
sns.barplot(x=artist_likes["Likes"][:10], y=artist_likes["Artist"][:10])
plt.xticks(rotation=90)
ax.set_title("Top 10 Artists by Likes")
ax.set_xlabel("Total Likes (Billions)")
# Bar charts can help reveal trends and patterns in the data. For example, the bar charts above illustrate the popularity of artists such as Ed Sheeran and Charlie Puth, as both artists appear in all 3 categories (Shape of You is a song by Ed Sheeran, while Charlie Puth is featured in the song See You Again).
# A bar chart can also be created to see an individual artist's top songs. Below is an example of displaying the top 5 songs by Charlie Puth.
# Group in descending order of total views by artist and track.
puth_df = df.groupby(["Artist", "Track"], as_index=False).sum()
# Sort values by the total views with Charlie Puth as the artist.
puth_views = puth_df[puth_df["Artist"] == "Charlie Puth"].sort_values(
by="Views", ascending=False
)
# Create barplot of top 5 Charlie Puth songs by the total amount of views on YouTube.
fig, ax = plt.subplots(figsize=(8, 10))
sns.barplot(x=puth_views["Views"][:5], y=puth_views["Track"][:5])
plt.xticks(rotation=90)
ax.set_title("Top 5 Charlie Puth Songs by Views")
ax.set_xlabel("Total Views (Billions)")
# As seen above, bar charts give a visual representation of the data's proportionality and distribution. Bar charts are easy to create and easy to interpret, making them a go-to tool in exploratory data analysis.
# ## **Comparison of Multiple Categories**
# Next, it's possible to compare multiple categories in a bar chart. This is useful in seeing the commonalities among popular songs.
# Before creating a multi-category bar chart, a correlation heat map is designed to see which categories to compare.
fig, ax = plt.subplots(figsize=(13, 8))
sns.heatmap(data=df.corr().round(2), annot=True, cmap="coolwarm")
plt.show()
# The correlation heat map only considers variables with numerical values, so some variables will be excluded, such as 'Artist' and 'Album'.
# From the correlation heat map, there appears to be a correlation between views, likes, comments, and stream.
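# To quantify that observation, a quick sketch of just those pairwise
# correlations (column names as given in the dataset description):
print(df[["Views", "Likes", "Comments", "Stream"]].corr().round(2))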
# Since I want to compare the makeup of songs, a new table is created to compare views, energy, valence, danceability, and loudness of songs. The data is sorted by the top 10 artists by views on YouTube.
top_artists = df.sort_values("Views", ascending=False).head(10)
top_artists[["Artist", "Views", "Energy", "Valence", "Danceability", "Loudness"]]
# Now that the new table is created, the multi-category bar chart is created. This is also known as a category plot, since continuous numeric variables are on one axis and a non-numeric variable is on another axis.
# Since the loudness variable is a negative value, I stuck with using energy, valence, and danceability for the plot.
top_categories = top_artists.melt(
id_vars="Artist",
value_vars=["Energy", "Valence", "Danceability"],
var_name="Variables",
value_name="Value",
)
sns.catplot(
x="Artist",
y="Value",
hue="Variables",
data=top_categories,
kind="bar",
legend=False,
height=5,
aspect=2,
)
ax.set_xlabel("Artist")
plt.xticks(rotation=90)
plt.legend(loc="upper right", bbox_to_anchor=(1.20, 1))
# From the chart, it appears the top artists have a high value in two of the three categories. For example, Luis Fonsi has high energy and valence, while Katy Perry has high energy and danceability.
# The same chart is created for the top 10 artists by Spotify streams.
top_artists_stream = df.sort_values("Stream", ascending=False).head(10)
top_artists_stream[
["Artist", "Stream", "Energy", "Valence", "Danceability", "Loudness"]
]
top_categories_stream = top_artists_stream.melt(
id_vars="Artist",
value_vars=["Energy", "Valence", "Danceability"],
var_name="Variables",
value_name="Value",
)
sns.catplot(
x="Artist",
y="Value",
hue="Variables",
data=top_categories_stream,
kind="bar",
legend=False,
height=5,
aspect=2,
)
ax.set_xlabel("Artist")
plt.xticks(rotation=90)
plt.legend(loc="upper right", bbox_to_anchor=(1.20, 1))
|
/fsx/loubna/kaggle_data/kaggle-code-data/data/0129/191/129191902.ipynb
|
spotify-and-youtube
|
salvatorerastelli
|
[{"Id": 129191902, "ScriptId": 38407799, "ParentScriptVersionId": NaN, "ScriptLanguageId": 9, "AuthorUserId": 2941887, "CreationDate": "05/11/2023 17:22:30", "VersionNumber": 1.0, "Title": "Spotify & YouTube EDA", "EvaluationDate": "05/11/2023", "IsChange": true, "TotalLines": 172.0, "LinesInsertedFromPrevious": 172.0, "LinesChangedFromPrevious": 0.0, "LinesUnchangedFromPrevious": 0.0, "LinesInsertedFromFork": NaN, "LinesDeletedFromFork": NaN, "LinesChangedFromFork": NaN, "LinesUnchangedFromFork": NaN, "TotalVotes": 3}]
|
[{"Id": 185020104, "KernelVersionId": 129191902, "SourceDatasetVersionId": 5201951}]
|
[{"Id": 5201951, "DatasetId": 3025170, "DatasourceVersionId": 5274235, "CreatorUserId": 12271862, "LicenseName": "CC0: Public Domain", "CreationDate": "03/20/2023 15:43:25", "VersionNumber": 2.0, "Title": "Spotify and Youtube", "Slug": "spotify-and-youtube", "Subtitle": "Statistics for the Top 10 songs of various spotify artists and their yt video.", "Description": "Dataset of songs of various artist in the world and for each song is present:\n- Several statistics of the music version on spotify, including the number of streams;\n- Number of views of the official music video of the song on youtube.\n\n\n# **Content**\nIt includes 26 variables for each of the songs collected from spotify. These variables are briefly described next:\n- **Track**: name of the song, as visible on the Spotify platform.\n- **Artist**: name of the artist.\n- **Url_spotify**: the Url of the artist.\n- **Album**: the album in wich the song is contained on Spotify.\n- **Album_type**: indicates if the song is relesead on Spotify as a single or contained in an album.\n- **Uri**: a spotify link used to find the song through the API.\n- **Danceability**: describes how suitable a track is for dancing based on a combination of musical elements including tempo, rhythm stability, beat strength, and overall regularity. A value of 0.0 is least danceable and 1.0 is most danceable.\n- **Energy**: is a measure from 0.0 to 1.0 and represents a perceptual measure of intensity and activity. Typically, energetic tracks feel fast, loud, and noisy. For example, death metal has high energy, while a Bach prelude scores low on the scale. Perceptual features contributing to this attribute include dynamic range, perceived loudness, timbre, onset rate, and general entropy.\n- **Key**: the key the track is in. Integers map to pitches using standard Pitch Class notation. E.g. 0 = C, 1 = C\u266f/D\u266d, 2 = D, and so on. If no key was detected, the value is -1.\n- **Loudness**: the overall loudness of a track in decibels (dB). Loudness values are averaged across the entire track and are useful for comparing relative loudness of tracks. Loudness is the quality of a sound that is the primary psychological correlate of physical strength (amplitude). Values typically range between -60 and 0 db.\n- **Speechiness**: detects the presence of spoken words in a track. The more exclusively speech-like the recording (e.g. talk show, audio book, poetry), the closer to 1.0 the attribute value. Values above 0.66 describe tracks that are probably made entirely of spoken words. Values between 0.33 and 0.66 describe tracks that may contain both music and speech, either in sections or layered, including such cases as rap music. Values below 0.33 most likely represent music and other non-speech-like tracks.\n- **Acousticness**: a confidence measure from 0.0 to 1.0 of whether the track is acoustic. 1.0 represents high confidence the track is acoustic.\n- **Instrumentalness**: predicts whether a track contains no vocals. \"Ooh\" and \"aah\" sounds are treated as instrumental in this context. Rap or spoken word tracks are clearly \"vocal\". The closer the instrumentalness value is to 1.0, the greater likelihood the track contains no vocal content. Values above 0.5 are intended to represent instrumental tracks, but confidence is higher as the value approaches 1.0.\n- **Liveness**: detects the presence of an audience in the recording. Higher liveness values represent an increased probability that the track was performed live. 
A value above 0.8 provides strong likelihood that the track is live.\n- **Valence**: a measure from 0.0 to 1.0 describing the musical positiveness conveyed by a track. Tracks with high valence sound more positive (e.g. happy, cheerful, euphoric), while tracks with low valence sound more negative (e.g. sad, depressed, angry).\n- **Tempo**: the overall estimated tempo of a track in beats per minute (BPM). In musical terminology, tempo is the speed or pace of a given piece and derives directly from the average beat duration.\n- **Duration_ms**: the duration of the track in milliseconds.\n- **Stream**: number of streams of the song on Spotify.\n- **Url_youtube**: url of the video linked to the song on Youtube, if it have any.\n- **Title**: title of the videoclip on youtube.\n- **Channel**: name of the channel that have published the video.\n- **Views**: number of views.\n- **Likes**: number of likes.\n- **Comments**: number of comments.\n- **Description**: description of the video on Youtube.\n- **Licensed**: Indicates whether the video represents licensed content, which means that the content was uploaded to a channel linked to a YouTube content partner and then claimed by that partner.\n- **official_video**: boolean value that indicates if the video found is the official video of the song.\n\n# **Notes**\nThese datas are heavily dependent on the time they were collected, which is in this case the 7th of February, 2023.", "VersionNotes": "Data Update 2023/03/20", "TotalCompressedBytes": 0.0, "TotalUncompressedBytes": 0.0}]
|
[{"Id": 3025170, "CreatorUserId": 12271862, "OwnerUserId": 12271862.0, "OwnerOrganizationId": NaN, "CurrentDatasetVersionId": 5201951.0, "CurrentDatasourceVersionId": 5274235.0, "ForumId": 3064429, "Type": 2, "CreationDate": "03/20/2023 15:22:42", "LastActivityDate": "03/20/2023", "TotalViews": 115230, "TotalDownloads": 17868, "TotalVotes": 494, "TotalKernels": 46}]
|
[{"Id": 12271862, "UserName": "salvatorerastelli", "DisplayName": "Salvatore Rastelli", "RegisterDate": "11/07/2022", "PerformanceTier": 0}]
|
| false | 1 | 2,349 | 3 | 3,542 | 2,349 |
||
129191903
|
<jupyter_start><jupyter_text>FiveThirtyEight Comic Characters Dataset
### Content
# Comic Characters
This folder contains data behind the story [Comic Books Are Still Made By Men, For Men And About Men](http://fivethirtyeight.com/features/women-in-comic-books/).
The data comes from [Marvel Wikia](http://marvel.wikia.com/Main_Page) and [DC Wikia](http://dc.wikia.com/wiki/Main_Page). Characters were scraped on August 24. Appearance counts were scraped on September 2. The month and year of the first issue each character appeared in was pulled on October 6.
The data is split into two files, for DC and Marvel, respectively: `dc-wikia-data.csv` and `marvel-wikia-data.csv`. Each file has the following variables:
Variable | Definition
---|---------
`page_id` | The unique identifier for that character's page within the wikia
`name` | The name of the character
`urlslug` | The unique url within the wikia that takes you to the character
`ID` | The identity status of the character (Secret Identity, Public identity, [on marvel only: No Dual Identity])
`ALIGN` | If the character is Good, Bad or Neutral
`EYE` | Eye color of the character
`HAIR` | Hair color of the character
`SEX` | Sex of the character (e.g. Male, Female, etc.)
`GSM` | If the character is a gender or sexual minority (e.g. Homosexual characters, bisexual characters)
`ALIVE` | If the character is alive or deceased
`APPEARANCES` | The number of appearances of the character in comic books (as of Sep. 2, 2014. Number will become increasingly out of date as time goes on.)
`FIRST APPEARANCE` | The month and year of the character's first appearance in a comic book, if available
`YEAR` | The year of the character's first appearance in a comic book, if available
### Context
This is a dataset from [FiveThirtyEight](https://fivethirtyeight.com/) hosted on their [GitHub](https://github.com/fivethirtyeight/data). Explore FiveThirtyEight data using Kaggle and all of the data sources available through the FiveThirtyEight [organization page](https://www.kaggle.com/fivethirtyeight)!
* Update Frequency: This dataset is updated daily.
Kaggle dataset identifier: fivethirtyeight-comic-characters-dataset
<jupyter_script># # Карпов Даниил Константинович, ИУ5-61Б. Variant No. 10: task number - 2; dataset number - 2.
import pandas as pd
import numpy as np
from sklearn.impute import SimpleImputer
from sklearn.impute import MissingIndicator
import seaborn as sns
import matplotlib.pyplot as plt
from pylab import rcParams  # used to set figure sizes
data = pd.read_csv(
"/kaggle/input/fivethirtyeight-comic-characters-dataset/marvel-wikia-data.csv",
sep=",",
)
data.head()
data.isnull().sum()
data.info()
missing_count = data.isnull().sum()
all_count = data.isnull().count()
pd.concat(
    [missing_count.sort_values(), (missing_count / all_count * 100).sort_values()],
    axis=1,
    keys=["Missing count", "Missing percentage"],
).tail(11)
# ## Handling missing values in the categorical feature "GSM"
# Drop this feature entirely, since about 99% of its values are missing
data.drop(["GSM"], axis=1, inplace=True)
# ## Handling missing values in "APPEARANCES"
# Impute this feature, since the share of missing values is small (only about 5%)
fig, ax = plt.subplots(figsize=(10, 10))
sns.scatterplot(ax=ax, x="APPEARANCES", y="YEAR", data=data, hue="HAIR")
# Use the mode ("most_frequent") strategy for the imputation:
indicator = MissingIndicator()
mask_missing_values_only = indicator.fit_transform(data[["APPEARANCES"]])
imp_num = SimpleImputer(strategy="most_frequent")
data_num_imp = imp_num.fit_transform(data[["APPEARANCES"]])
data["APPEARANCES"] = data_num_imp
filled_data = data_num_imp[mask_missing_values_only]
print(
"APPEARANCES",
"most_frequent",
filled_data.size,
filled_data[0],
filled_data[filled_data.size - 1],
sep="; ",
)
# One more plot, just for illustration
fig, ax = plt.subplots(figsize=(10, 10))
sns.scatterplot(ax=ax, x="SEX", y="YEAR", data=data, hue="ALIVE")
# ## Final view of the dataset after handling missing values in the two features
data.info()
|
/fsx/loubna/kaggle_data/kaggle-code-data/data/0129/191/129191903.ipynb
|
fivethirtyeight-comic-characters-dataset
| null |
[{"Id": 129191903, "ScriptId": 38408081, "ParentScriptVersionId": NaN, "ScriptLanguageId": 9, "AuthorUserId": 11548528, "CreationDate": "05/11/2023 17:22:31", "VersionNumber": 1.0, "Title": "rk1_Tazenkov", "EvaluationDate": "05/11/2023", "IsChange": true, "TotalLines": 57.0, "LinesInsertedFromPrevious": 57.0, "LinesChangedFromPrevious": 0.0, "LinesUnchangedFromPrevious": 0.0, "LinesInsertedFromFork": NaN, "LinesDeletedFromFork": NaN, "LinesChangedFromFork": NaN, "LinesUnchangedFromFork": NaN, "TotalVotes": 0}]
|
[{"Id": 185020105, "KernelVersionId": 129191903, "SourceDatasetVersionId": 396588}]
|
[{"Id": 396588, "DatasetId": 56596, "DatasourceVersionId": 411616, "CreatorUserId": 1, "LicenseName": "CC0: Public Domain", "CreationDate": "04/26/2019 15:01:41", "VersionNumber": 111.0, "Title": "FiveThirtyEight Comic Characters Dataset", "Slug": "fivethirtyeight-comic-characters-dataset", "Subtitle": "Explore Data from FiveThirtyEight", "Description": "### Content \n\n# Comic Characters\n\nThis folder contains data behind the story [Comic Books Are Still Made By Men, For Men And About Men](http://fivethirtyeight.com/features/women-in-comic-books/).\n\nThe data comes from [Marvel Wikia](http://marvel.wikia.com/Main_Page) and [DC Wikia](http://dc.wikia.com/wiki/Main_Page). Characters were scraped on August 24. Appearance counts were scraped on September 2. The month and year of the first issue each character appeared in was pulled on October 6.\n\nThe data is split into two files, for DC and Marvel, respectively: `dc-wikia-data.csv` and `marvel-wikia-data.csv`. Each file has the following variables:\n\nVariable | Definition\n---|---------\n`page_id` | The unique identifier for that characters page within the wikia\n`name` | The name of the character\n`urlslug` | The unique url within the wikia that takes you to the character\n`ID` | The identity status of the character (Secret Identity, Public identity, [on marvel only: No Dual Identity])\n`ALIGN` | If the character is Good, Bad or Neutral\n`EYE` | Eye color of the character\n`HAIR` | Hair color of the character\n`SEX` | Sex of the character (e.g. Male, Female, etc.)\n`GSM` | If the character is a gender or sexual minority (e.g. Homosexual characters, bisexual characters)\n`ALIVE` | If the character is alive or deceased\n`APPEARANCES` | The number of appareances of the character in comic books (as of Sep. 2, 2014. Number will become increasingly out of date as time goes on.)\n`FIRST APPEARANCE` | The month and year of the character's first appearance in a comic book, if available\n`YEAR` | The year of the character's first appearance in a comic book, if available\n \n\n### Context \n\nThis is a dataset from [FiveThirtyEight](https://fivethirtyeight.com/) hosted on their [GitHub](https://github.com/fivethirtyeight/data). Explore FiveThirtyEight data using Kaggle and all of the data sources available through the FiveThirtyEight [organization page](https://www.kaggle.com/fivethirtyeight)! \n\n* Update Frequency: This dataset is updated daily.\n\n### Acknowledgements\n\nThis dataset is maintained using GitHub's [API](https://developer.github.com/v3/?) and Kaggle's [API](https://github.com/Kaggle/kaggle-api).\n\nThis dataset is distributed under the [Attribution 4.0 International (CC BY 4.0)](https://creativecommons.org/licenses/by/4.0/) license. \n\n[Cover photo](https://unsplash.com/photos/pJKpk_rOLnw) by [Zbysiu Rodak](https://unsplash.com/@zbigniew) on [Unsplash](https://unsplash.com/) \n_Unsplash Images are distributed under a unique [Unsplash License](https://unsplash.com/license)._", "VersionNotes": "Automated data update 20190426", "TotalCompressedBytes": 3513875.0, "TotalUncompressedBytes": 591701.0}]
|
[{"Id": 56596, "CreatorUserId": 1, "OwnerUserId": NaN, "OwnerOrganizationId": 170.0, "CurrentDatasetVersionId": 396588.0, "CurrentDatasourceVersionId": 411616.0, "ForumId": 65387, "Type": 2, "CreationDate": "09/26/2018 18:05:51", "LastActivityDate": "09/26/2018", "TotalViews": 163597, "TotalDownloads": 24032, "TotalVotes": 3631, "TotalKernels": 183}]
| null |
| false | 0 | 737 | 0 | 1,362 | 737 |
||
129191594
|
import pandas as pd
import networkx as nx
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report
import random
from karateclub import DeepWalk
import matplotlib.pyplot as plt
edges_data = pd.read_csv("/kaggle/input/twitch/large_twitch_edges.csv")
edges_data.info()
node_data = pd.read_csv("/kaggle/input/twitch/large_twitch_features.csv")
node_data.head()
graph = nx.from_pandas_edgelist(edges_data, "numeric_id_1", "numeric_id_2")
# Define DeepWalk parameters
walk_length = 5 # Length of random walks
dimension = 32 # Dimensionality of embeddings
number_of_walks = 5 # Number of random walks per node
window_size = 2 # Context window size for skip-gram model
iterations = 1 # Number of iterations over the graph
# Generate train embeddings
deepwalk = DeepWalk(
walk_length=walk_length,
dimensions=dimension,
walk_number=number_of_walks,
window_size=window_size,
epochs=iterations,
)
deepwalk.fit(graph)
graph_embeddings = deepwalk.get_embedding()
graph_embeddings.shape
len(graph.edges())
num_negative_edges = int(len(graph.edges()))
negative_edges = set()
negative_edges_count = 0
nodes = list(graph.nodes())
edges = set(graph.edges())
while negative_edges_count < num_negative_edges:
u = random.choice(nodes)
v = random.choice(nodes)
if (
u != v
and (u, v) not in edges
and (v, u) not in edges
and (u, v) not in negative_edges
):
negative_edges.add((u, v))
negative_edges_count += 1
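# Quick sanity check (sketch): none of the sampled negative pairs should appear
# in the real edge set, in either direction.
assert all(
    (u, v) not in edges and (v, u) not in edges for u, v in negative_edges
), "a negative sample overlaps a real edge"
print(f"Sampled {len(negative_edges)} negative edges")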
# Link prediction using logistic regression
X = []
y = []
for edge in edges:
node1, node2 = edge
embedding1 = graph_embeddings[node1]
embedding2 = graph_embeddings[node2]
X.append(embedding1 - embedding2)
y.append(1)
for edge in negative_edges:
node1, node2 = edge
embedding1 = graph_embeddings[node1]
embedding2 = graph_embeddings[node2]
X.append(embedding1 - embedding2)
y.append(0)
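# The embedding difference used above is order-sensitive; a common symmetric
# alternative when building link-prediction features is the element-wise
# (Hadamard) product of the two endpoint embeddings. Sketch of that variant
# (not used by the models below):
def hadamard_edge_feature(node1, node2, embeddings=graph_embeddings):
    return embeddings[node1] * embeddings[node2]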
X_train, X_test, y_train, y_test = train_test_split(
X, y, test_size=0.2, random_state=42
)
# ## Logistic Regression
model = LogisticRegression()
model.fit(X_train, y_train)
y_pred = model.predict(X_test)
from sklearn.metrics import accuracy_score, precision_score
# Calculate accuracy
accuracy = accuracy_score(y_test, y_pred)
# Calculate precision
precision = precision_score(y_test, y_pred)
print("Accuracy:", accuracy)
print("Precision:", precision)
# ## MLP Algorithm
from sklearn.neural_network import MLPClassifier
model = MLPClassifier(
hidden_layer_sizes=(10,), activation="relu", solver="adam", random_state=42
)
model.fit(X_train, y_train)
y_pred = model.predict(X_test)
# Calculate accuracy
accuracy = accuracy_score(y_test, y_pred)
# Calculate precision
precision = precision_score(y_test, y_pred)
print("Accuracy:", accuracy)
print("Precision:", precision)
# ## Support Vector Machine (SVM)
from sklearn.svm import SVC
svm_model = SVC()
svm_model.fit(X_train, y_train)
y_pred = svm_model.predict(X_test)
# Calculate accuracy
accuracy = accuracy_score(y_test, y_pred)
# Calculate precision
precision = precision_score(y_test, y_pred)
print("Accuracy:", accuracy)
print("Precision:", precision)
|
/fsx/loubna/kaggle_data/kaggle-code-data/data/0129/191/129191594.ipynb
| null | null |
[{"Id": 129191594, "ScriptId": 38381263, "ParentScriptVersionId": NaN, "ScriptLanguageId": 9, "AuthorUserId": 1643975, "CreationDate": "05/11/2023 17:19:08", "VersionNumber": 1.0, "Title": "TopicsInDB_FinalPRoject", "EvaluationDate": "05/11/2023", "IsChange": true, "TotalLines": 138.0, "LinesInsertedFromPrevious": 138.0, "LinesChangedFromPrevious": 0.0, "LinesUnchangedFromPrevious": 0.0, "LinesInsertedFromFork": NaN, "LinesDeletedFromFork": NaN, "LinesChangedFromFork": NaN, "LinesUnchangedFromFork": NaN, "TotalVotes": 0}]
| null | null | null | null |
| false | 0 | 978 | 0 | 978 | 978 |
||
129191335
|
# # Using CLIPSeg with Hugging Face Transformers
# Using Hugging Face Transformers, you can easily download and run a pre-trained CLIPSeg model on your images. Let's start by importing the required libraries.
import requests
from PIL import Image
import numpy as np
import torch
import cv2
import matplotlib.pyplot as plt
from transformers import CLIPSegProcessor, CLIPSegForImageSegmentation
# To download the model, simply instantiate it.
processor = CLIPSegProcessor.from_pretrained("CIDAS/clipseg-rd64-refined")
model = CLIPSegForImageSegmentation.from_pretrained("CIDAS/clipseg-rd64-refined")
# Now we can load an image to try out the segmentation. Here we load an indoor scene from a URL.
# URL of the image to load
url = "https://i.pinimg.com/564x/f9/92/79/f992799d34ed72382794c1abcebeb50f.jpg"
# Send a GET request to the image URL and store the response
response = requests.get(url, stream=True)
# Open the response content as an image using PIL
image = Image.open(response.raw)
# Display the image
image.show()
# image = Image.open("/content/home.jpg")
# image.show()
# ## Model prediction on Text prompts
# Let’s start by defining some text categories we want to segment.
prompts = ["tv", "sofa", "flowers", "painting", "lamps"]
# prompts = ["backbag","yellow balloon","pair of shoes","door","red chair"]
image = image.convert("RGB")
# Convert to PyTorch tensor
tensor = torch.tensor(np.array(image)).permute(2, 0, 1).unsqueeze(0).float()
# Now that we have our inputs, we can process them and input them to the model.
inputs = processor(
text=prompts,
images=[image] * len(prompts),
padding="max_length",
return_tensors="pt",
)
# predict
with torch.no_grad():
outputs = model(**inputs)
preds = outputs.logits.unsqueeze(1)
masks = torch.sigmoid(preds).squeeze(1)
maskss = outputs.logits
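# Quick shape check (sketch): CLIPSeg returns one low-resolution logit map per
# text prompt, and `masks` holds the sigmoid-activated version of those maps.
print("logits shape:", outputs.logits.shape)
print("masks shape: ", masks.shape)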
# ## Different visualizations
fig, ax = plt.subplots(1, len(prompts) + 1, figsize=(3 * (len(prompts) + 1), 4))
[ax[i].axis("off") for i in range(len(prompts) + 1)]
ax[0].imshow(image)
for i, prompt in enumerate(prompts):
# get the predicted heat map
pred_heatmap = torch.sigmoid(preds[i][0])
# resize the heat map to the original image size
pred_heatmap = pred_heatmap.cpu().numpy()
pred_heatmap = cv2.resize(pred_heatmap, (image.width, image.height))
# threshold the heat map to remove noise
threshold = 0.5
pred_heatmap[pred_heatmap < threshold] = 0
pred_heatmap[pred_heatmap >= threshold] = 1
# apply the heat map on the original image
heatmap = np.uint8(255 * pred_heatmap)
heatmap = cv2.applyColorMap(heatmap, cv2.COLORMAP_JET)
overlayed_img = cv2.addWeighted(np.array(image), 0.5, heatmap, 0.5, 0)
# display the overlayed image
ax[i + 1].imshow(overlayed_img)
ax[i + 1].set_title(prompt)
plt.show()
# print(preds)
# Finally, let’s visualize the output.
_, ax = plt.subplots(1, len(prompts) + 1, figsize=(3 * (len(prompts) + 1), 4))
[a.axis("off") for a in ax.flatten()]
ax[0].imshow(image)
[ax[i + 1].imshow(torch.sigmoid(preds[i][0])) for i in range(len(prompts))]
[ax[i + 1].text(0, -15, prompt) for i, prompt in enumerate(prompts)]
fig, ax = plt.subplots(1, len(prompts) + 1, figsize=(3 * (len(prompts) + 1), 4))
for i in range(len(prompts)):
mask = masks[i]
ax[i + 1].imshow(mask, cmap="gray")
ax[i + 1].axis("off")
ax[i + 1].set_title(f"Mask {i+1}")
mask = mask.numpy()
mask = np.where(mask > 0.5, 1, 0).astype(np.uint8)
contours, hierarchy = cv2.findContours(
mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE
)
if len(contours) > 0:
areas = [cv2.contourArea(c) for c in contours]
max_index = np.argmax(areas)
contour = contours[max_index]
x, y, w, h = cv2.boundingRect(contour)
img_cropped = np.array(image.crop((x, y, x + w, y + h)))
ax[0].imshow(img_cropped)
ax[0].axis("off")
ax[0].set_title(f"Part of Image Corresponding to Mask {i+1}")
plt.show()
# Display the original image and the heat map masks
fig, axs = plt.subplots(1, len(prompts) + 1, figsize=(3 * (len(prompts) + 1), 4))
axs[0].imshow(image)
axs[0].axis("off")
for i, prompt in enumerate(prompts):
# Get the heat map for the prompt
heat_map = torch.sigmoid(preds[i][0]).detach().cpu().numpy()
# Resize the heat map to match the original image size
heat_map_resized = cv2.resize(heat_map, (image.size[0], image.size[1]))
# Apply a threshold to the heat map to get the mask
mask = (heat_map_resized > 0.5).astype(np.uint8)
# Apply the mask to the original image
masked_image = cv2.bitwise_and(np.array(image), np.array(image), mask=mask)
# Display the masked image
axs[i + 1].imshow(masked_image)
axs[i + 1].axis("off")
axs[i + 1].text(0, -15, prompt)
plt.show()
# # **Depth estimation**
# ---
#
from transformers import DPTImageProcessor, DPTForDepthEstimation
processor2 = DPTImageProcessor.from_pretrained("Intel/dpt-large")
model2 = DPTForDepthEstimation.from_pretrained("Intel/dpt-large")
# ## Testing the CLIPSeg mask for the second object (idx 1) in the prompt list
# Get the heat map for the prompt
heat_map = torch.sigmoid(preds[1][0]).detach().cpu().numpy()
# Resize the heat map to match the original image size
heat_map_resized = cv2.resize(heat_map, (image.size[0], image.size[1]))
# Apply a threshold to the heat map to get the mask
mask = (heat_map_resized > 0.5).astype(np.uint8)
# Apply the mask to the original image
masked_image = cv2.bitwise_and(np.array(image), np.array(image), mask=mask)
plt.axis("off")
plt.text(250, -10, prompts[1])  # label the figure with the prompt actually used (index 1)
plt.imshow(masked_image)
plt.imshow(mask, cmap="gray")
print(np.min(mask))
print(np.max(mask))
# ## Estimating the Depth
inputs = processor2(images=image, return_tensors="pt")
with torch.no_grad():
outputs2 = model2(**inputs)
predicted_depth = outputs2.predicted_depth
predicted_depth.shape
# interpolate to original size
prediction2 = torch.nn.functional.interpolate(
predicted_depth.unsqueeze(1),
size=image.size[::-1],
mode="bicubic",
align_corners=False,
)
# visualize the prediction
output2 = prediction2.squeeze().cpu().numpy()
formatted = (output2 * 255 / np.max(output2)).astype("uint8")
depth = Image.fromarray(formatted)
depth
# Apply the mask to the depth image
# depth_masked = cv2.bitwise_and(formatted, formatted, mask=mask)
print(np.shape(depth))
# ## Applying and preprocessing the CLIPSeg `mask` on the `depth` image output by DPT
# Apply the mask to the depth image
# print(np.min(depth))
# print(np.max(depth))
masked_depth = cv2.bitwise_and(np.array(depth), np.array(depth), mask=mask)
# Display the masked depth image
print(np.shape(masked_depth))
plt.imshow(masked_depth, cmap="gray")
plt.axis("off")
plt.show()
mask = np.uint8(mask) * 255
print(np.unique(mask))
# plt.imshow(mask,cmap='gray')
print(np.unique(mask))
x1, x2 = np.where(mask == 255)
print(len(x1))
print(len(x2))
depth = np.array(depth)
depth = np.expand_dims(depth, axis=2)
print(np.shape(depth))
plt.imshow(depth, cmap="gray", vmin=0, vmax=255)
depth_img = np.zeros((mask.shape[0], mask.shape[1], 1))
depth_img = np.uint8(depth_img) * 255
# plt.imshow(depth,cmap='gray')
img_array = np.array(image)
depth_img[x1, x2] = depth[x1, x2]
print(np.min(depth_img))
print(np.max(depth_img))
plt.imshow(depth_img, cmap="gray", vmin=0, vmax=255)
cv2.imwrite("depth_img.jpg", depth_img)
# Calculate the average depth value for the object in the mask
object_depth = np.mean(depth_img[depth_img != 0])
print(depth_img[depth_img != 0])
print(object_depth)
# Determine whether the object is close or far based on its depth value
if object_depth > 60:
print("The object is close")
else:
print("The object is far")
plt.imshow(image)
# ## Final prediction:
# * List of objects detected using the text prompts as the search query
# * Depth estimation for each detected object, to tell whether it is near or far
#
# Display the original image and the heat map masks
fig, axs = plt.subplots(1, len(prompts) + 1, figsize=(3 * (len(prompts) + 1), 4))
axs[0].imshow(depth, cmap="gray", vmin=0, vmax=255)
axs[0].axis("off")
for i, prompt in enumerate(prompts):
# Get the heat map for the prompt
heat_map = torch.sigmoid(preds[i][0]).detach().cpu().numpy()
# Resize the heat map to match the original image size
heat_map_resized = cv2.resize(heat_map, (image.size[0], image.size[1]))
# Apply a threshold to the heat map to get the mask
mask = (heat_map_resized > 0.5).astype(np.uint8)
# Apply the mask to the original image
masked_depth = cv2.bitwise_and(np.array(depth), np.array(depth), mask=mask)
# masked_depth = mask * np.array(depth)
# Calculate the average depth value for the object in the mask
object_depth = np.mean(masked_depth[masked_depth != 0])
print(masked_depth[masked_depth != 0])
print(object_depth)
# Determine whether the object is close or far based on its depth value
if np.count_nonzero(masked_depth) == 0:
print(f"The {prompt} is not found")
elif object_depth > 60:
print(f"The {prompt} object is close")
else:
print(f"The {prompt} object is far")
print("_________________________________________")
# Display the masked image
axs[i + 1].imshow(masked_depth, cmap="gray", vmin=0, vmax=255)
axs[i + 1].axis("off")
axs[i + 1].text(0, -15, prompt)
plt.show()
|
/fsx/loubna/kaggle_data/kaggle-code-data/data/0129/191/129191335.ipynb
| null | null |
[{"Id": 129191335, "ScriptId": 38402854, "ParentScriptVersionId": NaN, "ScriptLanguageId": 9, "AuthorUserId": 12675357, "CreationDate": "05/11/2023 17:16:41", "VersionNumber": 2.0, "Title": "clipseg_zero_shot_Textprompt_Depth_estimation", "EvaluationDate": "05/11/2023", "IsChange": true, "TotalLines": 307.0, "LinesInsertedFromPrevious": 4.0, "LinesChangedFromPrevious": 0.0, "LinesUnchangedFromPrevious": 303.0, "LinesInsertedFromFork": NaN, "LinesDeletedFromFork": NaN, "LinesChangedFromFork": NaN, "LinesUnchangedFromFork": NaN, "TotalVotes": 0}]
| null | null | null | null |
# # Using CLIPSeg with Hugging Face Transformers
# Using Hugging Face Transformers, you can easily download and run a pre-trained CLIPSeg model on your images. Let’s start by installing transformers.
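# The pretrained weights below are downloaded from the Hugging Face Hub, so the transformers library
# has to be importable. If it is missing from the environment, a plain pip install is enough (standard
# PyPI package name; the exact version is deliberately left unpinned here):
# !pip install -q transformers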
import requests
from PIL import Image
import numpy as np
import torch
import cv2
import matplotlib.pyplot as plt
from transformers import CLIPSegProcessor, CLIPSegForImageSegmentation
# To download the model, simply instantiate it.
processor = CLIPSegProcessor.from_pretrained("CIDAS/clipseg-rd64-refined")
model = CLIPSegForImageSegmentation.from_pretrained("CIDAS/clipseg-rd64-refined")
# Now we can load an image to try out the segmentation. We fetch the example picture from the URL below.
# URL of the image to load
url = "https://i.pinimg.com/564x/f9/92/79/f992799d34ed72382794c1abcebeb50f.jpg"
# Send a GET request to the image URL and store the response
response = requests.get(url, stream=True)
# Open the response content as an image using PIL
image = Image.open(response.raw)
# Display the image
image.show()
# image = Image.open("/content/home.jpg")
# image.show()
# ## Model prediction on Text prompts
# Let’s start by defining some text categories we want to segment.
prompts = ["tv", "sofa", "flowers", "painting", "lamps"]
# prompts = ["backbag","yellow balloon","pair of shoes","door","red chair"]
image = image.convert("RGB")
# Convert to PyTorch tensor
tensor = torch.tensor(np.array(image)).permute(2, 0, 1).unsqueeze(0).float()
# Now that we have our inputs, we can process them and input them to the model.
inputs = processor(
text=prompts,
images=[image] * len(prompts),
padding="max_length",
return_tensors="pt",
)
# predict
with torch.no_grad():
outputs = model(**inputs)
preds = outputs.logits.unsqueeze(1)
masks = torch.sigmoid(preds).squeeze(1)
maskss = outputs.logits
# ## Different visualizations
fig, ax = plt.subplots(1, len(prompts) + 1, figsize=(3 * (len(prompts) + 1), 4))
[ax[i].axis("off") for i in range(len(prompts) + 1)]
ax[0].imshow(image)
for i, prompt in enumerate(prompts):
# get the predicted heat map
pred_heatmap = torch.sigmoid(preds[i][0])
# resize the heat map to the original image size
pred_heatmap = pred_heatmap.cpu().numpy()
pred_heatmap = cv2.resize(pred_heatmap, (image.width, image.height))
# threshold the heat map to remove noise
threshold = 0.5
pred_heatmap[pred_heatmap < threshold] = 0
pred_heatmap[pred_heatmap >= threshold] = 1
# apply the heat map on the original image
heatmap = np.uint8(255 * pred_heatmap)
heatmap = cv2.applyColorMap(heatmap, cv2.COLORMAP_JET)
overlayed_img = cv2.addWeighted(np.array(image), 0.5, heatmap, 0.5, 0)
# display the overlayed image
ax[i + 1].imshow(overlayed_img)
ax[i + 1].set_title(prompt)
plt.show()
# print(preds)
# Finally, let’s visualize the output.
_, ax = plt.subplots(1, len(prompts) + 1, figsize=(3 * (len(prompts) + 1), 4))
[a.axis("off") for a in ax.flatten()]
ax[0].imshow(image)
[ax[i + 1].imshow(torch.sigmoid(preds[i][0])) for i in range(len(prompts))]
[ax[i + 1].text(0, -15, prompt) for i, prompt in enumerate(prompts)]
fig, ax = plt.subplots(1, len(prompts) + 1, figsize=(3 * (len(prompts) + 1), 4))
for i in range(len(prompts)):
mask = masks[i]
ax[i + 1].imshow(mask, cmap="gray")
ax[i + 1].axis("off")
ax[i + 1].set_title(f"Mask {i+1}")
mask = mask.numpy()
mask = np.where(mask > 0.5, 1, 0).astype(np.uint8)
contours, hierarchy = cv2.findContours(
mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE
)
if len(contours) > 0:
areas = [cv2.contourArea(c) for c in contours]
max_index = np.argmax(areas)
contour = contours[max_index]
x, y, w, h = cv2.boundingRect(contour)
img_cropped = np.array(image.crop((x, y, x + w, y + h)))
ax[0].imshow(img_cropped)
ax[0].axis("off")
ax[0].set_title(f"Part of Image Corresponding to Mask {i+1}")
plt.show()
# Display the original image and the heat map masks
fig, axs = plt.subplots(1, len(prompts) + 1, figsize=(3 * (len(prompts) + 1), 4))
axs[0].imshow(image)
axs[0].axis("off")
for i, prompt in enumerate(prompts):
# Get the heat map for the prompt
heat_map = torch.sigmoid(preds[i][0]).detach().cpu().numpy()
# Resize the heat map to match the original image size
heat_map_resized = cv2.resize(heat_map, (image.size[0], image.size[1]))
# Apply a threshold to the heat map to get the mask
mask = (heat_map_resized > 0.5).astype(np.uint8)
# Apply the mask to the original image
masked_image = cv2.bitwise_and(np.array(image), np.array(image), mask=mask)
# Display the masked image
axs[i + 1].imshow(masked_image)
axs[i + 1].axis("off")
axs[i + 1].text(0, -15, prompt)
plt.show()
# # **Depth estimation**
# ---
#
from transformers import DPTImageProcessor, DPTForDepthEstimation
processor2 = DPTImageProcessor.from_pretrained("Intel/dpt-large")
model2 = DPTForDepthEstimation.from_pretrained("Intel/dpt-large")
# ## Testing the CLIPSeg mask for the second object (idx 1) in the prompt list
# Get the heat map for the prompt
heat_map = torch.sigmoid(preds[1][0]).detach().cpu().numpy()
# Resize the heat map to match the original image size
heat_map_resized = cv2.resize(heat_map, (image.size[0], image.size[1]))
# Apply a threshold to the heat map to get the mask
mask = (heat_map_resized > 0.5).astype(np.uint8)
# Apply the mask to the original image
masked_image = cv2.bitwise_and(np.array(image), np.array(image), mask=mask)
plt.axis("off")
plt.text(250, -10, prompts[1])  # label the figure with the prompt actually used (index 1)
plt.imshow(masked_image)
plt.imshow(mask, cmap="gray")
print(np.min(mask))
print(np.max(mask))
# ## Estimating the Depth
inputs = processor2(images=image, return_tensors="pt")
with torch.no_grad():
outputs2 = model2(**inputs)
predicted_depth = outputs2.predicted_depth
predicted_depth.shape
# interpolate to original size
prediction2 = torch.nn.functional.interpolate(
predicted_depth.unsqueeze(1),
size=image.size[::-1],
mode="bicubic",
align_corners=False,
)
# visualize the prediction
output2 = prediction2.squeeze().cpu().numpy()
formatted = (output2 * 255 / np.max(output2)).astype("uint8")
depth = Image.fromarray(formatted)
depth
# Apply the mask to the depth image
# depth_masked = cv2.bitwise_and(formatted, formatted, mask=mask)
print(np.shape(depth))
# ## Applying and preprocessing the CLIPSeg `mask` on the `depth` image output by DPT
# Apply the mask to the depth image
# print(np.min(depth))
# print(np.max(depth))
masked_depth = cv2.bitwise_and(np.array(depth), np.array(depth), mask=mask)
# Display the masked depth image
print(np.shape(masked_depth))
plt.imshow(masked_depth, cmap="gray")
plt.axis("off")
plt.show()
mask = np.uint8(mask) * 255
print(np.unique(mask))
# plt.imshow(mask,cmap='gray')
print(np.unique(mask))
x1, x2 = np.where(mask == 255)
print(len(x1))
print(len(x2))
depth = np.array(depth)
depth = np.expand_dims(depth, axis=2)
print(np.shape(depth))
plt.imshow(depth, cmap="gray", vmin=0, vmax=255)
depth_img = np.zeros((mask.shape[0], mask.shape[1], 1))
depth_img = np.uint8(depth_img) * 255
# plt.imshow(depth,cmap='gray')
img_array = np.array(image)
depth_img[x1, x2] = depth[x1, x2]
print(np.min(depth_img))
print(np.max(depth_img))
plt.imshow(depth_img, cmap="gray", vmin=0, vmax=255)
cv2.imwrite("depth_img.jpg", depth_img)
# Calculate the average depth value for the object in the mask
object_depth = np.mean(depth_img[depth_img != 0])
print(depth_img[depth_img != 0])
print(object_depth)
# Determine whether the object is close or far based on its depth value
if object_depth > 60:
print("The object is close")
else:
print("The object is far")
plt.imshow(image)
# ## Final prediction:
# * List of objects detected using the text prompts as the search query
# * Depth estimation for each detected object, to tell whether it is near or far
#
# Display the original image and the heat map masks
fig, axs = plt.subplots(1, len(prompts) + 1, figsize=(3 * (len(prompts) + 1), 4))
axs[0].imshow(depth, cmap="gray", vmin=0, vmax=255)
axs[0].axis("off")
for i, prompt in enumerate(prompts):
# Get the heat map for the prompt
heat_map = torch.sigmoid(preds[i][0]).detach().cpu().numpy()
# Resize the heat map to match the original image size
heat_map_resized = cv2.resize(heat_map, (image.size[0], image.size[1]))
# Apply a threshold to the heat map to get the mask
mask = (heat_map_resized > 0.5).astype(np.uint8)
# Apply the mask to the original image
masked_depth = cv2.bitwise_and(np.array(depth), np.array(depth), mask=mask)
# masked_depth = mask * np.array(depth)
# Calculate the average depth value for the object in the mask
object_depth = np.mean(masked_depth[masked_depth != 0])
print(masked_depth[masked_depth != 0])
print(object_depth)
# Determine whether the object is close or far based on its depth value
if np.count_nonzero(masked_depth) == 0:
print(f"The {prompt} is not found")
elif object_depth > 60:
print(f"The {prompt} object is close")
else:
print(f"The {prompt} object is far")
print("_________________________________________")
# Display the masked image
axs[i + 1].imshow(masked_depth, cmap="gray", vmin=0, vmax=255)
axs[i + 1].axis("off")
axs[i + 1].text(0, -15, prompt)
plt.show()
| false | 0 | 3,188 | 0 | 3,188 | 3,188 |
||
129191688
|
<jupyter_start><jupyter_text>Sentiment Analysis of Commodity News (Gold)
### Context
This is a news dataset for the commodity market where we have manually annotated 10,000+ news headlines across multiple dimensions into various classes. The dataset has been sampled from a period of 20+ years (2000-2021).
### Content
The dataset has been collected from various news sources and annotated by three human annotators who were subject experts. Each news headline was evaluated on various dimensions, for instance - if a headline is a price related news then what is the direction of price movements it is talking about; whether the news headline is talking about the past or future; whether the news item is talking about asset comparison; etc.
Kaggle dataset identifier: sentiment-analysis-in-commodity-market-gold
<jupyter_script># 导入需要用的库
# numpy,pandas用于数据处理
import numpy as np
import pandas as pd
# sklearn用于机械学习
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.model_selection import train_test_split
from sklearn.metrics import (
accuracy_score,
f1_score,
recall_score,
precision_score,
confusion_matrix,
)
from sklearn.svm import SVC
import re
# Import the nltk library
import nltk
from nltk.stem import WordNetLemmatizer
# Import collections
from collections import Counter
# matplotlib for plotting
import matplotlib.pyplot as plt
# Import the word-cloud library
from wordcloud import WordCloud
# Load the data
df = pd.read_csv(
"/kaggle/input/sentiment-analysis-in-commodity-market-gold/gold-dataset-sinha-khandait.csv"
)
df = df[df["Price Sentiment"] != "none"]
head = df["News"]
polarity = df["Price Sentiment"].tolist()
# Clean the data: strip out the meaningless punctuation
refine_head = []
for item in head:
item = re.sub("@\S+", "", item)
item = re.sub("http\S+\s*", "", item)
item = re.sub("[%s]" % re.escape("""!"#$%&'()*+,-./:;<=>?@[\]^_`{|}~"""), "", item)
refine_head.append(item)
# Use TF-IDF as the feature-extraction method (the feature engineering happens here: the features are built from the corresponding X texts)
# Looking up the TfidfVectorizer documentation helps to understand this step
tf_idfvectorizer = TfidfVectorizer(sublinear_tf=True, use_idf=True)
# Use train_test_split to divide the original data into a training set and a test set; test_size is the test-set share
X_train, X_test, Y_train, Y_test = train_test_split(
refine_head, polarity, test_size=0.4
)
# These two calls put the input data into the format required for model training
# In sentiment analysis a ready-made external sentiment lexicon is usually available; this project simply uses the training set itself as the vocabulary
train_dic_tf_idf = tf_idfvectorizer.fit_transform(X_train)
test_dic_tf_idf = tf_idfvectorizer.transform(X_test)
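# Illustrative aside (not part of the original notebook): on a toy corpus, fit_transform learns the
# vocabulary and returns a sparse document-term matrix, while transform reuses that fitted vocabulary.
toy_vec = TfidfVectorizer()
toy_matrix = toy_vec.fit_transform(["gold prices rise", "gold prices fall"])
print(sorted(toy_vec.vocabulary_), toy_matrix.shape)  # 4 vocabulary terms, shape (2, 4)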
# Call SVC; the default kernel would also work - "rbf" is written out explicitly only to mark the difference from the original approach. Either is fine.
Model = SVC(kernel="rbf")
# Train (fit) the model so that the training set is aligned with the sentiment vocabulary.
Model.fit(train_dic_tf_idf, Y_train)
Y_predict = Model.predict(test_dic_tf_idf)
# Print the accuracy
print("Accuracy:", accuracy_score(Y_test, Y_predict))
labels = np.unique(Y_test)
m = confusion_matrix(Y_test, Y_predict, labels=labels)
cm = pd.DataFrame(m, index=labels, columns=labels)
cm.index = "Actual: " + cm.index
cm.columns = "Predicted: " + cm.columns
# Save the results
result = pd.DataFrame()
result["News"] = X_test
result["Actual Sentiment"] = Y_test
result["Predict Sentiment"] = Y_predict
result.to_csv("predicted.csv")
def token_clean(tokens):
    # Initialize a WordNetLemmatizer object
    lemmatizer = WordNetLemmatizer()
    # Convert all uppercase letters in the tokens to lowercase
    tokens = [token.lower() for token in tokens]
    # Convert plural words to their singular form
tokens = [lemmatizer.lemmatize(token) for token in tokens]
return tokens
def tokenize_process(input):
    # Turn the input sentences into tokens
tokens = []
for sentence in input:
sentence_tokens = nltk.word_tokenize(sentence)
tokens += sentence_tokens
return tokens
def sentence_classifiy(X, Y):
    # Split the token sets by sentiment class
positive_set = []
neutral_set = []
negative_set = []
for i in range(len(Y)):
if Y[i] == "positive":
positive_set.append(X[i])
elif Y[i] == "neutral":
neutral_set.append(X[i])
else:
negative_set.append(X[i])
return positive_set, negative_set, neutral_set
def word_distribution(input):
    # Count the distribution of the tokens
token_counts = Counter(token for token in input)
top_tokens = token_counts.most_common(10)
print("Top 10 tokens:")
for token, count in top_tokens:
print(f"{token}: {count}")
tokens, counts = zip(*top_tokens)
plt.bar(tokens, counts)
plt.xticks(rotation=45)
plt.xlabel("Token")
plt.ylabel("Count")
plt.title("Top 10 Tokens")
plt.show()
def wordcloud_generate(input):
    # Function that generates a word cloud
input = " ".join(input)
wordcloud = WordCloud(
width=800, height=600, background_color="white", max_words=50
).generate(input)
plt.imshow(wordcloud, interpolation="bilinear")
plt.axis("off")
plt.show()
def summarize(input):
    # Wrap the distribution statistics and the word cloud so they can be called together
temp = token_clean(tokenize_process(input))
temp = [
token
for token in temp
if token != "gold"
and token != "in"
and token != "a"
and token != "on"
and token != "at"
and token != "to"
and token != "or"
and token != "r"
and token != "u"
]
word_distribution(temp)
wordcloud_generate(temp)
def statistic_distribution(input):
    # Count the number of items in each class
    counts = {"negative": 0, "positive": 0, "neutral": 0}
    for item in input:
        counts[item] += 1
    # Draw a bar chart
    colors = ["blue", "orange", "green"]
    labels = ["negative", "positive", "neutral"]
    values = [counts[label] for label in labels]
    plt.bar(labels, values, color=colors)
    # Add the chart title and axis labels
    plt.title("Distribution of Data")
    plt.xlabel("Data Types")
    plt.ylabel("Counts")
    # Show the figure
    plt.show()
    # Print the counts
print(counts)
print("-" * 20)
# Word-frequency distribution in the training set
positive, negative, neutral = sentence_classifiy(X_train, Y_train)
print("Positive : ")
print("-" * 20)
summarize(positive)
print("Negative : ")
print("-" * 20)
summarize(negative)
print("Neutral : ")
print("-" * 20)
summarize(neutral)
# Actual word-frequency distribution in the test set
positive, negative, neutral = sentence_classifiy(X_test, Y_test)
print("Positive : ")
print("-" * 20)
summarize(positive)
print("Negative : ")
print("-" * 20)
summarize(negative)
print("Neutral : ")
print("-" * 20)
summarize(neutral)
# Predicted word-frequency distribution for the test set
# The two groups of plots differ very little, which suggests the model works well and the accuracy is high
positive, negative, neutral = sentence_classifiy(X_test, Y_predict)
print("Positive : ")
print("-" * 20)
summarize(positive)
print("Negative : ")
print("-" * 20)
summarize(negative)
print("Neutral : ")
print("-" * 20)
summarize(neutral)
print("statistic distribution of Y_train : ")
print("-" * 20)
statistic_distribution(Y_train)
print("\n")
print("statistic distribution of Y_test : ")
print("-" * 20)
statistic_distribution(Y_test)
print("\n")
print("statistic distribution of Y_predict : ")
print("-" * 20)
statistic_distribution(Y_predict)
print("\n")
|
/fsx/loubna/kaggle_data/kaggle-code-data/data/0129/191/129191688.ipynb
|
sentiment-analysis-in-commodity-market-gold
|
ankurzing
|
[{"Id": 129191688, "ScriptId": 38275620, "ParentScriptVersionId": NaN, "ScriptLanguageId": 9, "AuthorUserId": 14712357, "CreationDate": "05/11/2023 17:20:00", "VersionNumber": 4.0, "Title": "AI assignment3", "EvaluationDate": "05/11/2023", "IsChange": true, "TotalLines": 240.0, "LinesInsertedFromPrevious": 39.0, "LinesChangedFromPrevious": 0.0, "LinesUnchangedFromPrevious": 201.0, "LinesInsertedFromFork": 229.0, "LinesDeletedFromFork": 111.0, "LinesChangedFromFork": 0.0, "LinesUnchangedFromFork": 11.0, "TotalVotes": 1}]
|
[{"Id": 185019687, "KernelVersionId": 129191688, "SourceDatasetVersionId": 3744643}]
|
[{"Id": 3744643, "DatasetId": 1608405, "DatasourceVersionId": 3799105, "CreatorUserId": 1413741, "LicenseName": "Attribution-NonCommercial-NoDerivatives 4.0 International (CC BY-NC-ND 4.0)", "CreationDate": "06/04/2022 12:25:19", "VersionNumber": 5.0, "Title": "Sentiment Analysis of Commodity News (Gold)", "Slug": "sentiment-analysis-in-commodity-market-gold", "Subtitle": "The news headlines on Gold commodity have been classified into various classes.", "Description": "### Context\n\nThis is a news dataset for the commodity market where we have manually annotated 10,000+ news headlines across multiple dimensions into various classes. The dataset has been sampled from a period of 20+ years (2000-2021).\n\n\n### Content\n\nThe dataset has been collected from various news sources and annotated by three human annotators who were subject experts. Each news headline was evaluated on various dimensions, for instance - if a headline is a price related news then what is the direction of price movements it is talking about; whether the news headline is talking about the past or future; whether the news item is talking about asset comparison; etc.\n\n\n### Acknowledgements\n\nSinha, Ankur, and Tanmay Khandait. \"Impact of News on the Commodity Market: Dataset and Results.\" In Future of Information and Communication Conference, pp. 589-601. Springer, Cham, 2021.\n\nhttps://arxiv.org/abs/2009.04202\nSinha, Ankur, and Tanmay Khandait. \"Impact of News on the Commodity Market: Dataset and Results.\" arXiv preprint arXiv:2009.04202 (2020)\n\nWe would like to acknowledge the financial support provided by the India Gold Policy Centre (IGPC).\n\n\n### Inspiration\n\nCommodity prices are known to be quite volatile. Machine learning models that understand the commodity news well, will be able to provide an additional input to the short-term and long-term price forecasting models. The dataset will also be useful in creating news-based indicators for commodities.\n\nApart from researchers and practitioners working in the area of news analytics for commodities, the dataset will also be useful for researchers looking to evaluate their models on classification problems in the context of text-analytics. Some of the classes in the dataset are highly imbalanced and may pose challenges to the machine learning algorithms.", "VersionNotes": "Data Update 2022/06/04", "TotalCompressedBytes": 0.0, "TotalUncompressedBytes": 0.0}]
|
[{"Id": 1608405, "CreatorUserId": 1413741, "OwnerUserId": 1413741.0, "OwnerOrganizationId": NaN, "CurrentDatasetVersionId": 3744643.0, "CurrentDatasourceVersionId": 3799105.0, "ForumId": 1628841, "Type": 2, "CreationDate": "09/24/2021 08:08:39", "LastActivityDate": "09/24/2021", "TotalViews": 13147, "TotalDownloads": 1095, "TotalVotes": 25, "TotalKernels": 4}]
|
[{"Id": 1413741, "UserName": "ankurzing", "DisplayName": "Ankur Sinha", "RegisterDate": "11/13/2017", "PerformanceTier": 2}]
|
# Import the libraries we need
# numpy and pandas for data handling
import numpy as np
import pandas as pd
# sklearn for machine learning
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.model_selection import train_test_split
from sklearn.metrics import (
accuracy_score,
f1_score,
recall_score,
precision_score,
confusion_matrix,
)
from sklearn.svm import SVC
import re
# Import the nltk library
import nltk
from nltk.stem import WordNetLemmatizer
# Import collections
from collections import Counter
# matplotlib for plotting
import matplotlib.pyplot as plt
# Import the word-cloud library
from wordcloud import WordCloud
# Load the data
df = pd.read_csv(
"/kaggle/input/sentiment-analysis-in-commodity-market-gold/gold-dataset-sinha-khandait.csv"
)
df = df[df["Price Sentiment"] != "none"]
head = df["News"]
polarity = df["Price Sentiment"].tolist()
# Clean the data: strip out the meaningless punctuation
refine_head = []
for item in head:
item = re.sub("@\S+", "", item)
item = re.sub("http\S+\s*", "", item)
item = re.sub("[%s]" % re.escape("""!"#$%&'()*+,-./:;<=>?@[\]^_`{|}~"""), "", item)
refine_head.append(item)
# Use TF-IDF as the feature-extraction method (the feature engineering happens here: the features are built from the corresponding X texts)
# Looking up the TfidfVectorizer documentation helps to understand this step
tf_idfvectorizer = TfidfVectorizer(sublinear_tf=True, use_idf=True)
# Use train_test_split to divide the original data into a training set and a test set; test_size is the test-set share
X_train, X_test, Y_train, Y_test = train_test_split(
refine_head, polarity, test_size=0.4
)
# These two calls put the input data into the format required for model training
# In sentiment analysis a ready-made external sentiment lexicon is usually available; this project simply uses the training set itself as the vocabulary
train_dic_tf_idf = tf_idfvectorizer.fit_transform(X_train)
test_dic_tf_idf = tf_idfvectorizer.transform(X_test)
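# Illustrative aside (not part of the original notebook): on a toy corpus, fit_transform learns the
# vocabulary and returns a sparse document-term matrix, while transform reuses that fitted vocabulary.
toy_vec = TfidfVectorizer()
toy_matrix = toy_vec.fit_transform(["gold prices rise", "gold prices fall"])
print(sorted(toy_vec.vocabulary_), toy_matrix.shape)  # 4 vocabulary terms, shape (2, 4)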
# Call SVC; the default kernel would also work - "rbf" is written out explicitly only to mark the difference from the original approach. Either is fine.
Model = SVC(kernel="rbf")
# Train (fit) the model so that the training set is aligned with the sentiment vocabulary.
Model.fit(train_dic_tf_idf, Y_train)
Y_predict = Model.predict(test_dic_tf_idf)
# Print the accuracy
print("Accuracy:", accuracy_score(Y_test, Y_predict))
labels = np.unique(Y_test)
m = confusion_matrix(Y_test, Y_predict, labels=labels)
cm = pd.DataFrame(m, index=labels, columns=labels)
cm.index = "Actual: " + cm.index
cm.columns = "Predicted: " + cm.columns
# Save the results
result = pd.DataFrame()
result["News"] = X_test
result["Actual Sentiment"] = Y_test
result["Predict Sentiment"] = Y_predict
result.to_csv("predicted.csv")
def token_clean(tokens):
    # Initialize a WordNetLemmatizer object
    lemmatizer = WordNetLemmatizer()
    # Convert all uppercase letters in the tokens to lowercase
    tokens = [token.lower() for token in tokens]
    # Convert plural words to their singular form
tokens = [lemmatizer.lemmatize(token) for token in tokens]
return tokens
def tokenize_process(input):
    # Turn the input sentences into tokens
tokens = []
for sentence in input:
sentence_tokens = nltk.word_tokenize(sentence)
tokens += sentence_tokens
return tokens
def sentence_classifiy(X, Y):
    # Split the token sets by sentiment class
positive_set = []
neutral_set = []
negative_set = []
for i in range(len(Y)):
if Y[i] == "positive":
positive_set.append(X[i])
elif Y[i] == "neutral":
neutral_set.append(X[i])
else:
negative_set.append(X[i])
return positive_set, negative_set, neutral_set
def word_distribution(input):
    # Count the distribution of the tokens
token_counts = Counter(token for token in input)
top_tokens = token_counts.most_common(10)
print("Top 10 tokens:")
for token, count in top_tokens:
print(f"{token}: {count}")
tokens, counts = zip(*top_tokens)
plt.bar(tokens, counts)
plt.xticks(rotation=45)
plt.xlabel("Token")
plt.ylabel("Count")
plt.title("Top 10 Tokens")
plt.show()
def wordcloud_generate(input):
    # Function that generates a word cloud
input = " ".join(input)
wordcloud = WordCloud(
width=800, height=600, background_color="white", max_words=50
).generate(input)
plt.imshow(wordcloud, interpolation="bilinear")
plt.axis("off")
plt.show()
def summarize(input):
    # Wrap the distribution statistics and the word cloud so they can be called together
temp = token_clean(tokenize_process(input))
temp = [
token
for token in temp
if token != "gold"
and token != "in"
and token != "a"
and token != "on"
and token != "at"
and token != "to"
and token != "or"
and token != "r"
and token != "u"
]
word_distribution(temp)
wordcloud_generate(temp)
def statistic_distribution(input):
    # Count the number of items in each class
    counts = {"negative": 0, "positive": 0, "neutral": 0}
    for item in input:
        counts[item] += 1
    # Draw a bar chart
    colors = ["blue", "orange", "green"]
    labels = ["negative", "positive", "neutral"]
    values = [counts[label] for label in labels]
    plt.bar(labels, values, color=colors)
    # Add the chart title and axis labels
    plt.title("Distribution of Data")
    plt.xlabel("Data Types")
    plt.ylabel("Counts")
    # Show the figure
    plt.show()
    # Print the counts
print(counts)
print("-" * 20)
# Word-frequency distribution in the training set
positive, negative, neutral = sentence_classifiy(X_train, Y_train)
print("Positive : ")
print("-" * 20)
summarize(positive)
print("Negative : ")
print("-" * 20)
summarize(negative)
print("Neutral : ")
print("-" * 20)
summarize(neutral)
# Actual word-frequency distribution in the test set
positive, negative, neutral = sentence_classifiy(X_test, Y_test)
print("Positive : ")
print("-" * 20)
summarize(positive)
print("Negative : ")
print("-" * 20)
summarize(negative)
print("Neutral : ")
print("-" * 20)
summarize(neutral)
# Predicted word-frequency distribution for the test set
# The two groups of plots differ very little, which suggests the model works well and the accuracy is high
positive, negative, neutral = sentence_classifiy(X_test, Y_predict)
print("Positive : ")
print("-" * 20)
summarize(positive)
print("Negative : ")
print("-" * 20)
summarize(negative)
print("Neutral : ")
print("-" * 20)
summarize(neutral)
print("statistic distribution of Y_train : ")
print("-" * 20)
statistic_distribution(Y_train)
print("\n")
print("statistic distribution of Y_test : ")
print("-" * 20)
statistic_distribution(Y_test)
print("\n")
print("statistic distribution of Y_predict : ")
print("-" * 20)
statistic_distribution(Y_predict)
print("\n")
| false | 1 | 2,058 | 1 | 2,243 | 2,058 |
||
129072618
|
<jupyter_start><jupyter_text>Netflix Stock Price Data set 2002-2022
### Context
This is a Data set for Stock Price of Netflix .
This Data set start from 2002 to 2022 .
It was collected from [Yahoo Finance](https://finance.yahoo.com/quote/NFLX/).
### Source
[Yahoo Finance](https://finance.yahoo.com/quote/NFLX/)
Kaggle dataset identifier: netflix-stock-price-data-set-20022022
<jupyter_script># The main purpose of an LSTM network is to process and analyze sequences of data, so it should perform quite well on stock market. Obviously predicting Stock Market is close to impossible. There are many, many different factors which determine price and move of the stock, and we can't follow them all. Fluctuation of the market is an inseparable element. But still - Stock Market - especially past data - is great source of datasets which we can you to practice new things.
# ### LSTM
# LSTMs are a type of recurrent neural network (RNN) that are designed to better handle long-term dependencies in sequential data. The memory cell is controlled by three gates: the input gate, the forget gate, and the output gate. The input gate decides which values to update in the cell based on the current input, the forget gate decides which values to keep or discard from the previous cell state based on the current input, and the output gate decides which values from the current cell state to output. LSTMs are able to learn complex patterns in sequential data because they can selectively remember or forget information from previous inputs and states.
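# A minimal sketch of the gate arithmetic described above (an illustrative aside, not part of the original
# notebook; the weight packing order [input, forget, cell-candidate, output] is an assumption made for this demo):
import torch
def lstm_step(x_t, h_prev, c_prev, W, U, b):
    # W: (4*hidden, input_size), U: (4*hidden, hidden), b: (4*hidden,)
    gates = x_t @ W.T + h_prev @ U.T + b                            # (batch, 4*hidden)
    i, f, g, o = gates.chunk(4, dim=-1)
    i, f, o = torch.sigmoid(i), torch.sigmoid(f), torch.sigmoid(o)  # input, forget, output gates
    g = torch.tanh(g)                                               # candidate cell values
    c_t = f * c_prev + i * g                                        # selectively forget old memory and write new memory
    h_t = o * torch.tanh(c_t)                                       # gated view of the cell state
    return h_t, c_t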
# # TO DO
# * Hyperparameter tuning
# * Add better comments
# * Test on a totally new dataset
# * Delete the zeros from the plot legends
# * Check whether our model overfits (looking at the results, probably yes)
import pandas as pd
import seaborn as sns
import numpy as np
import matplotlib.pyplot as plt
import torch
import torch.nn as nn
from sklearn.preprocessing import MinMaxScaler
sns.set_style("darkgrid", {"axes.facecolor": ".9"})
sns.set_palette("flare")
# plt.style.use('dark_background')
# **As a first step, we will read the data, parse the dates, and preprocess the columns. Scaling is quite important - networks perform much better on scaled inputs. We also have to split our data.**
# Preprocessing (splitting and scaling our data)
def preprocessing(data):
l = int(data.shape[0] * 0.8)
train_data, test_data = data.iloc[:l, :], data.iloc[l:, :]
scaler = MinMaxScaler()
train_data = scaler.fit_transform(train_data)
test_data = scaler.transform(test_data)
return train_data, test_data, scaler
# Essential function for the LSTM. We generate Xs - sequences of past records - and Ys - the values that will be predicted
# from those Xs.
def create_sequences(data, seq_length):
X = []
y = []
for i in range(len(data) - seq_length):
X.append(data[i : i + seq_length])
y.append(data[i + seq_length])
return X, y
def plotter(y_pred, y_test):
fig, axes = plt.subplots(4, 1, figsize=(13, 12))
labels = {0: "Open Price", 1: "Close Price", 2: "Highest Price", 3: "Lowest Price"}
for i in range(4):
sns.lineplot(
y_test.numpy()[:, i].reshape(-1, 1),
ax=axes[i],
palette=["green"],
label="TEST",
)
sns.lineplot(
y_pred.numpy()[:, i].reshape(-1, 1),
ax=axes[i],
palette=["orange"],
label="PREDICTED",
)
axes[i].set_ylabel(f"{labels[i]}")
plt.show()
# Fitting model to our data
def fit(model, num_epochs, criterion, optimizer, X_train, y_train):
for epoch in range(num_epochs):
outputs = model(X_train) # forward function
loss = criterion(outputs, y_train) # loss calculation
optimizer.zero_grad() # Gradient reset
loss.backward() # Backward prop
optimizer.step() # Weight update
if (epoch + 1) % 10 == 0:
print(f"Epoch: {epoch+1}/{num_epochs}, Loss: {loss.item()}")
return model
# Predicting on our data
def predict(model, X):
with torch.no_grad():
model.eval()
        return model(X)  # predict on whatever tensor is passed in
# LSTM Model
class LSTMModel(nn.Module):
def __init__(self, input_size, hidden_size, output_size):
super(LSTMModel, self).__init__()
self.hidden_size = hidden_size
        # batch_first=True so the LSTMs read inputs as (batch, seq_len, features)
        self.lstm1 = nn.LSTM(input_size, hidden_size, batch_first=True)
        self.lstm2 = nn.LSTM(hidden_size, hidden_size, batch_first=True)
self.fc = nn.Linear(hidden_size, output_size)
def forward(self, x):
lstm_out1, _ = self.lstm1(x)
lstm_out2, _ = self.lstm2(lstm_out1)
output = self.fc(lstm_out2[:, -1, :])
return output
# Reading the data, selecting the relevant columns, parsing the dates
dataset = pd.read_csv(
"/kaggle/input/netflix-stock-price-data-set-20022022/NFLX.csv",
parse_dates=["Date"],
index_col=0,
)
data = dataset[["Open", "Close", "High", "Low"]]
# some params
seq_length = 5 # length of sequence
# Preprocessing
train_data, test_data, scaler = preprocessing(data)
# Creating sequences
X_train, y_train = create_sequences(train_data, seq_length)
X_test, y_test = create_sequences(test_data, seq_length)
# Shapes
display(np.array(X_train).shape)
display(np.array(X_test).shape)
# Converting our sequences into tensors, as required by PyTorch
X_train = torch.tensor(X_train).float()
y_train = torch.tensor(y_train).float()
X_test = torch.tensor(X_test).float()
y_test = torch.tensor(y_test).float()
# ##### We created 3D sequences of data. LSTM requires (Batch size, Sequence Length, Feature Dimension). Now we can start to build our model using PyTorch!
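# Quick sanity check (illustrative addition): the tensors built above are read by batch-first LSTMs as
# (batch, seq_len, num_features).
print("Train:", tuple(X_train.shape), "Test:", tuple(X_test.shape))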
# some params
input_size = X_train.shape[2]
hidden_size = 128
output_size = 4
learning_rate = 0.01
num_epochs = 100
model = LSTMModel(input_size, hidden_size, output_size)
criterion = nn.MSELoss()
optimizer = torch.optim.Adam(model.parameters(), lr=learning_rate)
# Fit to train_data
model = fit(model, num_epochs, criterion, optimizer, X_train, y_train)
# Predicting on test_data
y_pred = predict(model, X_test)
display(criterion(y_pred, y_test))
# Plot
plotter(y_pred, y_test)
|
/fsx/loubna/kaggle_data/kaggle-code-data/data/0129/072/129072618.ipynb
|
netflix-stock-price-data-set-20022022
|
meetnagadia
|
[{"Id": 129072618, "ScriptId": 38365937, "ParentScriptVersionId": NaN, "ScriptLanguageId": 9, "AuthorUserId": 13804198, "CreationDate": "05/10/2023 19:01:22", "VersionNumber": 1.0, "Title": "Predicting Netflix stock price using LSTM(PyTorch)", "EvaluationDate": "05/10/2023", "IsChange": true, "TotalLines": 149.0, "LinesInsertedFromPrevious": 149.0, "LinesChangedFromPrevious": 0.0, "LinesUnchangedFromPrevious": 0.0, "LinesInsertedFromFork": NaN, "LinesDeletedFromFork": NaN, "LinesChangedFromFork": NaN, "LinesUnchangedFromFork": NaN, "TotalVotes": 0}]
|
[{"Id": 184804437, "KernelVersionId": 129072618, "SourceDatasetVersionId": 3752853}]
|
[{"Id": 3752853, "DatasetId": 1857773, "DatasourceVersionId": 3807349, "CreatorUserId": 6641125, "LicenseName": "Database: Open Database, Contents: \u00a9 Original Authors", "CreationDate": "06/06/2022 07:01:35", "VersionNumber": 2.0, "Title": "Netflix Stock Price Data set 2002-2022", "Slug": "netflix-stock-price-data-set-20022022", "Subtitle": "Netflix Stock data set", "Description": "### Context\nThis is a Data set for Stock Price of Netflix .\nThis Data set start from 2002 to 2022 . \nIt was collected from [Yahoo Finance](https://finance.yahoo.com/quote/NFLX/).\n### Source\n[Yahoo Finance](https://finance.yahoo.com/quote/NFLX/)", "VersionNotes": "Data Update 2022/06/06", "TotalCompressedBytes": 0.0, "TotalUncompressedBytes": 0.0}]
|
[{"Id": 1857773, "CreatorUserId": 6641125, "OwnerUserId": 6641125.0, "OwnerOrganizationId": NaN, "CurrentDatasetVersionId": 3752853.0, "CurrentDatasourceVersionId": 3807349.0, "ForumId": 1880767, "Type": 2, "CreationDate": "01/12/2022 05:28:11", "LastActivityDate": "01/12/2022", "TotalViews": 17323, "TotalDownloads": 2746, "TotalVotes": 68, "TotalKernels": 6}]
|
[{"Id": 6641125, "UserName": "meetnagadia", "DisplayName": "Meet Nagadia", "RegisterDate": "02/02/2021", "PerformanceTier": 2}]
|
# The main purpose of an LSTM network is to process and analyze sequences of data, so it should be a reasonable fit for stock-market series. Obviously, predicting the stock market is close to impossible: many different factors determine the price and movement of a stock, and we can't follow them all, so fluctuation is an inseparable element of the market. Still, the stock market - especially its past data - is a great source of datasets which we can use to practice new things.
# ### LSTM
# LSTMs are a type of recurrent neural network (RNN) that are designed to better handle long-term dependencies in sequential data. The memory cell is controlled by three gates: the input gate, the forget gate, and the output gate. The input gate decides which values to update in the cell based on the current input, the forget gate decides which values to keep or discard from the previous cell state based on the current input, and the output gate decides which values from the current cell state to output. LSTMs are able to learn complex patterns in sequential data because they can selectively remember or forget information from previous inputs and states.
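# A minimal sketch of the gate arithmetic described above (an illustrative aside, not part of the original
# notebook; the weight packing order [input, forget, cell-candidate, output] is an assumption made for this demo):
import torch
def lstm_step(x_t, h_prev, c_prev, W, U, b):
    # W: (4*hidden, input_size), U: (4*hidden, hidden), b: (4*hidden,)
    gates = x_t @ W.T + h_prev @ U.T + b                            # (batch, 4*hidden)
    i, f, g, o = gates.chunk(4, dim=-1)
    i, f, o = torch.sigmoid(i), torch.sigmoid(f), torch.sigmoid(o)  # input, forget, output gates
    g = torch.tanh(g)                                               # candidate cell values
    c_t = f * c_prev + i * g                                        # selectively forget old memory and write new memory
    h_t = o * torch.tanh(c_t)                                       # gated view of the cell state
    return h_t, c_t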
# # TO DO
# * Hyperparameter tuning
# * Add better comments
# * Test on a totally new dataset
# * Delete the zeros from the plot legends
# * Check whether our model overfits (looking at the results, probably yes)
import pandas as pd
import seaborn as sns
import numpy as np
import matplotlib.pyplot as plt
import torch
import torch.nn as nn
from sklearn.preprocessing import MinMaxScaler
sns.set_style("darkgrid", {"axes.facecolor": ".9"})
sns.set_palette("flare")
# plt.style.use('dark_background')
# **As a first step, we will read the data, parse the dates, and preprocess the columns. Scaling is quite important - networks perform much better on scaled inputs. We also have to split our data.**
# Preprocessing (splitting and scaling our data)
def preprocessing(data):
l = int(data.shape[0] * 0.8)
train_data, test_data = data.iloc[:l, :], data.iloc[l:, :]
scaler = MinMaxScaler()
train_data = scaler.fit_transform(train_data)
test_data = scaler.transform(test_data)
return train_data, test_data, scaler
# Essential function for the LSTM. We generate Xs - sequences of past records - and Ys - the values that will be predicted
# from those Xs.
def create_sequences(data, seq_length):
X = []
y = []
for i in range(len(data) - seq_length):
X.append(data[i : i + seq_length])
y.append(data[i + seq_length])
return X, y
def plotter(y_pred, y_test):
fig, axes = plt.subplots(4, 1, figsize=(13, 12))
labels = {0: "Open Price", 1: "Close Price", 2: "Highest Price", 3: "Lowest Price"}
for i in range(4):
sns.lineplot(
y_test.numpy()[:, i].reshape(-1, 1),
ax=axes[i],
palette=["green"],
label="TEST",
)
sns.lineplot(
y_pred.numpy()[:, i].reshape(-1, 1),
ax=axes[i],
palette=["orange"],
label="PREDICTED",
)
axes[i].set_ylabel(f"{labels[i]}")
plt.show()
# Fitting model to our data
def fit(model, num_epochs, criterion, optimizer, X_train, y_train):
for epoch in range(num_epochs):
outputs = model(X_train) # forward function
loss = criterion(outputs, y_train) # loss calculation
optimizer.zero_grad() # Gradient reset
loss.backward() # Backward prop
optimizer.step() # Weight update
if (epoch + 1) % 10 == 0:
print(f"Epoch: {epoch+1}/{num_epochs}, Loss: {loss.item()}")
return model
# Predicting on our data
def predict(model, X):
with torch.no_grad():
model.eval()
        return model(X)  # predict on whatever tensor is passed in
# LSTM Model
class LSTMModel(nn.Module):
def __init__(self, input_size, hidden_size, output_size):
super(LSTMModel, self).__init__()
self.hidden_size = hidden_size
        # batch_first=True so the LSTMs read inputs as (batch, seq_len, features)
        self.lstm1 = nn.LSTM(input_size, hidden_size, batch_first=True)
        self.lstm2 = nn.LSTM(hidden_size, hidden_size, batch_first=True)
self.fc = nn.Linear(hidden_size, output_size)
def forward(self, x):
lstm_out1, _ = self.lstm1(x)
lstm_out2, _ = self.lstm2(lstm_out1)
output = self.fc(lstm_out2[:, -1, :])
return output
# Reading the data, selecting the relevant columns, parsing the dates
dataset = pd.read_csv(
"/kaggle/input/netflix-stock-price-data-set-20022022/NFLX.csv",
parse_dates=["Date"],
index_col=0,
)
data = dataset[["Open", "Close", "High", "Low"]]
# some params
seq_length = 5 # length of sequence
# Preprocessing
train_data, test_data, scaler = preprocessing(data)
# Creating sequences
X_train, y_train = create_sequences(train_data, seq_length)
X_test, y_test = create_sequences(test_data, seq_length)
# Shapes
display(np.array(X_train).shape)
display(np.array(X_test).shape)
# Converting our sequences into tensors, as required by PyTorch
X_train = torch.tensor(X_train).float()
y_train = torch.tensor(y_train).float()
X_test = torch.tensor(X_test).float()
y_test = torch.tensor(y_test).float()
# ##### We created 3D sequences of data. LSTM requires (Batch size, Sequence Length, Feature Dimension). Now we can start to build our model using PyTorch!
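# Quick sanity check (illustrative addition): the tensors built above are read by batch-first LSTMs as
# (batch, seq_len, num_features).
print("Train:", tuple(X_train.shape), "Test:", tuple(X_test.shape))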
# some params
input_size = X_train.shape[2]
hidden_size = 128
output_size = 4
learning_rate = 0.01
num_epochs = 100
model = LSTMModel(input_size, hidden_size, output_size)
criterion = nn.MSELoss()
optimizer = torch.optim.Adam(model.parameters(), lr=learning_rate)
# Fit to train_data
model = fit(model, num_epochs, criterion, optimizer, X_train, y_train)
# Predicting on test_data
y_pred = predict(model, X_test)
display(criterion(y_pred, y_test))
# Plot
plotter(y_pred, y_test)
| false | 1 | 1,644 | 0 | 1,774 | 1,644 |
||
129072301
|
<jupyter_start><jupyter_text>youtube_api_dataset_v2
Kaggle dataset identifier: youtube-api-dataset-v2
<jupyter_script>import pandas as pd
df = pd.read_csv("/kaggle/input/youtube-api-dataset-v2/data_set.csv")
df.head()
df.shape
X = df[
["duration", "videoAge", "subscribers", "totalVideos", "totalViews", "channelAge"]
]
X
y = df["views"]
y
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)
X_train.shape, X_test.shape, y_train.shape, y_test.shape
from sklearn.preprocessing import StandardScaler
scaler = StandardScaler()
# fit on the training set, then apply the same transform to the test set
X_train = scaler.fit_transform(X_train)
X_test = scaler.transform(X_test)  # transform only, to avoid fitting the scaler on test data
# the features have been standardized to zero mean and unit variance (not squeezed into [0, 1])
print("Max: ", X_train.max())
print("Min: ", X_train.min())
X_train
X_test
from keras.models import Sequential
from keras.layers import Dense
import keras.metrics
import keras.optimizers
import keras.losses
import keras.metrics
# from keras import backend as K
# def coeff_determination(y_true, y_pred):
# SS_res = K.sum(K.square( y_true-y_pred ))
# SS_tot = K.sum(K.square( y_true - K.mean(y_true) ) )
model = Sequential()
# input layer
model.add(keras.Input(shape=(6,)))  # six input features
# hidden layers
model.add(Dense(100, activation="relu"))
model.add(Dense(1000, activation="relu"))
model.add(Dense(1000, activation="relu"))
model.add(Dense(100, activation="relu"))
# output layer
model.add(Dense(1))
model.compile(optimizer=keras.optimizers.Adam(learning_rate=0.005), loss="mse")
hist = model.fit(x=X_train, y=y_train, batch_size=32, epochs=200, validation_split=0.2)
model.summary()
import matplotlib.pyplot as plt
plt.plot(hist.history["loss"])
plt.plot(hist.history["val_loss"])
plt.xlabel("epochs")
plt.ylabel("Loss")
plt.title("Model Loss")
plt.legend(["training loss", "validation loss"])
plt.show()
y_pred = model.predict(X_test)
y_pred.shape
y_pred = y_pred.reshape((-1,))
y_pred.shape
model.evaluate(x=X_test, y=y_test)
from sklearn.metrics import r2_score, mean_squared_error
r2 = r2_score(y_test, y_pred)
mse = mean_squared_error(y_true=y_test, y_pred=y_pred)
r2, mse
plt.figure()
plt.plot(y_test[:50])
plt.plot(y_pred[:50])
plt.xlabel("Training sample")
plt.ylabel("Views")
plt.title("Actual and predicted output")
plt.legend(["Actual Views", "Predicted Views"])
plt.show()
plt.figure(figsize=(10, 10))
plt.scatter(y_test, y_pred, c="crimson")
plt.yscale("log")
plt.xscale("log")
p1 = max(max(y_pred), max(y_test))
p2 = min(min(y_pred), min(y_test))
plt.plot([p1, p2], [p1, p2], "b-")
plt.xlabel("True Values", fontsize=15)
plt.ylabel("Predictions", fontsize=15)
plt.axis("equal")
plt.show()
|
/fsx/loubna/kaggle_data/kaggle-code-data/data/0129/072/129072301.ipynb
|
youtube-api-dataset-v2
|
amitsingh1555
|
[{"Id": 129072301, "ScriptId": 38369394, "ParentScriptVersionId": NaN, "ScriptLanguageId": 9, "AuthorUserId": 13747440, "CreationDate": "05/10/2023 18:56:15", "VersionNumber": 1.0, "Title": "notebookeb6153c218", "EvaluationDate": "05/10/2023", "IsChange": true, "TotalLines": 125.0, "LinesInsertedFromPrevious": 125.0, "LinesChangedFromPrevious": 0.0, "LinesUnchangedFromPrevious": 0.0, "LinesInsertedFromFork": NaN, "LinesDeletedFromFork": NaN, "LinesChangedFromFork": NaN, "LinesUnchangedFromFork": NaN, "TotalVotes": 0}]
|
[{"Id": 184803974, "KernelVersionId": 129072301, "SourceDatasetVersionId": 5658096}]
|
[{"Id": 5658096, "DatasetId": 3251885, "DatasourceVersionId": 5733505, "CreatorUserId": 13747440, "LicenseName": "Unknown", "CreationDate": "05/10/2023 18:49:09", "VersionNumber": 1.0, "Title": "youtube_api_dataset_v2", "Slug": "youtube-api-dataset-v2", "Subtitle": NaN, "Description": NaN, "VersionNotes": "Initial release", "TotalCompressedBytes": 0.0, "TotalUncompressedBytes": 0.0}]
|
[{"Id": 3251885, "CreatorUserId": 13747440, "OwnerUserId": 13747440.0, "OwnerOrganizationId": NaN, "CurrentDatasetVersionId": 5658096.0, "CurrentDatasourceVersionId": 5733505.0, "ForumId": 3317293, "Type": 2, "CreationDate": "05/10/2023 18:49:09", "LastActivityDate": "05/10/2023", "TotalViews": 51, "TotalDownloads": 1, "TotalVotes": 0, "TotalKernels": 3}]
|
[{"Id": 13747440, "UserName": "amitsingh1555", "DisplayName": "Amit Singh 1555", "RegisterDate": "02/17/2023", "PerformanceTier": 0}]
|
import pandas as pd
df = pd.read_csv("/kaggle/input/youtube-api-dataset-v2/data_set.csv")
df.head()
df.shape
X = df[
["duration", "videoAge", "subscribers", "totalVideos", "totalViews", "channelAge"]
]
X
y = df["views"]
y
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)
X_train.shape, X_test.shape, y_train.shape, y_test.shape
from sklearn.preprocessing import StandardScaler
scaler = StandardScaler()
# fit on the training set, then apply the same transform to the test set
X_train = scaler.fit_transform(X_train)
X_test = scaler.transform(X_test)  # transform only, to avoid fitting the scaler on test data
# the features have been standardized to zero mean and unit variance (not squeezed into [0, 1])
print("Max: ", X_train.max())
print("Min: ", X_train.min())
X_train
X_test
from keras.models import Sequential
from keras.layers import Dense
import keras.metrics
import keras.optimizers
import keras.losses
import keras.metrics
# from keras import backend as K
# def coeff_determination(y_true, y_pred):
# SS_res = K.sum(K.square( y_true-y_pred ))
# SS_tot = K.sum(K.square( y_true - K.mean(y_true) ) )
model = Sequential()
# input layer
model.add(keras.Input(shape=(6,)))  # six input features
# hidden layers
model.add(Dense(100, activation="relu"))
model.add(Dense(1000, activation="relu"))
model.add(Dense(1000, activation="relu"))
model.add(Dense(100, activation="relu"))
# output layer
model.add(Dense(1))
model.compile(optimizer=keras.optimizers.Adam(learning_rate=0.005), loss="mse")
hist = model.fit(x=X_train, y=y_train, batch_size=32, epochs=200, validation_split=0.2)
model.summary()
import matplotlib.pyplot as plt
plt.plot(hist.history["loss"])
plt.plot(hist.history["val_loss"])
plt.xlabel("epochs")
plt.ylabel("Loss")
plt.title("Model Loss")
plt.legend(["training loss", "validation loss"])
plt.show()
y_pred = model.predict(X_test)
y_pred.shape
y_pred = y_pred.reshape((-1,))
y_pred.shape
model.evaluate(x=X_test, y=y_test)
from sklearn.metrics import r2_score, mean_squared_error
r2 = r2_score(y_test, y_pred)
mse = mean_squared_error(y_true=y_test, y_pred=y_pred)
r2, mse
plt.figure()
plt.plot(y_test[:50])
plt.plot(y_pred[:50])
plt.xlabel("Training sample")
plt.ylabel("Views")
plt.title("Actual and predicted output")
plt.legend(["Actual Views", "Predicted Views"])
plt.show()
plt.figure(figsize=(10, 10))
plt.scatter(y_test, y_pred, c="crimson")
plt.yscale("log")
plt.xscale("log")
p1 = max(max(y_pred), max(y_test))
p2 = min(min(y_pred), min(y_test))
plt.plot([p1, p2], [p1, p2], "b-")
plt.xlabel("True Values", fontsize=15)
plt.ylabel("Predictions", fontsize=15)
plt.axis("equal")
plt.show()
| false | 1 | 948 | 0 | 978 | 948 |
||
129072743
|
import numpy as np # linear algebra
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)
import os
for dirname, _, filenames in os.walk("/kaggle/input"):
for filename in filenames:
print(os.path.join(dirname, filename))
pd.read_csv(
"/kaggle/input/siga-aneel/siga-empreendimentos-geracao.csv",
usecols=lambda x: x != "_id",
)
dados = pd.read_csv(
"/kaggle/input/siga-aneel/siga-empreendimentos-geracao.csv",
usecols=lambda x: x != "_id",
)
dados.head(2)
# Recall that:
# 
# 
# 
dados.info()
# # Simple random sampling
dados.shape
dados.DscFaseUsina.value_counts(normalize=True)
amostra = dados.sample(n=1000, random_state=101)
amostra.head(2)
amostra.DscFaseUsina.value_counts(normalize=True)
# # Estimation
potencia_5000 = dados.query("MdaPotenciaFiscalizadaKw <= 5000").MdaPotenciaFiscalizadaKw
sigma = potencia_5000.std()
sigma
media = potencia_5000.mean()
media
from scipy.stats import norm
z = norm.ppf(0.975)
e = 10
n = (z * (sigma / e)) ** 2
n = int(n.round())
n
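# The cell above applies the standard sample-size formula for estimating a mean with known sigma:
#     n = (z_(alpha/2) * sigma / e)^2
# where z_(alpha/2) = norm.ppf(0.975) ≈ 1.96 corresponds to a 95% confidence level and e is the tolerated
# error (10 kW here). A small helper packaging the same computation (illustrative addition, not in the
# original notebook):
def sample_size(sigma, e, confidence=0.95):
    z = norm.ppf(0.5 + confidence / 2)
    return int(round((z * sigma / e) ** 2))
print(sample_size(sigma, e))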
|
/fsx/loubna/kaggle_data/kaggle-code-data/data/0129/072/129072743.ipynb
| null | null |
[{"Id": 129072743, "ScriptId": 38366799, "ParentScriptVersionId": NaN, "ScriptLanguageId": 9, "AuthorUserId": 10107352, "CreationDate": "05/10/2023 19:03:07", "VersionNumber": 1.0, "Title": "estatistica_parte_2", "EvaluationDate": "05/10/2023", "IsChange": true, "TotalLines": 51.0, "LinesInsertedFromPrevious": 51.0, "LinesChangedFromPrevious": 0.0, "LinesUnchangedFromPrevious": 0.0, "LinesInsertedFromFork": NaN, "LinesDeletedFromFork": NaN, "LinesChangedFromFork": NaN, "LinesUnchangedFromFork": NaN, "TotalVotes": 0}]
| null | null | null | null |
import numpy as np # linear algebra
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)
import os
for dirname, _, filenames in os.walk("/kaggle/input"):
for filename in filenames:
print(os.path.join(dirname, filename))
pd.read_csv(
"/kaggle/input/siga-aneel/siga-empreendimentos-geracao.csv",
usecols=lambda x: x != "_id",
)
dados = pd.read_csv(
"/kaggle/input/siga-aneel/siga-empreendimentos-geracao.csv",
usecols=lambda x: x != "_id",
)
dados.head(2)
# Recall that:
# 
# 
# 
dados.info()
# # Simple random sampling
dados.shape
dados.DscFaseUsina.value_counts(normalize=True)
amostra = dados.sample(n=1000, random_state=101)
amostra.head(2)
amostra.DscFaseUsina.value_counts(normalize=True)
# # Estimation
potencia_5000 = dados.query("MdaPotenciaFiscalizadaKw <= 5000").MdaPotenciaFiscalizadaKw
sigma = potencia_5000.std()
sigma
media = potencia_5000.mean()
media
from scipy.stats import norm
z = norm.ppf(0.975)
e = 10
n = (z * (sigma / e)) ** 2
n = int(n.round())
n
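# The cell above applies the standard sample-size formula for estimating a mean with known sigma:
#     n = (z_(alpha/2) * sigma / e)^2
# where z_(alpha/2) = norm.ppf(0.975) ≈ 1.96 corresponds to a 95% confidence level and e is the tolerated
# error (10 kW here). A small helper packaging the same computation (illustrative addition, not in the
# original notebook):
def sample_size(sigma, e, confidence=0.95):
    z = norm.ppf(0.5 + confidence / 2)
    return int(round((z * sigma / e) ** 2))
print(sample_size(sigma, e))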
| false | 0 | 542 | 0 | 542 | 542 |
||
129072951
|
<jupyter_start><jupyter_text>Books Dataset
### Context
Books read by users and ratings provided by them on Amazon
### Content
Online data for books from Amazon along with user ratings and users who bought them
Kaggle dataset identifier: books-dataset
<jupyter_script># # A BOOKISH DATASET
# **Context:**
# There are so many potential questions we could explore with this dataset, but the question that piqued my interest is: Are there any correlations between user demographics (age, gender, location) and book preferences? Do certain types of users tend to prefer certain types of books?
# **Description of the dataset:**
# This dataset has been compiled by Cai-Nicolas Ziegler (2004).
# Inside, there are three tables for users, books and ratings.
# *Lets get started!*
# First, we are gonna import all our libraries, and then proceed to evaluate the dataset.
# We are gonna be analyzing the user data (demographic information) alongside the ratings data, and looking for correlations between demographic factors and book preferences.
# For example, do younger users tend to prefer certain genres of books, or are there regional differences in book preferences?
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
from sklearn.preprocessing import LabelEncoder
# databases
books_df = pd.read_csv(
"../input/books-dataset/books_data/books.csv",
sep=";",
error_bad_lines=False,
encoding="latin-1",
)
ratings_df = pd.read_csv(
"../input/books-dataset/books_data/ratings.csv",
sep=";",
error_bad_lines=False,
encoding="latin-1",
)
users_df = pd.read_csv(
"../input/books-dataset/books_data/users.csv",
sep=";",
error_bad_lines=False,
encoding="latin-1",
)
# For anyone else using this dataset: there is a problem with the encoding, so try latin-1.
# It's a stubborn dataset file - the default encoding used by pandas (utf-8) fails here, and latin-1 works around it.
# We are also not gonna use the Image URL so we are dropping that at once
books_df.drop(["Image-URL-S", "Image-URL-M", "Image-URL-L"], axis=1, inplace=True)
books_df.head(5)
ratings_df.head(5)
users_df.head(5)
# we see some NA values in the users' Age column, so we are gonna take care of that
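# Illustrative check (not in the original notebook): count the missing entries per column;
# as noted above, the NaNs sit in the Age column.
print(users_df.isna().sum())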
users_df = users_df.fillna(0)
users_df_drop_1 = users_df.dropna()
users_df = users_df.replace({"%": ""}, regex=True)
print(users_df.head(5))
users_df.head(5)
|
/fsx/loubna/kaggle_data/kaggle-code-data/data/0129/072/129072951.ipynb
|
books-dataset
|
saurabhbagchi
|
[{"Id": 129072951, "ScriptId": 38327634, "ParentScriptVersionId": NaN, "ScriptLanguageId": 9, "AuthorUserId": 14338641, "CreationDate": "05/10/2023 19:05:46", "VersionNumber": 2.0, "Title": "Book dataset", "EvaluationDate": "05/10/2023", "IsChange": true, "TotalLines": 52.0, "LinesInsertedFromPrevious": 46.0, "LinesChangedFromPrevious": 0.0, "LinesUnchangedFromPrevious": 6.0, "LinesInsertedFromFork": NaN, "LinesDeletedFromFork": NaN, "LinesChangedFromFork": NaN, "LinesUnchangedFromFork": NaN, "TotalVotes": 0}]
|
[{"Id": 184804861, "KernelVersionId": 129072951, "SourceDatasetVersionId": 1546766}]
|
[{"Id": 1546766, "DatasetId": 912577, "DatasourceVersionId": 1581517, "CreatorUserId": 168670, "LicenseName": "CC0: Public Domain", "CreationDate": "10/09/2020 05:14:41", "VersionNumber": 1.0, "Title": "Books Dataset", "Slug": "books-dataset", "Subtitle": "Subset of the books available in Amazon", "Description": "### Context\n\nBooks read by users and ratings provided by them on Amazon\n\n\n### Content\n\nOnline data for books from Amazon along with user ratings and users who bought them\n\n\n### Acknowledgements\n\nPrimarily for building recommender systems.\nThis dataset has been compiled by Cai-Nicolas Ziegler in 2004, and it comprises of three tables for users, books and ratings. \nExplicit ratings are expressed on a scale from 1-10 (higher values denoting higher appreciation) and implicit rating is expressed by 0\nhttp://www2.informatik.uni-freiburg.de/~cziegler/BX/\n\n\n### Inspiration\n\nCan we select and recommend the top 10 books for each user based on past purchase behavior?", "VersionNotes": "Initial release", "TotalCompressedBytes": 0.0, "TotalUncompressedBytes": 0.0}]
|
[{"Id": 912577, "CreatorUserId": 168670, "OwnerUserId": 168670.0, "OwnerOrganizationId": NaN, "CurrentDatasetVersionId": 1546766.0, "CurrentDatasourceVersionId": 1581517.0, "ForumId": 928361, "Type": 2, "CreationDate": "10/09/2020 05:14:41", "LastActivityDate": "10/09/2020", "TotalViews": 46601, "TotalDownloads": 6190, "TotalVotes": 53, "TotalKernels": 5}]
|
[{"Id": 168670, "UserName": "saurabhbagchi", "DisplayName": "Old Monk", "RegisterDate": "02/24/2014", "PerformanceTier": 3}]
|
| false | 3 | 610 | 0 | 666 | 610 |
||
129072503
|
<jupyter_start><jupyter_text>enthlaphy of vaporization
Kaggle dataset identifier: enthlaphy-of-vaporization
<jupyter_script># Import libraries
import pandas as pd
import numpy as np
import pickle
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error
from xgboost import XGBRegressor
df = pd.read_excel(
"/kaggle/input/enthlaphy-of-vaporization/Enthalpy of vaporization(1).xlsx", 0
)
df.head()
df.shape
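# Before fitting anything, a quick look at the data never hurts. The sketch below
# (illustrative only) plots the enthalpy of vaporization of the first compound
# against temperature, assuming the sheet has a "Temp" column and per-compound
# columns named "cmp1" ... "cmp13" as used throughout this notebook.
import matplotlib.pyplot as plt

plt.figure(figsize=(6, 4))
plt.scatter(df["Temp"], df["cmp1"], s=10)
plt.xlabel("Temperature")
plt.ylabel("Enthalpy of vaporization (cmp1)")
plt.title("cmp1 (α-Pinene) vs. temperature")
plt.show()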
# **1.ML model for α-Pinene**
# Load the data
X = df["Temp"]
y = df["cmp1"]
# Split the data into train and test sets
X_train, X_test, y_train, y_test = train_test_split(
X, y, test_size=0.2, random_state=42
)
# Create and fit the model
model_alphaPinene = XGBRegressor(n_estimators=1000, max_depth=4, learning_rate=0.01)
model_alphaPinene.fit(X_train, y_train)
# Make predictions and evaluate the model
y_pred = model_alphaPinene.predict(X_test)
mse = mean_squared_error(y_test, y_pred)
print("The mean squared error (MSE) on test set: {:.4f}".format(mse))
# Import pickle
import pickle
# Save the model to a file
with open("model1_alphaPinene.pkl", "wb") as f:
pickle.dump(model_alphaPinene, f)
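# The twelve blocks that follow repeat the same train/evaluate/pickle steps
# (only the target column, the model name and, for some compounds, the
# hyperparameters change). As an aside, the whole pipeline could be wrapped in
# one helper; this is a sketch only, and the helper name `train_and_save` and
# the output file names are my own, not part of the original notebook.
def train_and_save(column, name, **xgb_params):
    X_c, y_c = df["Temp"], df[column]
    X_tr, X_te, y_tr, y_te = train_test_split(X_c, y_c, test_size=0.2, random_state=42)
    model = XGBRegressor(**xgb_params)
    model.fit(X_tr, y_tr)
    print(name, "MSE:", mean_squared_error(y_te, model.predict(X_te)))
    with open(f"model_{name}.pkl", "wb") as fh:
        pickle.dump(model, fh)
    return model


# e.g. train_and_save("cmp2", "AlphaPhellandrene", n_estimators=1000, max_depth=4, learning_rate=0.01)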
# **2.ML model for Alpha-phellandrene**
# Load the data
X = df["Temp"]
y = df["cmp2"]
# Split the data into train and test sets
X_train, X_test, y_train, y_test = train_test_split(
X, y, test_size=0.2, random_state=42
)
# Create and fit the model
model_AlphaPhellandrene = XGBRegressor(
n_estimators=1000, max_depth=4, learning_rate=0.01
)
model_AlphaPhellandrene.fit(X_train, y_train)
# Make predictions and evaluate the model
y_pred = model_AlphaPhellandrene.predict(X_test)
mse = mean_squared_error(y_test, y_pred)
print("The mean squared error (MSE) on test set: {:.4f}".format(mse))
# Import pickle
import pickle
# Save the model to a file
with open("model2_AlphaPhellandrene.pkl", "wb") as f:
pickle.dump(model_AlphaPhellandrene, f)
# **3.ML model for O-Cymene**
# Load the data
X = df["Temp"]
y = df["cmp3"]
# Split the data into train and test sets
X_train, X_test, y_train, y_test = train_test_split(
X, y, test_size=0.2, random_state=42
)
# Create and fit the model
model_OCymene = XGBRegressor(n_estimators=1000, max_depth=4, learning_rate=0.01)
model_OCymene.fit(X_train, y_train)
# Make predictions and evaluate the model
y_pred = model_OCymene.predict(X_test)
mse = mean_squared_error(y_test, y_pred)
print("The mean squared error (MSE) on test set: {:.4f}".format(mse))
# Import pickle
import pickle
# Save the model to a file
with open("model3_OCymene.pkl", "wb") as f:
pickle.dump(model_OCymene, f)
# **4.ML model for Alpha-copaene**
# Load the data
X = df["Temp"]
y = df["cmp4"]
# Split the data into train and test sets
X_train, X_test, y_train, y_test = train_test_split(
X, y, test_size=0.2, random_state=42
)
# Create and fit the model
model_AlphaCopaene = XGBRegressor(n_estimators=1000, max_depth=4, learning_rate=0.01)
model_AlphaCopaene.fit(X_train, y_train)
# Make predictions and evaluate the model
y_pred = model_AlphaCopaene.predict(X_test)
mse = mean_squared_error(y_test, y_pred)
print("The mean squared error (MSE) on test set: {:.4f}".format(mse))
# Import pickle
import pickle
# Save the model to a file
with open("model4_AlphaCopaene.pkl", "wb") as f:
pickle.dump(model_AlphaCopaene, f)
# **5.ML model for β-Linalool**
# Load the data
X = df["Temp"]
y = df["cmp5"]
# Split the data into train and test sets
X_train, X_test, y_train, y_test = train_test_split(
X, y, test_size=0.2, random_state=42
)
# Create and fit the model
model_βLinalool = XGBRegressor(n_estimators=1000, max_depth=4, learning_rate=0.01)
model_βLinalool.fit(X_train, y_train)
# Make predictions and evaluate the model
y_pred = model_βLinalool.predict(X_test)
mse = mean_squared_error(y_test, y_pred)
print("The mean squared error (MSE) on test set: {:.4f}".format(mse))
# Import pickle
import pickle
# Save the model to a file
with open("model5_βLinalool.pkl", "wb") as f:
pickle.dump(model_βLinalool, f)
# **6.ML model for Beta caryophyllene**
# Load the data
X = df["Temp"]
y = df["cmp6"]
# Split the data into train and test sets
X_train, X_test, y_train, y_test = train_test_split(
X, y, test_size=0.2, random_state=42
)
# Create and fit the model
model_βCaryophyllene = XGBRegressor(n_estimators=1000, max_depth=4, learning_rate=0.01)
model_βCaryophyllene.fit(X_train, y_train)
# Make predictions and evaluate the model
y_pred = model_βCaryophyllene.predict(X_test)
mse = mean_squared_error(y_test, y_pred)
print("The mean squared error (MSE) on test set: {:.4f}".format(mse))
# Import pickle
import pickle
# Save the model to a file
with open("model6_βCaryophyllene.pkl", "wb") as f:
pickle.dump(model_βCaryophyllene, f)
# **7.ML model for Safrol**
# Load the data
X = df["Temp"]
y = df["cmp7"]
# Split the data into train and test sets
X_train, X_test, y_train, y_test = train_test_split(
X, y, test_size=0.2, random_state=42
)
# Create and fit the model
model_Safrol = XGBRegressor(n_estimators=1000, max_depth=4, learning_rate=0.01)
model_Safrol.fit(X_train, y_train)
# Make predictions and evaluate the model
y_pred = model_Safrol.predict(X_test)
mse = mean_squared_error(y_test, y_pred)
print("The mean squared error (MSE) on test set: {:.4f}".format(mse))
# Import pickle
import pickle
# Save the model to a file
with open("model7_Safrol.pkl", "wb") as f:
pickle.dump(model_Safrol, f)
# **8.ML model for Caryophyllene oxide**
# Load the data
X = df["Temp"]
y = df["cmp8"]
# Split the data into train and test sets
X_train, X_test, y_train, y_test = train_test_split(
X, y, test_size=0.2, random_state=42
)
# Create and fit the model
model_CaryophylleneOxide = XGBRegressor(
n_estimators=1000, max_depth=4, learning_rate=0.01
)
model_CaryophylleneOxide.fit(X_train, y_train)
# Make predictions and evaluate the model
y_pred = model_CaryophylleneOxide.predict(X_test)
mse = mean_squared_error(y_test, y_pred)
print("The mean squared error (MSE) on test set: {:.4f}".format(mse))
# Import pickle
import pickle
# Save the model to a file
with open("model8_CaryophylleneOxide.pkl", "wb") as f:
pickle.dump(model_CaryophylleneOxide, f)
# **9.ML model for Cinnamaldehyde**
# Load the data
X = df["Temp"]
y = df["cmp9"]
# Split the data into train and test sets
X_train, X_test, y_train, y_test = train_test_split(
X, y, test_size=0.2, random_state=42
)
# Create and fit the model
model_Cinnamaldehyde = XGBRegressor(n_estimators=1000, max_depth=4, learning_rate=0.01)
model_Cinnamaldehyde.fit(X_train, y_train)
# Make predictions and evaluate the model
y_pred = model_Cinnamaldehyde.predict(X_test)
mse = mean_squared_error(y_test, y_pred)
print("The mean squared error (MSE) on test set: {:.4f}".format(mse))
# Import pickle
import pickle
# Save the model to a file
with open("model9_Cinnamaldehyde.pkl", "wb") as f:
pickle.dump(model_Cinnamaldehyde, f)
# **10.ML model for Eugenol**
# Load the data
X = df["Temp"]
y = df["cmp10"]
# Split the data into train and test sets
X_train, X_test, y_train, y_test = train_test_split(
X, y, test_size=0.2, random_state=42
)
# Create and fit the model
model_Eugenol = XGBRegressor(n_estimators=1400, max_depth=5, learning_rate=0.01)
model_Eugenol.fit(X_train, y_train)
# Make predictions and evaluate the model
y_pred = model_Eugenol.predict(X_test)
mse = mean_squared_error(y_test, y_pred)
print("The mean squared error (MSE) on test set: {:.4f}".format(mse))
# Import pickle
import pickle
# Save the model to a file
with open("model10_Eugenol.pkl", "wb") as f:
pickle.dump(model_Eugenol, f)
# **11.ML model for Acetyl Eugenol**
# Load the data
X = df["Temp"]
y = df["cmp11"]
# Split the data into train and test sets
X_train, X_test, y_train, y_test = train_test_split(
X, y, test_size=0.2, random_state=42
)
# Create and fit the model
model_AcetylEugenol = XGBRegressor(n_estimators=1400, max_depth=5, learning_rate=0.01)
model_AcetylEugenol.fit(X_train, y_train)
# Make predictions and evaluate the model
y_pred = model_AcetylEugenol.predict(X_test)
mse = mean_squared_error(y_test, y_pred)
print("The mean squared error (MSE) on test set: {:.4f}".format(mse))
# Import pickle
import pickle
# Save the model to a file
with open("model11_AcetylEugenol.pkl", "wb") as f:
pickle.dump(model_AcetylEugenol, f)
# **12.ML model for Phenol, 4-(2-propenyl)-**
# Load the data
X = df["Temp"]
y = df["cmp12"]
# Split the data into train and test sets
X_train, X_test, y_train, y_test = train_test_split(
X, y, test_size=0.2, random_state=42
)
# Create and fit the model
model_PP = XGBRegressor(n_estimators=1400, max_depth=5, learning_rate=0.01)
model_PP.fit(X_train, y_train)
# Make predictions and evaluate the model
y_pred = model_PP.predict(X_test)
mse = mean_squared_error(y_test, y_pred)
print("The mean squared error (MSE) on test set: {:.4f}".format(mse))
# Import pickle
import pickle
# Save the model to a file
with open("model12_PP.pkl", "wb") as f:
pickle.dump(model_PP, f)
# **13.ML model for Benzyl Benzoate**
# Load the data
X = df["Temp"]
y = df["cmp13"]
# Split the data into train and test sets
X_train, X_test, y_train, y_test = train_test_split(
X, y, test_size=0.2, random_state=42
)
# Create and fit the model
model_BenzylBenzoate = XGBRegressor(n_estimators=1400, max_depth=5, learning_rate=0.01)
model_BenzylBenzoate.fit(X_train, y_train)
# Make predictions and evaluate the model
y_pred = model_BenzylBenzoate.predict(X_test)
mse = mean_squared_error(y_test, y_pred)
print("The mean squared error (MSE) on test set: {:.4f}".format(mse))
# Import pickle
import pickle
# Save the model to a file
with open("model13_BenzylBenzoate.pkl", "wb") as f:
pickle.dump(model_BenzylBenzoate, f)
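# Once saved, any of the pickled models can be loaded back and queried at new
# temperatures. A minimal sketch - the temperature values below are made up
# purely for illustration, and the input is passed as a pandas Series to match
# how the models were trained in this notebook:
with open("model13_BenzylBenzoate.pkl", "rb") as f:
    loaded_model = pickle.load(f)

new_temps = pd.Series([300.0, 350.0, 400.0], name="Temp")  # hypothetical query temperatures
print(loaded_model.predict(new_temps))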
|
/fsx/loubna/kaggle_data/kaggle-code-data/data/0129/072/129072503.ipynb
|
enthlaphy-of-vaporization
|
anuwaz
|
[{"Id": 129072503, "ScriptId": 38368783, "ParentScriptVersionId": NaN, "ScriptLanguageId": 9, "AuthorUserId": 6660933, "CreationDate": "05/10/2023 18:59:41", "VersionNumber": 1.0, "Title": "Enthalpy of vaporization", "EvaluationDate": "05/10/2023", "IsChange": true, "TotalLines": 295.0, "LinesInsertedFromPrevious": 295.0, "LinesChangedFromPrevious": 0.0, "LinesUnchangedFromPrevious": 0.0, "LinesInsertedFromFork": NaN, "LinesDeletedFromFork": NaN, "LinesChangedFromFork": NaN, "LinesUnchangedFromFork": NaN, "TotalVotes": 2}]
|
[{"Id": 184804309, "KernelVersionId": 129072503, "SourceDatasetVersionId": 5657990}]
|
[{"Id": 5657990, "DatasetId": 3251818, "DatasourceVersionId": 5733399, "CreatorUserId": 6660933, "LicenseName": "Unknown", "CreationDate": "05/10/2023 18:30:01", "VersionNumber": 1.0, "Title": "enthlaphy of vaporization", "Slug": "enthlaphy-of-vaporization", "Subtitle": NaN, "Description": NaN, "VersionNotes": "Initial release", "TotalCompressedBytes": 0.0, "TotalUncompressedBytes": 0.0}]
|
[{"Id": 3251818, "CreatorUserId": 6660933, "OwnerUserId": 6660933.0, "OwnerOrganizationId": NaN, "CurrentDatasetVersionId": 5657990.0, "CurrentDatasourceVersionId": 5733399.0, "ForumId": 3317225, "Type": 2, "CreationDate": "05/10/2023 18:30:01", "LastActivityDate": "05/10/2023", "TotalViews": 34, "TotalDownloads": 1, "TotalVotes": 0, "TotalKernels": 1}]
|
[{"Id": 6660933, "UserName": "anuwaz", "DisplayName": "Anushka Chathuranga", "RegisterDate": "02/04/2021", "PerformanceTier": 1}]
|
| false | 0 | 3,650 | 2 | 3,681 | 3,650 |
||
129072436
|
# # Understanding Tensorflow's Global and Operation level seeds
# ## Aim
# When I started learning TensorFlow, it wasn't easy for me to grasp the concepts quickly or retain all the information in my mind. TensorFlow's documentation can be overwhelming for beginners. Randomization is a widely used and important concept, from regression to neural networks. It can be used to generate data or initialize model weights. Therefore, I am writing this mini article to simplify some concepts, help you remember them, and ensure correct usage at different stages of your ML pipeline. The article will discuss the behavior of TensorFlow's global and operation level seeds, using `tensorflow.random.set_seed(seed)`.
# Learning is a two-way process, and I am confident that your collaboration will help me improve this article and enhance my understanding.
# ## Introduction
# While studying tensorflow.random.set_seed(seed) from [TensorFlow's official documentation](https://www.tensorflow.org/api_docs/python/tf/random/set_seed), I realized the need to explore all the cases in depth using simple code snippets. At the end of the article, I have also discussed some key takeaways to help you quickly refresh the concepts.
# ## Cases
import tensorflow as tf
# ### Case 1 - No seed is set at any level
"""
Case 1:
Setup: When no seed is set at any level-
- Since global level seed is not set, the sequence will start from different numbers
- Since operation level seed is not set, the sequence will contain different numbers
"""
print(tf.random.uniform([1])) # generates 'A1'
print(tf.random.uniform([1])) # generates 'A2'
print(tf.random.uniform([1])) # generates 'A3'
print(tf.random.uniform([1])) # generates 'A4'
print(tf.random.uniform([1])) # generates 'A5'
print(tf.random.uniform([1])) # generates 'A6'
"""
First re-run of Case 1 to check if we get diffrent sequence
"""
print(tf.random.uniform([1])) # generates 'A1'
print(tf.random.uniform([1])) # generates 'A2'
print(tf.random.uniform([1])) # generates 'A3'
print(tf.random.uniform([1])) # generates 'A4'
print(tf.random.uniform([1])) # generates 'A5'
print(tf.random.uniform([1])) # generates 'A6'
"""
Second re-run of Case 1 to check if we get diffrent sequence
"""
print(tf.random.uniform([1])) # generates 'A1'
print(tf.random.uniform([1])) # generates 'A2'
print(tf.random.uniform([1])) # generates 'A3'
print(tf.random.uniform([1])) # generates 'A4'
print(tf.random.uniform([1])) # generates 'A5'
print(tf.random.uniform([1])) # generates 'A6'
# ### Case 2 - Only global level seed is set
"""
Case 2:
Setup: When only global level seed is set -
- Since global level seed is set, the sequence will wil be exact same after restarts
- Since operation level seed is not set, the sequence will contain different numbers, but because of the global seed, the
sequence will be same.
"""
tf.random.set_seed(1234)
print(tf.random.uniform([1])) # generates 'A1'
print(tf.random.uniform([1])) # generates 'A2'
print(tf.random.uniform([1])) # generates 'A3'
print(tf.random.uniform([1])) # generates 'A4'
print(tf.random.uniform([1])) # generates 'A5'
print(tf.random.uniform([1])) # generates 'A6'
"""
First re-run of Case 2 to check if we get the same sequence
"""
tf.random.set_seed(1234)
print(tf.random.uniform([1])) # generates 'A1'
print(tf.random.uniform([1])) # generates 'A2'
print(tf.random.uniform([1])) # generates 'A3'
print(tf.random.uniform([1])) # generates 'A4'
print(tf.random.uniform([1])) # generates 'A5'
print(tf.random.uniform([1])) # generates 'A6'
"""
Second re-run of Case 2 to check if we get the same sequence
"""
tf.random.set_seed(1234)
print(tf.random.uniform([1])) # generates 'A1'
print(tf.random.uniform([1])) # generates 'A2'
print(tf.random.uniform([1])) # generates 'A3'
print(tf.random.uniform([1])) # generates 'A4'
print(tf.random.uniform([1])) # generates 'A5'
print(tf.random.uniform([1])) # generates 'A6'
# ### Case 3 - Using tf.function when only global level seed is set
"""
Case 3:
Setup: Using tf.function - tf.function is just like restart
When only global level seed is set -
- Since global level seed is set, the sequence will wil be exact same after restarts
- Since operation level seed is not set, the sequence will contain different numbers, but because of global seed
sequence will be same.
"""
tf.random.set_seed(1234)
@tf.function
def f1():
a = tf.random.uniform([1]) # generates 'A1'
b = tf.random.uniform([1]) # generates 'A2'
return a, b
@tf.function
def f2():
a = tf.random.uniform([1]) # generates 'A1'
b = tf.random.uniform([1]) # generates 'A2'
return a, b
print(f1())
print(f2())
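# Side note (not part of the cases above): if you want randomness that is
# reproducible and explicitly controlled - also inside tf.function - TensorFlow
# additionally offers stateful generator objects. A minimal sketch:
generator = tf.random.Generator.from_seed(1234)
print(generator.uniform([1]))  # first draw from this generator
print(generator.uniform([1]))  # next draw - a different value, but the whole
#                                sequence is reproducible from the seed 1234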
# ### Case 4 - Using tf.function when no seed is set at any level
"""
IMPORTANT - Please see to it that in the previous cell, we set a global level seed.
To nullify its effect, we will restart our kernel here.
The code to restart the cell is taken from ChatGPT
"""
import IPython.display as display
import time
# Restart the cell
display.Javascript("Jupyter.notebook.kernel.restart()")
# Wait for the kernel to restart
time.sleep(5)
#### Start of Case 4 ####
"""
Case 4:
Setup: Using tf.function - tf.function is just like restart
No seed set, hence we will always get a random response.
Random o/p after program restart as well (as expected).
"""
import tensorflow as tf
@tf.function
def f1():
a = tf.random.uniform([1]) # generates 'A1'
b = tf.random.uniform([1]) # generates 'A2'
return a, b
@tf.function
def f2():
a = tf.random.uniform([1]) # generates 'A3'
b = tf.random.uniform([1]) # generates 'A4'
return a, b
print(f1())
print(f2())
"""
IMPORTANT - Please see to it that in the previous cell, we set a global level seed.
To nullify its effect, we will restart our kernel here.
The code to restart the cell is taken from ChatGPT
"""
import IPython.display as display
import time
# Restart the cell
display.Javascript("Jupyter.notebook.kernel.restart()")
# Wait for the kernel to restart
time.sleep(5)
"""
First re-run of Case 4 to check if we get a different set of 4 tensors.
Setup: Using tf.function - tf.function is just like restart
No seed set, hence we will always get a random response.
Random o/p after program restart as well (as expected).
"""
import tensorflow as tf
@tf.function
def f1():
a = tf.random.uniform([1]) # generates 'A1'
b = tf.random.uniform([1]) # generates 'A2'
return a, b
@tf.function
def f2():
a = tf.random.uniform([1]) # generates 'A3'
b = tf.random.uniform([1]) # generates 'A4'
return a, b
print(f1())
print(f2())
# ### Case 5 - Only operation level seed is set
"""
IMPORTANT - Please see to it that in the previous cell, we set a global level seed.
To nullify its effect, we will restart our kernel here.
The code to restart the cell is taken from ChatGPT
"""
import IPython.display as display
import time
# Restart the cell
display.Javascript("Jupyter.notebook.kernel.restart()")
# Wait for the kernel to restart
time.sleep(20)
#### Start of Case 5 ####
import tensorflow as tf
"""
Case 5:
Setup: When only operation level seed is set -
- Since operation level seed is set, the sequence will start from same number after restarts,
and sequence will be same
"""
print(tf.random.uniform([1], seed=1)) # generates 'A1'
print(tf.random.uniform([1], seed=1)) # generates 'A2'
print(tf.random.uniform([1], seed=1)) # generates 'A3'
print(tf.random.uniform([1], seed=1)) # generates 'A4'
"""
IMPORTANT - Please see to it that in the previous cell, we set a global level seed.
To nullify its effect, we will restart our kernel here.
The code to restart the cell is taken from ChatGPT
"""
import IPython.display as display
import time
# Restart the cell
display.Javascript("Jupyter.notebook.kernel.restart()")
# Wait for the kernel to restart
time.sleep(30)
import tensorflow as tf
"""
First re-run of Case 5 to check if we get the same sequence
"""
print(tf.random.uniform([1], seed=1)) # generates 'A1'
print(tf.random.uniform([1], seed=1)) # generates 'A2'
print(tf.random.uniform([1], seed=1)) # generates 'A3'
print(tf.random.uniform([1], seed=1)) # generates 'A4'
"""
IMPORTANT - Please see to it that in the previous cell, we set a global level seed.
To nullify its effect, we will restart our kernel here.
The code to restart the cell is taken from ChatGPT
"""
import IPython.display as display
import time
# Restart the cell
display.Javascript("Jupyter.notebook.kernel.restart()")
# Wait for the kernel to restart
time.sleep(5)
import tensorflow as tf
"""
Second re-run of Case 5 to check if we get the same sequence
"""
print(tf.random.uniform([1], seed=1)) # generates 'A1'
print(tf.random.uniform([1], seed=1)) # generates 'A2'
print(tf.random.uniform([1], seed=1)) # generates 'A3'
print(tf.random.uniform([1], seed=1)) # generates 'A4'
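# One more combination worth remembering (a small sketch mirroring the cases
# above): when BOTH the global seed and the operation seed are set, the values
# are fully determined - the same pair of seeds reproduces the same numbers
# across reruns and program restarts.
tf.random.set_seed(1234)
print(tf.random.uniform([1], seed=1))  # generates 'A1'
print(tf.random.uniform([1], seed=1))  # generates 'A2'

tf.random.set_seed(1234)
print(tf.random.uniform([1], seed=1))  # generates 'A1' again
print(tf.random.uniform([1], seed=1))  # generates 'A2' again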
|
/fsx/loubna/kaggle_data/kaggle-code-data/data/0129/072/129072436.ipynb
| null | null |
[{"Id": 129072436, "ScriptId": 38338769, "ParentScriptVersionId": NaN, "ScriptLanguageId": 9, "AuthorUserId": 874338, "CreationDate": "05/10/2023 18:58:27", "VersionNumber": 1.0, "Title": "Tensorflow's Global and Operation level seeds", "EvaluationDate": "05/10/2023", "IsChange": true, "TotalLines": 298.0, "LinesInsertedFromPrevious": 298.0, "LinesChangedFromPrevious": 0.0, "LinesUnchangedFromPrevious": 0.0, "LinesInsertedFromFork": NaN, "LinesDeletedFromFork": NaN, "LinesChangedFromFork": NaN, "LinesUnchangedFromFork": NaN, "TotalVotes": 0}]
| null | null | null | null |
| false | 0 | 2,711 | 0 | 2,711 | 2,711 |
||
129072330
|
<jupyter_start><jupyter_text>Datasets used in my study of target encodings
Kaggle dataset identifier: targetencodingsdata
<jupyter_script># Task Name:vg-stats
# Toqa Bany Yassen
# Dataset :Video Game Sales
# 9/5/2023
import numpy as np # linear algebra
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)
# Input data files are available in the read-only "../input/" directory
# For example, running this (by clicking run or pressing Shift+Enter) will list all files under the input directory
import os
for dirname, _, filenames in os.walk("/kaggle/input"):
for filename in filenames:
print(os.path.join(dirname, filename))
# You can write up to 20GB to the current directory (/kaggle/working/) that gets preserved as output when you create a version using "Save & Run All"
# You can also write temporary files to /kaggle/temp/, but they won't be saved outside of the current session
df = pd.read_csv("/kaggle/input/targetencodingsdata/vgsales1.csv")
df.head(100)
# Which company is the most common video game publisher?
# df=["Publisher"].mode()
df["Publisher"].mode()
# What’s the most common platform?
df["Platform"].mode()
# What about the most common genre?
df["Genre"].mode()
# What are the top 20 highest grossing games?
df[["Global_Sales"]].sort_values("Global_Sales", ascending=False)
df.head(20)
# For North American video game sales, what’s the median?
df["NA_Sales"].median()
# Provide a secondary output showing ten games surrounding the median sales output.
my_median = df["NA_Sales"].median()
# Sort by NA_Sales and take the five games on either side of the median position,
# so the ten rows genuinely surround the median value.
sorted_by_na = df.sort_values("NA_Sales").reset_index(drop=True)
median_pos = len(sorted_by_na) // 2
result_df = sorted_by_na.iloc[median_pos - 5 : median_pos + 5]
result_df
# The Nintendo Wii seems to have outdone itself with games. How does its average number of sales compare with all of the other platforms?
#
# wii_sales = df[df['Platform'] == "Wii"]
# wii_sales['Global_Sales']
wii_sales = df.loc[df["Platform"] == "Wii", "Global_Sales"]
wii_sales
# Sort the Wii games' global sales in descending order (games with the same value keep their original order).
descending_order = wii_sales.sort_values(ascending=False)
descending_order
# How many standard deviations above the mean is the best-selling game in North America?
na_mean = df["NA_Sales"].mean()
na_mean
na_std = df["NA_Sales"].std()
na_std
na_max = df["NA_Sales"].max()
na_max
number_of_standard_deviations = (na_max - na_mean) / na_std
number_of_standard_deviations
avg1 = df["NA_Sales"].median()
wii_df = df[df["Platform"] == "Wii"]
wii_mean_sales = wii_df["Global_Sales"].mean()
other_mean_sales = df[df["Platform"] != "Wii"]["Global_Sales"].mean()
if wii_mean_sales > other_mean_sales:
print(
"The Nintendo Wii has a higher average number of sales compared to other platforms."
)
else:
print(
"The Nintendo Wii does not have a higher average number of sales compared to other platforms."
)
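# A more complete picture (illustrative sketch): average global sales per
# platform, which also shows where the Wii sits relative to every other platform.
platform_avg_sales = (
    df.groupby("Platform")["Global_Sales"].mean().sort_values(ascending=False)
)
print(platform_avg_sales.head(10))
print("Wii average global sales:", platform_avg_sales.get("Wii"))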
|
/fsx/loubna/kaggle_data/kaggle-code-data/data/0129/072/129072330.ipynb
|
targetencodingsdata
|
vprokopev
|
[{"Id": 129072330, "ScriptId": 38310349, "ParentScriptVersionId": NaN, "ScriptLanguageId": 9, "AuthorUserId": 8004461, "CreationDate": "05/10/2023 18:56:43", "VersionNumber": 1.0, "Title": "vg-stats", "EvaluationDate": "05/10/2023", "IsChange": true, "TotalLines": 96.0, "LinesInsertedFromPrevious": 96.0, "LinesChangedFromPrevious": 0.0, "LinesUnchangedFromPrevious": 0.0, "LinesInsertedFromFork": NaN, "LinesDeletedFromFork": NaN, "LinesChangedFromFork": NaN, "LinesUnchangedFromFork": NaN, "TotalVotes": 0}]
|
[{"Id": 184804063, "KernelVersionId": 129072330, "SourceDatasetVersionId": 105573}]
|
[{"Id": 105573, "DatasetId": 55223, "DatasourceVersionId": 115433, "CreatorUserId": 1963394, "LicenseName": "Unknown", "CreationDate": "09/21/2018 20:10:42", "VersionNumber": 1.0, "Title": "Datasets used in my study of target encodings", "Slug": "targetencodingsdata", "Subtitle": NaN, "Description": NaN, "VersionNotes": "Initial release", "TotalCompressedBytes": 58903623.0, "TotalUncompressedBytes": 9718587.0}]
|
[{"Id": 55223, "CreatorUserId": 1963394, "OwnerUserId": 1963394.0, "OwnerOrganizationId": NaN, "CurrentDatasetVersionId": 105573.0, "CurrentDatasourceVersionId": 115433.0, "ForumId": 63983, "Type": 2, "CreationDate": "09/21/2018 20:10:42", "LastActivityDate": "09/21/2018", "TotalViews": 2668, "TotalDownloads": 275, "TotalVotes": 3, "TotalKernels": 9}]
|
[{"Id": 1963394, "UserName": "vprokopev", "DisplayName": "Viacheslav Prokopev", "RegisterDate": "06/03/2018", "PerformanceTier": 1}]
|
| false | 1 | 838 | 0 | 864 | 838 |
||
129538894
|
<jupyter_start><jupyter_text>Squid Game Netflix Twitter Data

- The dataset contains the recent tweets about the record-breaking Netflix show "Squid Game"
- The data is collected using tweepy Python package to access Twitter API.
Kaggle dataset identifier: squid-game-netflix-twitter-data
<jupyter_code>import pandas as pd
df = pd.read_csv('squid-game-netflix-twitter-data/tweets_v8.csv')
df.info()
<jupyter_output><class 'pandas.core.frame.DataFrame'>
RangeIndex: 80019 entries, 0 to 80018
Data columns (total 12 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 user_name 80015 non-null object
1 user_location 56149 non-null object
2 user_description 74808 non-null object
3 user_created 80019 non-null object
4 user_followers 80019 non-null int64
5 user_friends 80019 non-null int64
6 user_favourites 80019 non-null int64
7 user_verified 80019 non-null bool
8 date 80019 non-null object
9 text 80019 non-null object
10 source 80019 non-null object
11 is_retweet 80019 non-null bool
dtypes: bool(2), int64(3), object(7)
memory usage: 6.3+ MB
<jupyter_text>Examples:
{
"user_name": "the _\u00fbnd\u00ebr-rat\u00e8d nigg\u00e1h\ud83d\udc4a\ud83c\udffe",
"user_location": null,
"user_description": "@ManUtd die hard\u2764\ufe0f\u2764\ufe0f\ud83d\udcaa\ud83c\udfff\ud83d\udcaa\ud83c\udfff\n\n\nYOLO\n\n\nJ'ai besoin de quelqu'un qui peut m'aimer au pire\ud83e\udd17\nNon, je ne suis pas parfait, mais j'esp\u00e8re que tu vois ma valeur\ud83e\udd1e\ud83c\udffe",
"user_created": "2019-09-06 19:24:57+00:00",
"user_followers": 581,
"user_friends": 1035,
"user_favourites": 8922,
"user_verified": false,
"date": "2021-10-06 12:05:38+00:00",
"text": "When life hits and the same time poverty strikes you\nGong Yoo : Lets play a game \n#SquidGame #Netflix https://t.co/Cx7ifmZ8cN",
"source": "Twitter for Android",
"is_retweet": false
}
{
"user_name": "Best uncle on planet earth",
"user_location": null,
"user_description": null,
"user_created": "2013-05-08 19:35:26+00:00",
"user_followers": 741,
"user_friends": 730,
"user_favourites": 8432,
"user_verified": false,
"date": "2021-10-06 12:05:22+00:00",
"text": "That marble episode of #SquidGame ruined me. \ud83d\ude2d\ud83d\ude2d\ud83d\ude2d",
"source": "Twitter for Android",
"is_retweet": false
}
{
"user_name": "marcie",
"user_location": null,
"user_description": "animal crossing. chicken nuggets. baby yoda. smol animals. tv shows. \ud83c\udff3\ufe0f\u200d\ud83c\udf08 pronouns: any",
"user_created": "2009-02-21 10:31:30+00:00",
"user_followers": 562,
"user_friends": 1197,
"user_favourites": 62732,
"user_verified": false,
"date": "2021-10-06 12:05:22+00:00",
"text": "#Squidgame time",
"source": "Twitter Web App",
"is_retweet": false
}
{
"user_name": "YoMo.Mdp",
"user_location": "Any pronouns ",
"user_description": "Where the heck is the karma\nI'm going on my school grave brb\n#Technosupport",
"user_created": "2021-02-14 13:21:22+00:00",
"user_followers": 3,
"user_friends": 277,
"user_favourites": 1341,
"user_verified": false,
"date": "2021-10-06 12:05:04+00:00",
"text": "//Blood on 1st slide\nI'm joining the squidgame thing, I'm already dead by sugar honeycomb ofc\n\n#SquidGame\u2026 https://t.co/N4UGv9hxx8",
"source": "Twitter Web App",
"is_retweet": false
}
<jupyter_script># importing libraries
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
from nltk.corpus import stopwords
from wordcloud import WordCloud, STOPWORDS
import string
from datetime import datetime
from textblob import TextBlob
import nltk
from nltk.sentiment.vader import SentimentIntensityAnalyzer
from sklearn.model_selection import train_test_split
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, classification_report
from sklearn.svm import SVC, LinearSVC
# importing the dataset
tweet = pd.read_csv("/kaggle/input/squid-game-netflix-twitter-data/tweets_v8.csv")
tweet.head()
# shape of the dataset
tweet.shape
# checking whether the dataset contains any null values
tweet.isnull().sum()
# dropping the few rows that have a missing user_name
tweet.dropna(subset="user_name", inplace=True)
# dropping the duplicate values of the dataset
tweet = tweet.drop_duplicates()
# approximately 500 values have been dropped
tweet.shape
tweet["user_location"].nunique()
# user_location is messy free text; many of the values that appear only once
# (value_counts == 1) are not real, mappable locations
tweet["user_location"].value_counts()
# countplot whether the users are verified or not
sns.countplot(x="user_verified", data=tweet, palette="coolwarm")
plt.title("Count of user verified")
plt.xlabel("User verified")
plt.ylabel("Count")
plt.show()
tweet["source"].nunique()
# source has so many values, some sources are bots as well.
tweet["source"].unique()
tweet["source"].value_counts().head(20)
def change(application):
application = application.lower()
if ("iphone" in application) or ("ipad" in application) or ("ios" in application):
return "iPhone/iPad"
elif ("android" in application) or ("mobile" in application):
return "Android"
elif "app" in application:
return "App"
elif ("bot" in application) or ("auto" in application) or (".io" in application):
return "Bot"
else:
return "Others"
tweet["source"] = tweet["source"].apply(change)
tweet.head()
# countplot of source
sns.countplot(x="source", data=tweet, palette="colorblind")
plt.title("Countplot of source")
plt.xlabel("Source")
plt.ylabel("Count")
# removing all the punctuation and English stopwords from each tweet
# nltk.download("stopwords")  # uncomment if the stopwords corpus is not available yet
STOP_WORDS = set(stopwords.words("english"))


def cleaning(review):
    # drop punctuation characters, then rebuild the string
    nopunc = "".join(ch for ch in review if ch not in string.punctuation)
    # keep only the words that are not English stopwords (case-insensitive check)
    words = [word for word in nopunc.split() if word.lower() not in STOP_WORDS]
    return " ".join(words)
tweet["text"] = tweet["text"].apply(cleaning)
# Wordcloud for reviews
cleaned_plot = WordCloud(
    background_color="black", stopwords=STOPWORDS, width=3000, height=2500
).generate(" ".join(tweet["text"]))  # join with spaces so words from different tweets don't merge
plt.imshow(cleaned_plot)
plt.axis("off")
plt.show()
# changing the date format from year-month-date hours:minutes:seconds to datemonth
def date_change(dates):
format = "%Y-%m-%d %H:%M:%S%z"
dates = datetime.strptime(dates, format).strftime("%d%b")
return dates
tweet["new_date"] = tweet["date"].apply(date_change)
tweet.head()
# countplot for tweets tweeted in a day
plt.figure(figsize=(12, 5))
sns.countplot(x="new_date", data=tweet)
plt.xlabel("Date")
plt.ylabel("Tweets in a day")
plt.show()
tweets1 = " ".join(tweet[tweet["new_date"] == "18Oct"]["text"])
tweets2 = " ".join(tweet[tweet["new_date"] == "28Oct"]["text"])
# wordcloud for 18Oct
plot = WordCloud(
background_color="black", stopwords=STOPWORDS, width=3000, height=2500
).generate(tweets1)
plt.imshow(plot)
plt.axis("off")
plt.show()
# wordcloud for 28Oct
plot = WordCloud(
background_color="black", stopwords=STOPWORDS, width=3000, height=2500
).generate(tweets2)
plt.imshow(plot)
plt.axis("off")
plt.show()
# # TextBlob is a Python library that can classify the sentiment of a piece of text as positive, negative, or neutral.
# determining the polarity of sentiments
def sentiment(texts):
text = TextBlob(texts)
if text.sentiment.polarity > 0:
return 1
elif text.sentiment.polarity < 0:
return -1
else:
return 0
tweet["sentiment_polarity"] = tweet["text"].apply(sentiment)
tweet.head(20)
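# A quick, purely illustrative sanity check of what TextBlob's polarity score
# looks like on two made-up sentences (not taken from the dataset):
print(TextBlob("I absolutely loved this show").sentiment.polarity)  # should be > 0 (positive)
print(TextBlob("That episode was terrible").sentiment.polarity)  # should be < 0 (negative)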
# wordcloud for sentiment polarity = 1 (positive tweets)
positive = " ".join(tweet[tweet["sentiment_polarity"] == 1]["text"])
plot1 = WordCloud(
background_color="black", width=600, height=600, stopwords=STOPWORDS
).generate(positive)
plt.imshow(plot1)
plt.axis("off")
plt.show()
# wordcloud for sentiment polarity = -1 (negative tweets)
negative = " ".join(tweet[tweet["sentiment_polarity"] == -1]["text"])
plot2 = WordCloud(
background_color="black", width=600, height=600, stopwords=STOPWORDS
).generate(negative)
plt.imshow(plot2)
plt.axis("off")
plt.show()
# wordcloud for sentiment polarity = 0 (neutral tweets)
neutral = " ".join(tweet[tweet["sentiment_polarity"] == 0]["text"])
plot3 = WordCloud(
background_color="black", width=600, height=600, stopwords=STOPWORDS
).generate(neutral)
plt.imshow(plot3)
plt.axis("off")
plt.show()
# countplot for the sentiment polarity
count = sns.countplot(x="sentiment_polarity", data=tweet)
plt.title("countplot for the sentiment polarity")
plt.xlabel("Polarity")
for value in count.patches:
x = value.get_x() + value.get_width() / 2 - 0.05
y = value.get_y() + value.get_height() + 500
count.annotate(value.get_height(), (x, y), size=9.5)
def score(text):
analyzer = SentimentIntensityAnalyzer()
score = analyzer.polarity_scores(text)
if score["compound"] >= 0.05:
return 1
elif score["compound"] <= -0.05:
return -1
else:
return 0
nltk.download("vader_lexicon")
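# (Added illustration.) SentimentIntensityAnalyzer().polarity_scores() returns a dict with
# "neg", "neu", "pos" and a normalized "compound" score in [-1, 1]; the +/-0.05 compound
# thresholds used in score() above follow the commonly cited VADER convention.
# The sample sentence is an assumed example.
print(SentimentIntensityAnalyzer().polarity_scores("Squid Game is a brilliant show"))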
tweet["sentiment"] = tweet["text"].apply(score)
tweet.head()
# countplot for sentiments
sns.countplot(x="sentiment", data=tweet)
plt.title("countplot for sentiments")
plt.xlabel("Sentiments")
x = tweet["text"]
y = tweet["sentiment"]
# creating an object for TfidfVectorize
vector = TfidfVectorizer()
x = vector.fit_transform(x)
# splitting x, y for training, testing dataset
x_train, x_test, y_train, y_test = train_test_split(x, y, test_size=0.2)
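# (Added aside.) Fitting the TfidfVectorizer on the full corpus before splitting leaks the
# test set's vocabulary and IDF statistics into training. A leakage-free sketch is shown
# below with new, purely illustrative variable names; the rest of the notebook keeps the
# original x_train/x_test from above.
x_tr_raw, x_te_raw, y_tr, y_te = train_test_split(tweet["text"], y, test_size=0.2)
leakfree_vector = TfidfVectorizer()
x_tr_tfidf = leakfree_vector.fit_transform(x_tr_raw)
x_te_tfidf = leakfree_vector.transform(x_te_raw)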
# # **Logistic Regression**
log = LogisticRegression()
log.fit(x_train, y_train)
pred1 = log.predict(x_test)
print("The accuracy score for logistic regression is: ", accuracy_score(pred1, y_test))
print(classification_report(pred1, y_test))
# # **SVM**
svm = SVC(C=1.0, kernel="linear", gamma=100)  # note: gamma is ignored by the linear kernel
svm.fit(x_train, y_train)
pred2 = svm.predict(x_test)
print("The accuracy score for SVM is: ", accuracy_score(pred2, y_test))
print(classification_report(pred2, y_test))
lsvc = LinearSVC()
lsvc.fit(x_train, y_train)
pred3 = lsvc.predict(x_test)
print("The accuracy for LinearSVC is:", accuracy_score(pred3, y_test))
print(classification_report(pred3, y_test))
|
/fsx/loubna/kaggle_data/kaggle-code-data/data/0129/538/129538894.ipynb
|
squid-game-netflix-twitter-data
|
deepcontractor
|
[{"Id": 129538894, "ScriptId": 38500559, "ParentScriptVersionId": NaN, "ScriptLanguageId": 9, "AuthorUserId": 13768211, "CreationDate": "05/14/2023 16:33:56", "VersionNumber": 1.0, "Title": "tweet squid game", "EvaluationDate": "05/14/2023", "IsChange": true, "TotalLines": 241.0, "LinesInsertedFromPrevious": 241.0, "LinesChangedFromPrevious": 0.0, "LinesUnchangedFromPrevious": 0.0, "LinesInsertedFromFork": NaN, "LinesDeletedFromFork": NaN, "LinesChangedFromFork": NaN, "LinesUnchangedFromFork": NaN, "TotalVotes": 0}]
|
[{"Id": 185703279, "KernelVersionId": 129538894, "SourceDatasetVersionId": 3969153}]
|
[{"Id": 3969153, "DatasetId": 1631883, "DatasourceVersionId": 4024742, "CreatorUserId": 3682357, "LicenseName": "CC0: Public Domain", "CreationDate": "07/21/2022 11:35:43", "VersionNumber": 12.0, "Title": "Squid Game Netflix Twitter Data", "Slug": "squid-game-netflix-twitter-data", "Subtitle": "This data set contains twitter dump for the hashtag #squidgame.", "Description": "\n\n- The dataset contains the recent tweets about the record-breaking Netflix show \"Squid Game\"\n\n- The data is collected using tweepy Python package to access Twitter API.", "VersionNotes": "Data Update 2022/07/21", "TotalCompressedBytes": 0.0, "TotalUncompressedBytes": 0.0}]
|
[{"Id": 1631883, "CreatorUserId": 3682357, "OwnerUserId": 3682357.0, "OwnerOrganizationId": NaN, "CurrentDatasetVersionId": 3969153.0, "CurrentDatasourceVersionId": 4024742.0, "ForumId": 1652583, "Type": 2, "CreationDate": "10/06/2021 12:52:46", "LastActivityDate": "10/06/2021", "TotalViews": 25671, "TotalDownloads": 2460, "TotalVotes": 94, "TotalKernels": 7}]
|
[{"Id": 3682357, "UserName": "deepcontractor", "DisplayName": "Deep Contractor", "RegisterDate": "09/09/2019", "PerformanceTier": 4}]
|
# importing libraries
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
from nltk.corpus import stopwords
from wordcloud import WordCloud, STOPWORDS
import string
from datetime import datetime
from textblob import TextBlob
import nltk
from nltk.sentiment.vader import SentimentIntensityAnalyzer
from sklearn.model_selection import train_test_split
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, classification_report
from sklearn.svm import SVC, LinearSVC
# importing the dataset
tweet = pd.read_csv("/kaggle/input/squid-game-netflix-twitter-data/tweets_v8.csv")
tweet.head()
# shape of the dataset
tweet.shape
# checking whether the dataset contains null values or not
tweet.isnull().sum()
# dropping rows with a missing user_name
tweet.dropna(subset=["user_name"], inplace=True)
# dropping the duplicate values of the dataset
tweet = tweet.drop_duplicates()
# approximately 500 values have been dropped
tweet.shape
tweet["user_location"].nunique()
# the location field is a little messed up; the values with value_counts == 1 generally don't correspond to any location on the world map
tweet["user_location"].value_counts()
# countplot whether the users are verified or not
sns.countplot(x="user_verified", data=tweet, palette="coolwarm")
plt.title("Count of user verified")
plt.xlabel("User verified")
plt.ylabel("Count")
plt.show()
tweet["source"].nunique()
# source has so many values, some sources are bots as well.
tweet["source"].unique()
tweet["source"].value_counts().head(20)
def change(application):
application = application.lower()
if ("iphone" in application) or ("ipad" in application) or ("ios" in application):
return "iPhone/iPad"
elif ("android" in application) or ("mobile" in application):
return "Android"
elif "app" in application:
return "App"
elif ("bot" in application) or ("auto" in application) or (".io" in application):
return "Bot"
else:
return "Others"
tweet["source"] = tweet["source"].apply(change)
tweet.head()
# countplot of source
sns.countplot(x="source", data=tweet, palette="colorblind")
plt.title("Countplot of source")
plt.xlabel("Source")
plt.ylabel("Count")
# removing all the stopwords, punctuations
def cleaning(review):
nopunc = [line for line in review if line not in string.punctuation]
nopunc = "".join(nopunc)
nopunc = nopunc.split()
reviews = [
word for word in nopunc if word.lower() not in stopwords.words("english")
]
return " ".join(reviews)
tweet["text"] = tweet["text"].apply(cleaning)
# Wordcloud for reviews
cleaned_plot = WordCloud(
background_color="black", stopwords=STOPWORDS, width=3000, height=2500
).generate("".join(tweet["text"]))
plt.imshow(cleaned_plot)
plt.axis("off")
plt.show()
# changing the date format from year-month-date hours:minutes:seconds to datemonth
def date_change(dates):
format = "%Y-%m-%d %H:%M:%S%z"
dates = datetime.strptime(dates, format).strftime("%d%b")
return dates
tweet["new_date"] = tweet["date"].apply(date_change)
tweet.head()
# countplot for tweets tweeted in a day
plt.figure(figsize=(12, 5))
sns.countplot(x="new_date", data=tweet)
plt.xlabel("Date")
plt.ylabel("Tweets in a day")
plt.show()
tweets1 = " ".join(tweet[tweet["new_date"] == "18Oct"]["text"])
tweets2 = " ".join(tweet[tweet["new_date"] == "28Oct"]["text"])
# wordcloud for 18Oct
plot = WordCloud(
background_color="black", stopwords=STOPWORDS, width=3000, height=2500
).generate(tweets1)
plt.imshow(plot)
plt.axis("off")
plt.show()
# wordcloud for 28Oct
plot = WordCloud(
background_color="black", stopwords=STOPWORDS, width=3000, height=2500
).generate(tweets2)
plt.imshow(plot)
plt.axis("off")
plt.show()
# # TextBlob is a Python library that can classify the sentiment of a piece of text as positive, negative, or neutral.
# determining the polarity of sentiments
def sentiment(texts):
text = TextBlob(texts)
if text.sentiment.polarity > 0:
return 1
elif text.sentiment.polarity < 0:
return -1
else:
return 0
tweet["sentiment_polarity"] = tweet["text"].apply(sentiment)
tweet.head(20)
# wordcloud for sentiment polarity =1
positive = "".join(tweet[tweet["sentiment_polarity"] == 1]["text"])
plot1 = WordCloud(
background_color="black", width=600, height=600, stopwords=STOPWORDS
).generate(positive)
plt.imshow(plot1)
plt.axis("off")
plt.show()
# wordcloud for sentiment polarity =-1
negative = "".join(tweet[tweet["sentiment_polarity"] == -1]["text"])
plot2 = WordCloud(
background_color="black", width=600, height=600, stopwords=STOPWORDS
).generate(negative)
plt.imshow(plot2)
plt.axis("off")
plt.show()
# wordcloud for sentiment polarity =0
neutral = "".join(tweet[tweet["sentiment_polarity"] == 0]["text"])
plot3 = WordCloud(
background_color="black", width=600, height=600, stopwords=STOPWORDS
).generate(neutral)
plt.imshow(plot3)
plt.axis("off")
plt.show()
# countplot for the sentiment polarity
count = sns.countplot(x="sentiment_polarity", data=tweet)
plt.title("countplot for the sentiment polarity")
plt.xlabel("Polarity")
for value in count.patches:
x = value.get_x() + value.get_width() / 2 - 0.05
y = value.get_y() + value.get_height() + 500
count.annotate(value.get_height(), (x, y), size=9.5)
def score(text):
analyzer = SentimentIntensityAnalyzer()
score = analyzer.polarity_scores(text)
if score["compound"] >= 0.05:
return 1
elif score["compound"] <= -0.05:
return -1
else:
return 0
nltk.download("vader_lexicon")
tweet["sentiment"] = tweet["text"].apply(score)
tweet.head()
# countplot for sentiments
sns.countplot(x="sentiment", data=tweet)
plt.title("countplot for sentiments")
plt.xlabel("Sentiments")
x = tweet["text"]
y = tweet["sentiment"]
# creating an object for TfidfVectorize
vector = TfidfVectorizer()
x = vector.fit_transform(x)
# splitting x, y for training, testing dataset
x_train, x_test, y_train, y_test = train_test_split(x, y, test_size=0.2)
# # **Logistic Regression**
log = LogisticRegression()
log.fit(x_train, y_train)
pred1 = log.predict(x_test)
print("The accuracy score for logistic regression is: ", accuracy_score(pred1, y_test))
print(classification_report(pred1, y_test))
# # **SVM**
svm = SVC(C=1.0, kernel="linear", gamma=100)  # note: gamma is ignored by the linear kernel
svm.fit(x_train, y_train)
pred2 = svm.predict(x_test)
print("The accuracy score for SVM is: ", accuracy_score(pred2, y_test))
print(classification_report(pred2, y_test))
lsvc = LinearSVC()
lsvc.fit(x_train, y_train)
pred3 = lsvc.predict(x_test)
print("The accuracy for LinearSVC is:", accuracy_score(pred3, y_test))
print(classification_report(pred3, y_test))
|
[{"squid-game-netflix-twitter-data/tweets_v8.csv": {"column_names": "[\"user_name\", \"user_location\", \"user_description\", \"user_created\", \"user_followers\", \"user_friends\", \"user_favourites\", \"user_verified\", \"date\", \"text\", \"source\", \"is_retweet\"]", "column_data_types": "{\"user_name\": \"object\", \"user_location\": \"object\", \"user_description\": \"object\", \"user_created\": \"object\", \"user_followers\": \"int64\", \"user_friends\": \"int64\", \"user_favourites\": \"int64\", \"user_verified\": \"bool\", \"date\": \"object\", \"text\": \"object\", \"source\": \"object\", \"is_retweet\": \"bool\"}", "info": "<class 'pandas.core.frame.DataFrame'>\nRangeIndex: 80019 entries, 0 to 80018\nData columns (total 12 columns):\n # Column Non-Null Count Dtype \n--- ------ -------------- ----- \n 0 user_name 80015 non-null object\n 1 user_location 56149 non-null object\n 2 user_description 74808 non-null object\n 3 user_created 80019 non-null object\n 4 user_followers 80019 non-null int64 \n 5 user_friends 80019 non-null int64 \n 6 user_favourites 80019 non-null int64 \n 7 user_verified 80019 non-null bool \n 8 date 80019 non-null object\n 9 text 80019 non-null object\n 10 source 80019 non-null object\n 11 is_retweet 80019 non-null bool \ndtypes: bool(2), int64(3), object(7)\nmemory usage: 6.3+ MB\n", "summary": "{\"user_followers\": {\"count\": 80019.0, \"mean\": 17945.86838125945, \"std\": 245115.91942044927, \"min\": 0.0, \"25%\": 62.0, \"50%\": 291.0, \"75%\": 1183.0, \"max\": 16846417.0}, \"user_friends\": {\"count\": 80019.0, \"mean\": 1071.0196078431372, \"std\": 6751.347848793119, \"min\": 0.0, \"25%\": 118.0, \"50%\": 393.0, \"75%\": 986.0, \"max\": 1211576.0}, \"user_favourites\": {\"count\": 80019.0, \"mean\": 17964.492408053087, \"std\": 48503.582457692486, \"min\": 0.0, \"25%\": 442.0, \"50%\": 3028.0, \"75%\": 14940.0, \"max\": 1144792.0}}", "examples": "{\"user_name\":{\"0\":\"the _\\u00fbnd\\u00ebr-rat\\u00e8d nigg\\u00e1h\\ud83d\\udc4a\\ud83c\\udffe\",\"1\":\"Best uncle on planet earth\",\"2\":\"marcie\",\"3\":\"YoMo.Mdp\"},\"user_location\":{\"0\":null,\"1\":null,\"2\":null,\"3\":\"Any pronouns \"},\"user_description\":{\"0\":\"@ManUtd die hard\\u2764\\ufe0f\\u2764\\ufe0f\\ud83d\\udcaa\\ud83c\\udfff\\ud83d\\udcaa\\ud83c\\udfff\\n\\n\\nYOLO\\n\\n\\nJ'ai besoin de quelqu'un qui peut m'aimer au pire\\ud83e\\udd17\\nNon, je ne suis pas parfait, mais j'esp\\u00e8re que tu vois ma valeur\\ud83e\\udd1e\\ud83c\\udffe\",\"1\":null,\"2\":\"animal crossing. chicken nuggets. baby yoda. smol animals. tv shows. \\ud83c\\udff3\\ufe0f\\u200d\\ud83c\\udf08 pronouns: any\",\"3\":\"Where the heck is the karma\\nI'm going on my school grave brb\\n#Technosupport\"},\"user_created\":{\"0\":\"2019-09-06 19:24:57+00:00\",\"1\":\"2013-05-08 19:35:26+00:00\",\"2\":\"2009-02-21 10:31:30+00:00\",\"3\":\"2021-02-14 13:21:22+00:00\"},\"user_followers\":{\"0\":581,\"1\":741,\"2\":562,\"3\":3},\"user_friends\":{\"0\":1035,\"1\":730,\"2\":1197,\"3\":277},\"user_favourites\":{\"0\":8922,\"1\":8432,\"2\":62732,\"3\":1341},\"user_verified\":{\"0\":false,\"1\":false,\"2\":false,\"3\":false},\"date\":{\"0\":\"2021-10-06 12:05:38+00:00\",\"1\":\"2021-10-06 12:05:22+00:00\",\"2\":\"2021-10-06 12:05:22+00:00\",\"3\":\"2021-10-06 12:05:04+00:00\"},\"text\":{\"0\":\"When life hits and the same time poverty strikes you\\nGong Yoo : Lets play a game \\n#SquidGame #Netflix https:\\/\\/t.co\\/Cx7ifmZ8cN\",\"1\":\"That marble episode of #SquidGame ruined me. 
\\ud83d\\ude2d\\ud83d\\ude2d\\ud83d\\ude2d\",\"2\":\"#Squidgame time\",\"3\":\"\\/\\/Blood on 1st slide\\nI'm joining the squidgame thing, I'm already dead by sugar honeycomb ofc\\n\\n#SquidGame\\u2026 https:\\/\\/t.co\\/N4UGv9hxx8\"},\"source\":{\"0\":\"Twitter for Android\",\"1\":\"Twitter for Android\",\"2\":\"Twitter Web App\",\"3\":\"Twitter Web App\"},\"is_retweet\":{\"0\":false,\"1\":false,\"2\":false,\"3\":false}}"}}]
| true | 1 |
<start_data_description><data_path>squid-game-netflix-twitter-data/tweets_v8.csv:
<column_names>
['user_name', 'user_location', 'user_description', 'user_created', 'user_followers', 'user_friends', 'user_favourites', 'user_verified', 'date', 'text', 'source', 'is_retweet']
<column_types>
{'user_name': 'object', 'user_location': 'object', 'user_description': 'object', 'user_created': 'object', 'user_followers': 'int64', 'user_friends': 'int64', 'user_favourites': 'int64', 'user_verified': 'bool', 'date': 'object', 'text': 'object', 'source': 'object', 'is_retweet': 'bool'}
<dataframe_Summary>
{'user_followers': {'count': 80019.0, 'mean': 17945.86838125945, 'std': 245115.91942044927, 'min': 0.0, '25%': 62.0, '50%': 291.0, '75%': 1183.0, 'max': 16846417.0}, 'user_friends': {'count': 80019.0, 'mean': 1071.0196078431372, 'std': 6751.347848793119, 'min': 0.0, '25%': 118.0, '50%': 393.0, '75%': 986.0, 'max': 1211576.0}, 'user_favourites': {'count': 80019.0, 'mean': 17964.492408053087, 'std': 48503.582457692486, 'min': 0.0, '25%': 442.0, '50%': 3028.0, '75%': 14940.0, 'max': 1144792.0}}
<dataframe_info>
RangeIndex: 80019 entries, 0 to 80018
Data columns (total 12 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 user_name 80015 non-null object
1 user_location 56149 non-null object
2 user_description 74808 non-null object
3 user_created 80019 non-null object
4 user_followers 80019 non-null int64
5 user_friends 80019 non-null int64
6 user_favourites 80019 non-null int64
7 user_verified 80019 non-null bool
8 date 80019 non-null object
9 text 80019 non-null object
10 source 80019 non-null object
11 is_retweet 80019 non-null bool
dtypes: bool(2), int64(3), object(7)
memory usage: 6.3+ MB
<some_examples>
{'user_name': {'0': 'the _ûndër-ratèd niggáh👊🏾', '1': 'Best uncle on planet earth', '2': 'marcie', '3': 'YoMo.Mdp'}, 'user_location': {'0': None, '1': None, '2': None, '3': 'Any pronouns '}, 'user_description': {'0': "@ManUtd die hard❤️❤️💪🏿💪🏿\n\n\nYOLO\n\n\nJ'ai besoin de quelqu'un qui peut m'aimer au pire🤗\nNon, je ne suis pas parfait, mais j'espère que tu vois ma valeur🤞🏾", '1': None, '2': 'animal crossing. chicken nuggets. baby yoda. smol animals. tv shows. 🏳️\u200d🌈 pronouns: any', '3': "Where the heck is the karma\nI'm going on my school grave brb\n#Technosupport"}, 'user_created': {'0': '2019-09-06 19:24:57+00:00', '1': '2013-05-08 19:35:26+00:00', '2': '2009-02-21 10:31:30+00:00', '3': '2021-02-14 13:21:22+00:00'}, 'user_followers': {'0': 581, '1': 741, '2': 562, '3': 3}, 'user_friends': {'0': 1035, '1': 730, '2': 1197, '3': 277}, 'user_favourites': {'0': 8922, '1': 8432, '2': 62732, '3': 1341}, 'user_verified': {'0': False, '1': False, '2': False, '3': False}, 'date': {'0': '2021-10-06 12:05:38+00:00', '1': '2021-10-06 12:05:22+00:00', '2': '2021-10-06 12:05:22+00:00', '3': '2021-10-06 12:05:04+00:00'}, 'text': {'0': 'When life hits and the same time poverty strikes you\nGong Yoo : Lets play a game \n#SquidGame #Netflix https://t.co/Cx7ifmZ8cN', '1': 'That marble episode of #SquidGame ruined me. 😭😭😭', '2': '#Squidgame time', '3': "//Blood on 1st slide\nI'm joining the squidgame thing, I'm already dead by sugar honeycomb ofc\n\n#SquidGame… https://t.co/N4UGv9hxx8"}, 'source': {'0': 'Twitter for Android', '1': 'Twitter for Android', '2': 'Twitter Web App', '3': 'Twitter Web App'}, 'is_retweet': {'0': False, '1': False, '2': False, '3': False}}
<end_description>
| 2,117 | 0 | 3,661 | 2,117 |
129538785
|
<jupyter_start><jupyter_text>Data Science Salaries 2023 💸
Data Science Job Salaries Dataset contains 11 columns, each are:
1. work_year: The year the salary was paid.
2. experience_level: The experience level in the job during the year
3. employment_type: The type of employment for the role
4. job_title: The role worked in during the year.
5. salary: The total gross salary amount paid.
6. salary_currency: The currency of the salary paid as an ISO 4217 currency code.
7. salaryinusd: The salary in USD
8. employee_residence: Employee's primary country of residence in during the work year as an ISO 3166 country code.
9. remote_ratio: The overall amount of work done remotely
10. company_location: The country of the employer's main office or contracting branch
11. company_size: The median number of people that worked for the company during the year
Kaggle dataset identifier: data-science-salaries-2023
<jupyter_script>import pandas as pd
# Gt,Csk,Mi,Lsg,Rcb,Rr,Pbks,Kkr,Srh,Dc
# 1, 2, 3, 4, 5, 6, 7, 8, 9, 10
data = {
"position": [1, 2, 3, 4, 5, 6, 7, 8, 9, 10],
"points": [16, 15, 14, 13, 12, 12, 12, 10, 8, 8],
"mat": [12, 12, 12, 12, 12, 12, 12, 12, 12, 12],
}
df = pd.DataFrame(data)
print(df)
# # The data shows the points, position, and matches played for each IPL team.
import matplotlib.pyplot as plt
import seaborn as sns
sns.scatterplot(x="position", y="points", data=data)
plt.show()
# The graph shows position versus points for all IPL teams.
|
/fsx/loubna/kaggle_data/kaggle-code-data/data/0129/538/129538785.ipynb
|
data-science-salaries-2023
|
arnabchaki
|
[{"Id": 129538785, "ScriptId": 38479407, "ParentScriptVersionId": NaN, "ScriptLanguageId": 9, "AuthorUserId": 14864652, "CreationDate": "05/14/2023 16:32:50", "VersionNumber": 1.0, "Title": "IPLTeamsAlgorithm", "EvaluationDate": "05/14/2023", "IsChange": true, "TotalLines": 35.0, "LinesInsertedFromPrevious": 35.0, "LinesChangedFromPrevious": 0.0, "LinesUnchangedFromPrevious": 0.0, "LinesInsertedFromFork": NaN, "LinesDeletedFromFork": NaN, "LinesChangedFromFork": NaN, "LinesUnchangedFromFork": NaN, "TotalVotes": 2}]
|
[{"Id": 185702903, "KernelVersionId": 129538785, "SourceDatasetVersionId": 5392837}]
|
[{"Id": 5392837, "DatasetId": 3125926, "DatasourceVersionId": 5466555, "CreatorUserId": 7428813, "LicenseName": "Database: Open Database, Contents: Database Contents", "CreationDate": "04/13/2023 09:55:16", "VersionNumber": 1.0, "Title": "Data Science Salaries 2023 \ud83d\udcb8", "Slug": "data-science-salaries-2023", "Subtitle": "Salaries of Different Data Science Fields in the Data Science Domain", "Description": "Data Science Job Salaries Dataset contains 11 columns, each are:\n\n1. work_year: The year the salary was paid.\n2. experience_level: The experience level in the job during the year\n3. employment_type: The type of employment for the role\n4. job_title: The role worked in during the year.\n5. salary: The total gross salary amount paid.\n6. salary_currency: The currency of the salary paid as an ISO 4217 currency code.\n7. salaryinusd: The salary in USD\n8. employee_residence: Employee's primary country of residence in during the work year as an ISO 3166 country code.\n9. remote_ratio: The overall amount of work done remotely\n10. company_location: The country of the employer's main office or contracting branch\n11. company_size: The median number of people that worked for the company during the year", "VersionNotes": "Initial release", "TotalCompressedBytes": 0.0, "TotalUncompressedBytes": 0.0}]
|
[{"Id": 3125926, "CreatorUserId": 7428813, "OwnerUserId": 7428813.0, "OwnerOrganizationId": NaN, "CurrentDatasetVersionId": 5392837.0, "CurrentDatasourceVersionId": 5466555.0, "ForumId": 3189506, "Type": 2, "CreationDate": "04/13/2023 09:55:16", "LastActivityDate": "04/13/2023", "TotalViews": 234449, "TotalDownloads": 44330, "TotalVotes": 1244, "TotalKernels": 184}]
|
[{"Id": 7428813, "UserName": "arnabchaki", "DisplayName": "randomarnab", "RegisterDate": "05/16/2021", "PerformanceTier": 2}]
|
import pandas as pd
# Gt,Csk,Mi,Lsg,Rcb,Rr,Pbks,Kkr,Srh,Dc
# 1, 2, 3, 4, 5, 6, 7, 8, 9, 10
data = {
"position": [1, 2, 3, 4, 5, 6, 7, 8, 9, 10],
"points": [16, 15, 14, 13, 12, 12, 12, 10, 8, 8],
"mat": [12, 12, 12, 12, 12, 12, 12, 12, 12, 12],
}
df = pd.DataFrame(data)
print(df)
# # The data shows the points, position, and matches played for each IPL team.
import matplotlib.pyplot as plt
import seaborn as sns
sns.scatterplot(x="position", y="points", data=data)
plt.show()
# The graph shows position versus points for all IPL teams.
| false | 0 | 276 | 2 | 525 | 276 |
||
129538624
|
# **The aim of this notebook is to let someone like me (though I think there aren't many of us left, and finally there can only be one) get in one quick iteration, from reading in the data to making a valid submission. It may be difficult to tell, but I have made no efforts to get a good score. But this can be a platform to take better potshots at those `$$$` floating around**
# # Imports
import pandas as pd
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.compose import ColumnTransformer
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, f1_score, roc_auc_score
# # Read Data
raw_data = pd.read_csv("/kaggle/input/icr-identify-age-related-conditions/train.csv")
greek_data = pd.read_csv("/kaggle/input/icr-identify-age-related-conditions/greeks.csv")
hack_data = pd.read_csv("/kaggle/input/icr-identify-age-related-conditions/test.csv")
sample = pd.read_csv(
"/kaggle/input/icr-identify-age-related-conditions/sample_submission.csv"
)
for df in [raw_data, greek_data, hack_data, sample]:
print(f"SHAPE: {df.shape} \tNULLS: {(df.isna().sum().sum())}")
# # Basic EDA and prep
raw_data.head()
for col in raw_data.columns:
nulls = raw_data[col].isna().sum()
if nulls:
print(col, nulls)
# So BQ and EL have about 10% nulls. Hmmm...
# This won't do at all now, will it?
# Eliminate the nulls
raw_data = raw_data.fillna(raw_data.median(numeric_only=True))
raw_data = raw_data.ffill()
hack_data = hack_data.fillna(hack_data.median(numeric_only=True))
hack_data = hack_data.ffill()
print("NULLS remaining:", raw_data.isna().sum().sum())
print("NULLS remaining:", hack_data.isna().sum().sum())
# We have divergents in our midst, a couple of categorical columns hiding in plain sight.
# This will need to be fixed
for col in raw_data.columns:
if raw_data[col].dtypes == "O":
print(col, raw_data[col].nunique())
# First, I need to be sure that the Kaggle team has not given away an easy win
common_ids = set(raw_data["Id"]).intersection(set(hack_data["Id"]))
if common_ids:
print(f"Eureka !! {len(common_ids)} golden eggs")
else:
print("Drat, no training data id figures in test data :( :(")
raw_data.drop("Id", axis=1, inplace=True)
hack_data.drop("Id", axis=1, inplace=True)
# Next, let's tackle the EJ enigma
raw_data["EJ"] = raw_data["EJ"].replace({"A": 0, "B": 1})
hack_data["EJ"] = hack_data["EJ"].replace({"A": 0, "B": 1})
# Finally, we check the Class balance
raw_data["Class"].value_counts()
# People's exhibit no 1 - the jury will not disregard this when push comes to shove.
# Note to self - am I mixing metaphors here, or am I mixing something else altogether. Never was a good mixologist.
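# (Added aside.) If the imbalance above turns out to matter, one common mitigation is to
# weight the classes inside the model; the estimator below is illustrative only and is not
# the one fitted later in this notebook.
balanced_rf = RandomForestClassifier(n_estimators=100, class_weight="balanced", random_state=42)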
# Get the data ready for modelling
# And let's ignore the Greeks for now, shall we.
X = raw_data.drop("Class", axis=1)
y = raw_data["Class"]
X_train, X_test, y_train, y_test = train_test_split(
X, y, test_size=0.2, random_state=42, stratify=y
) # Hi there Douglas Adams
for thing in [X_train, X_test, y_train, y_test]:
print(thing.shape)
# # Model
numeric_transformer = Pipeline(steps=[("scaler", StandardScaler())])
# I mean, who doesn't love transformers
preprocessor = ColumnTransformer(transformers=[("num", numeric_transformer, X.columns)])
# And that made me think of a food processor, which made me hungry, which then becomes a different pipeline "thoughts of food --> hunger pangs"
# So I will need a quick byte here, hang on.
# Right, back again...
rf_model = RandomForestClassifier(n_estimators=100, random_state=42)
pipeline = Pipeline(steps=[("preprocessor", preprocessor), ("model", rf_model)])
pipeline.fit(X_train, y_train)
y_train_prob = pipeline.predict_proba(X_train)[:, 1]
y_test_prob = pipeline.predict_proba(X_test)[:, 1]
def get_scores(truth, prob):
    prediction = (prob >= 0.5).astype(int)
    accuracy = accuracy_score(truth, prediction)
    f1 = f1_score(truth, prediction)
    roc_auc = roc_auc_score(truth, prob)  # AUC is computed on the probabilities, not the thresholded labels
    return accuracy, f1, roc_auc
acc_train, f1_train, auc_train = get_scores(y_train, y_train_prob)
acc_test, f1_test, auc_test = get_scores(y_test, y_test_prob)
# print metrics for training and testing data
print("Training data:")
print("Accuracy:", acc_train)
print("F1-score:", f1_train)
print("AUC: ", auc_train)
print()
print("Testing data:")
print("Accuracy:", acc_test)
print("F1-score:", f1_test)
print("AUC: ", auc_test)
# make predictions on hack_data using the trained pipeline
hack_data_probabilities = pipeline.predict_proba(hack_data)
display(pd.DataFrame(hack_data_probabilities).describe().T)
# # Submission File Generation
sub_dict = {
"Id": sample["Id"],
"class_0": hack_data_probabilities[:, 0],
"class_1": hack_data_probabilities[:, 1],
}
sub = pd.DataFrame(sub_dict)
display(sub.describe().T)
sub.to_csv("submission.csv", index=False)
sub = pd.read_csv("submission.csv")
display(sub)
|
/fsx/loubna/kaggle_data/kaggle-code-data/data/0129/538/129538624.ipynb
| null | null |
[{"Id": 129538624, "ScriptId": 38494176, "ParentScriptVersionId": NaN, "ScriptLanguageId": 9, "AuthorUserId": 4088703, "CreationDate": "05/14/2023 16:31:17", "VersionNumber": 4.0, "Title": "01 quick data to submission", "EvaluationDate": "05/14/2023", "IsChange": true, "TotalLines": 137.0, "LinesInsertedFromPrevious": 16.0, "LinesChangedFromPrevious": 0.0, "LinesUnchangedFromPrevious": 121.0, "LinesInsertedFromFork": NaN, "LinesDeletedFromFork": NaN, "LinesChangedFromFork": NaN, "LinesUnchangedFromFork": NaN, "TotalVotes": 5}]
| null | null | null | null |
# **The aim of this notebook is to let someone like me (though I think there aren't many of us left, and finally there can only be one) get in one quick iteration, from reading in the data to making a valid submission. It may be difficult to tell, but I have made no efforts to get a good score. But this can be a platform to take better potshots at those `$$$` floating around**
# # Imports
import pandas as pd
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.compose import ColumnTransformer
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, f1_score, roc_auc_score
# # Read Data
raw_data = pd.read_csv("/kaggle/input/icr-identify-age-related-conditions/train.csv")
greek_data = pd.read_csv("/kaggle/input/icr-identify-age-related-conditions/greeks.csv")
hack_data = pd.read_csv("/kaggle/input/icr-identify-age-related-conditions/test.csv")
sample = pd.read_csv(
"/kaggle/input/icr-identify-age-related-conditions/sample_submission.csv"
)
for df in [raw_data, greek_data, hack_data, sample]:
print(f"SHAPE: {df.shape} \tNULLS: {(df.isna().sum().sum())}")
# # Basic EDA and prep
raw_data.head()
for col in raw_data.columns:
nulls = raw_data[col].isna().sum()
if nulls:
print(col, nulls)
# So BQ and EL have about 10% nulls. Hmmm...
# This won't do at all now, will it?
# Eliminate the nulls
raw_data = raw_data.fillna(raw_data.median(numeric_only=True))
raw_data = raw_data.ffill()
hack_data = hack_data.fillna(hack_data.median(numeric_only=True))
hack_data = hack_data.ffill()
print("NULLS remaining:", raw_data.isna().sum().sum())
print("NULLS remaining:", hack_data.isna().sum().sum())
# We have divergents in our midst, a couple of categorical columns hiding in plain sight.
# This will need to be fixed
for col in raw_data.columns:
if raw_data[col].dtypes == "O":
print(col, raw_data[col].nunique())
# First, I need to be sure that the Kaggle team has not given away an easy win
common_ids = set(raw_data["Id"]).intersection(set(hack_data["Id"]))
if common_ids:
print(f"Eureka !! {len(common_ids)} golden eggs")
else:
print("Drat, no training data id figures in test data :( :(")
raw_data.drop("Id", axis=1, inplace=True)
hack_data.drop("Id", axis=1, inplace=True)
# Next, let's tackle the EJ enigma
raw_data["EJ"] = raw_data["EJ"].replace({"A": 0, "B": 1})
hack_data["EJ"] = hack_data["EJ"].replace({"A": 0, "B": 1})
# Finally, we check the Class balance
raw_data["Class"].value_counts()
# People's exhibit no 1 - the jury will not disregard this when push comes to shove.
# Note to self - am I mixing metaphors here, or am I mixing something else altogether. Never was a good mixologist.
# Get the data ready for modelling
# And let's ignore the Greeks for now, shall we.
X = raw_data.drop("Class", axis=1)
y = raw_data["Class"]
X_train, X_test, y_train, y_test = train_test_split(
X, y, test_size=0.2, random_state=42, stratify=y
) # Hi there Douglas Adams
for thing in [X_train, X_test, y_train, y_test]:
print(thing.shape)
# # Model
numeric_transformer = Pipeline(steps=[("scaler", StandardScaler())])
# I mean, who doesn't love transformers
preprocessor = ColumnTransformer(transformers=[("num", numeric_transformer, X.columns)])
# And that made me think of a food processor, which made me hungry, which then becomes a different pipeline "thoughts of food --> hunger pangs"
# So I will need a quick byte here, hang on.
# Right, back again...
rf_model = RandomForestClassifier(n_estimators=100, random_state=42)
pipeline = Pipeline(steps=[("preprocessor", preprocessor), ("model", rf_model)])
pipeline.fit(X_train, y_train)
y_train_prob = pipeline.predict_proba(X_train)[:, 1]
y_test_prob = pipeline.predict_proba(X_test)[:, 1]
def get_scores(truth, prob):
    prediction = (prob >= 0.5).astype(int)
    accuracy = accuracy_score(truth, prediction)
    f1 = f1_score(truth, prediction)
    roc_auc = roc_auc_score(truth, prob)  # AUC is computed on the probabilities, not the thresholded labels
    return accuracy, f1, roc_auc
acc_train, f1_train, auc_train = get_scores(y_train, y_train_prob)
acc_test, f1_test, auc_test = get_scores(y_test, y_test_prob)
# print metrics for training and testing data
print("Training data:")
print("Accuracy:", acc_train)
print("F1-score:", f1_train)
print("AUC: ", auc_train)
print()
print("Testing data:")
print("Accuracy:", acc_test)
print("F1-score:", f1_test)
print("AUC: ", auc_test)
# make predictions on hack_data using the trained pipeline
hack_data_probabilities = pipeline.predict_proba(hack_data)
display(pd.DataFrame(hack_data_probabilities).describe().T)
# # Submission File Generation
sub_dict = {
"Id": sample["Id"],
"class_0": hack_data_probabilities[:, 0],
"class_1": hack_data_probabilities[:, 1],
}
sub = pd.DataFrame(sub_dict)
display(sub.describe().T)
sub.to_csv("submission.csv", index=False)
sub = pd.read_csv("submission.csv")
display(sub)
| false | 0 | 1,607 | 5 | 1,607 | 1,607 |
||
129538272
|
import numpy as np # linear algebra
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)
# Input data files are available in the read-only "../input/" directory
# For example, running this (by clicking run or pressing Shift+Enter) will list all files under the input directory
import os
for dirname, _, filenames in os.walk("/kaggle/input"):
for filename in filenames:
print(os.path.join(dirname, filename))
# You can write up to 20GB to the current directory (/kaggle/working/) that gets preserved as output when you create a version using "Save & Run All"
# You can also write temporary files to /kaggle/temp/, but they won't be saved outside of the current session
from tensorflow.keras.applications.vgg16 import (
VGG16,
preprocess_input,
decode_predictions,
)
from tensorflow.keras.preprocessing import image
import urllib.request
model = VGG16(weights="imagenet")
import matplotlib.pyplot as plt
import urllib.request
# Load and display the image
url = "https://images.livemint.com/img/2022/10/18/600x338/Nissan_Qashqai_1666088585096_1666088594612_1666088594612.jpg"
urllib.request.urlretrieve(url, "car.jpg")
img_path = "car.jpg"
# load_img resamples the image to the 224x224 input size VGG16 expects;
# np.resize would only reshape/tile the raw pixel buffer rather than resample the image
img = image.load_img(img_path, target_size=(224, 224))
plt.imshow(img)
plt.axis("off")
plt.show()
x = image.img_to_array(img)
x = preprocess_input(x)
x = np.expand_dims(x, axis=0)
preds = model.predict(x)
print("Predicted:", decode_predictions(preds, top=3)[0])
|
/fsx/loubna/kaggle_data/kaggle-code-data/data/0129/538/129538272.ipynb
| null | null |
[{"Id": 129538272, "ScriptId": 38517625, "ParentScriptVersionId": NaN, "ScriptLanguageId": 9, "AuthorUserId": 13643031, "CreationDate": "05/14/2023 16:27:48", "VersionNumber": 1.0, "Title": "notebookdaaea93838", "EvaluationDate": "05/14/2023", "IsChange": true, "TotalLines": 48.0, "LinesInsertedFromPrevious": 48.0, "LinesChangedFromPrevious": 0.0, "LinesUnchangedFromPrevious": 0.0, "LinesInsertedFromFork": NaN, "LinesDeletedFromFork": NaN, "LinesChangedFromFork": NaN, "LinesUnchangedFromFork": NaN, "TotalVotes": 0}]
| null | null | null | null |
import numpy as np # linear algebra
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)
# Input data files are available in the read-only "../input/" directory
# For example, running this (by clicking run or pressing Shift+Enter) will list all files under the input directory
import os
for dirname, _, filenames in os.walk("/kaggle/input"):
for filename in filenames:
print(os.path.join(dirname, filename))
# You can write up to 20GB to the current directory (/kaggle/working/) that gets preserved as output when you create a version using "Save & Run All"
# You can also write temporary files to /kaggle/temp/, but they won't be saved outside of the current session
from tensorflow.keras.applications.vgg16 import (
VGG16,
preprocess_input,
decode_predictions,
)
from tensorflow.keras.preprocessing import image
import urllib.request
model = VGG16(weights="imagenet")
import matplotlib.pyplot as plt
import urllib.request
# Load and display the image
url = "https://images.livemint.com/img/2022/10/18/600x338/Nissan_Qashqai_1666088585096_1666088594612_1666088594612.jpg"
urllib.request.urlretrieve(url, "car.jpg")
img_path = "car.jpg"
# load_img resamples the image to the 224x224 input size VGG16 expects;
# np.resize would only reshape/tile the raw pixel buffer rather than resample the image
img = image.load_img(img_path, target_size=(224, 224))
plt.imshow(img)
plt.axis("off")
plt.show()
x = image.img_to_array(img)
x = preprocess_input(x)
x = np.expand_dims(x, axis=0)
preds = model.predict(x)
print("Predicted:", decode_predictions(preds, top=3)[0])
| false | 0 | 498 | 0 | 498 | 498 |
||
129962792
|
from simplet5 import SimpleT5
from sklearn.model_selection import train_test_split
from transformers import T5Tokenizer, T5ForConditionalGeneration
import pandas as pd
file = "/kaggle/input/multipledata/LatestData.xlsx"
df1 = pd.read_excel(file)
df1
df1.dropna(inplace=True)
df1[["ABSTRACT", "TERM"]] = df1[["ABSTRACT", "TERM"]].applymap(
lambda x: x.strip() if isinstance(x, str) else x
)
# Group by 'ABSTRACT' and aggregate the 'TERM' values into a list
df = df1.groupby("ABSTRACT")["TERM"].apply(list).reset_index()
df
df = df.rename(columns={"ABSTRACT": "source_text"}).rename(
columns={"TERM": "target_text"}
)
df["target_text"] = df["target_text"].astype(str).str.replace(r"\[|\]", "")
# df['target_text'] = df['target_text'].str.replace(r'\[|\]', '', regex=True).astype('str')
df = df[["source_text", "target_text"]]
df
train_df, test_df = train_test_split(df, test_size=0.2)  # 80-20 % split
train_df.shape, test_df.shape
# Initialize and fine-tune the T5 model for key-term extraction
model = SimpleT5()
model.from_pretrained(model_type="t5", model_name="t5-base")
# First Working Model configuration
model.train(
train_df=train_df,
eval_df=test_df,
source_max_token_len=512,
batch_size=1,
max_epochs=1,
use_gpu=True,
)
from simplet5 import SimpleT5
import torch
from transformers import T5Tokenizer, T5ForConditionalGeneration
# Case_2
model.load_model("t5", "outputs/simplet5-epoch-3-train-loss-0.5889-val-loss-1.6707")
text = """
"""
for i in range(30):
print(model.predict(text))
|
/fsx/loubna/kaggle_data/kaggle-code-data/data/0129/962/129962792.ipynb
| null | null |
[{"Id": 129962792, "ScriptId": 38654423, "ParentScriptVersionId": NaN, "ScriptLanguageId": 9, "AuthorUserId": 15151314, "CreationDate": "05/17/2023 18:35:17", "VersionNumber": 1.0, "Title": "KeyTermExtraction", "EvaluationDate": "05/17/2023", "IsChange": true, "TotalLines": 59.0, "LinesInsertedFromPrevious": 59.0, "LinesChangedFromPrevious": 0.0, "LinesUnchangedFromPrevious": 0.0, "LinesInsertedFromFork": NaN, "LinesDeletedFromFork": NaN, "LinesChangedFromFork": NaN, "LinesUnchangedFromFork": NaN, "TotalVotes": 0}]
| null | null | null | null |
from simplet5 import SimpleT5
from sklearn.model_selection import train_test_split
from transformers import T5Tokenizer, T5ForConditionalGeneration
import pandas as pd
file = "/kaggle/input/multipledata/LatestData.xlsx"
df1 = pd.read_excel(file)
df1
df1.dropna(inplace=True)
df1[["ABSTRACT", "TERM"]] = df1[["ABSTRACT", "TERM"]].applymap(
lambda x: x.strip() if isinstance(x, str) else x
)
# Group by 'ABSTRACT' and aggregate the 'TERM' values into a list
df = df1.groupby("ABSTRACT")["TERM"].apply(list).reset_index()
df
df = df.rename(columns={"ABSTRACT": "source_text"}).rename(
columns={"TERM": "target_text"}
)
df["target_text"] = df["target_text"].astype(str).str.replace(r"\[|\]", "")
# df['target_text'] = df['target_text'].str.replace(r'\[|\]', '', regex=True).astype('str')
df = df[["source_text", "target_text"]]
df
train_df, test_df = train_test_split(df, test_size=0.2)  # 80-20 % split
train_df.shape, test_df.shape
# Initialize and fine-tune the T5 model for key-term extraction
model = SimpleT5()
model.from_pretrained(model_type="t5", model_name="t5-base")
# First Working Model configuration
model.train(
train_df=train_df,
eval_df=test_df,
source_max_token_len=512,
batch_size=1,
max_epochs=1,
use_gpu=True,
)
from simplet5 import SimpleT5
import torch
from transformers import T5Tokenizer, T5ForConditionalGeneration
# Case_2
model.load_model("t5", "outputs/simplet5-epoch-3-train-loss-0.5889-val-loss-1.6707")
text = """
"""
for i in range(30):
print(model.predict(text))
| false | 0 | 535 | 0 | 535 | 535 |
||
129962025
|
# # Stable Diffusion
# Stable Diffusion is a text-to-image latent diffusion model created by the researchers and engineers from CompVis, Stability AI and LAION. It’s trained on 512x512 images from a subset of the [LAION-5B dataset](https://laion.ai/blog/laion-5b/).
# [Here is a list of some Stable Diffusion Prompts](https://prompthero.com/stable-diffusion-prompts)
import torch
from diffusers import StableDiffusionPipeline
model_id = "prompthero/midjourney-v4-diffusion"
pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16)
pipe = pipe.to("cuda")
prompt = "a photo of an astronaut riding horse on mars, highly detailed, raytracing, sharpfocus, smooth, photorealistic, 8k, moody, intricate, sharp focus, depth of field, f/1. 8, 85mm"
image = pipe(prompt).images[0]
image.save("astronaut_horse.png")
from IPython.display import Image
Image("astronaut_horse.png")
prompt = " magazine infographic of retrofuturism bodywear | primitive | vintage | intricate detail | digital art | digital painting | concept art | poster | award winning | max detail | 8k "
image = pipe(prompt).images[0]
image.save("future primitive.png")
from IPython.display import Image
Image("future primitive.png")
from datetime import datetime
from IPython.display import Image
now = datetime.now() # current date and time
time = now.strftime("%H:%M:%S")
prompt = "selfie of a group of smiling WWII soldiers| max detail | f1. 4 | 8k "
image = pipe(prompt).images[0]
filename = "stabilizer-" + time + ".png"
image.save(filename)
Image(filename)
|
/fsx/loubna/kaggle_data/kaggle-code-data/data/0129/962/129962025.ipynb
| null | null |
[{"Id": 129962025, "ScriptId": 38571784, "ParentScriptVersionId": NaN, "ScriptLanguageId": 9, "AuthorUserId": 1434476, "CreationDate": "05/17/2023 18:27:57", "VersionNumber": 4.0, "Title": "Stable Diffusion", "EvaluationDate": "05/17/2023", "IsChange": true, "TotalLines": 37.0, "LinesInsertedFromPrevious": 17.0, "LinesChangedFromPrevious": 0.0, "LinesUnchangedFromPrevious": 20.0, "LinesInsertedFromFork": 17.0, "LinesDeletedFromFork": 7.0, "LinesChangedFromFork": 0.0, "LinesUnchangedFromFork": 20.0, "TotalVotes": 0}]
| null | null | null | null |
# # Stable Diffusion
# Stable Diffusion is a text-to-image latent diffusion model created by the researchers and engineers from CompVis, Stability AI and LAION. It’s trained on 512x512 images from a subset of the [LAION-5B dataset](https://laion.ai/blog/laion-5b/).
# [Here is a list of some Stable Diffusion Prompts](https://prompthero.com/stable-diffusion-prompts)
import torch
from diffusers import StableDiffusionPipeline
model_id = "prompthero/midjourney-v4-diffusion"
pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16)
pipe = pipe.to("cuda")
prompt = "a photo of an astronaut riding horse on mars, highly detailed, raytracing, sharpfocus, smooth, photorealistic, 8k, moody, intricate, sharp focus, depth of field, f/1. 8, 85mm"
image = pipe(prompt).images[0]
image.save("astronaut_horse.png")
from IPython.display import Image
Image("astronaut_horse.png")
prompt = " magazine infographic of retrofuturism bodywear | primitive | vintage | intricate detail | digital art | digital painting | concept art | poster | award winning | max detail | 8k "
image = pipe(prompt).images[0]
image.save("future primitive.png")
from IPython.display import Image
Image("future primitive.png")
from datetime import datetime
from IPython.display import Image
now = datetime.now() # current date and time
time = now.strftime("%H:%M:%S")
prompt = "selfie of a group of smiling WWII soldiers| max detail | f1. 4 | 8k "
image = pipe(prompt).images[0]
filename = "stabilizer-" + time + ".png"
image.save(filename)
Image(filename)
| false | 0 | 503 | 0 | 503 | 503 |
||
129962551
|
<jupyter_start><jupyter_text>New York City Airbnb Open Data
###Context
Since 2008, guests and hosts have used Airbnb to expand on traveling possibilities and present more unique, personalized way of experiencing the world. This dataset describes the listing activity and metrics in NYC, NY for 2019.
###Content
This data file includes all needed information to find out more about hosts, geographical availability, necessary metrics to make predictions and draw conclusions.
###Acknowledgements
This public dataset is part of Airbnb, and the original source can be found on this [website](http://insideairbnb.com).
###Inspiration
- What can we learn about different hosts and areas?
- What can we learn from predictions? (ex: locations, prices, reviews, etc)
- Which hosts are the busiest and why?
- Is there any noticeable difference of traffic among different areas and what could be the reason for it?
Kaggle dataset identifier: new-york-city-airbnb-open-data
<jupyter_code>import pandas as pd
df = pd.read_csv('new-york-city-airbnb-open-data/AB_NYC_2019.csv')
df.info()
<jupyter_output><class 'pandas.core.frame.DataFrame'>
RangeIndex: 48895 entries, 0 to 48894
Data columns (total 16 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 id 48895 non-null int64
1 name 48879 non-null object
2 host_id 48895 non-null int64
3 host_name 48874 non-null object
4 neighbourhood_group 48895 non-null object
5 neighbourhood 48895 non-null object
6 latitude 48895 non-null float64
7 longitude 48895 non-null float64
8 room_type 48895 non-null object
9 price 48895 non-null int64
10 minimum_nights 48895 non-null int64
11 number_of_reviews 48895 non-null int64
12 last_review 38843 non-null object
13 reviews_per_month 38843 non-null float64
14 calculated_host_listings_count 48895 non-null int64
15 availability_365 48895 non-null int64
dtypes: float64(3), int64(7), object(6)
memory usage: 6.0+ MB
<jupyter_text>Examples:
{
"id": 2539,
"name": "Clean & quiet apt home by the park",
"host_id": 2787,
"host_name": "John",
"neighbourhood_group": "Brooklyn",
"neighbourhood": "Kensington",
"latitude": 40.64749,
"longitude": -73.97237,
"room_type": "Private room",
"price": 149,
"minimum_nights": 1,
"number_of_reviews": 9,
"last_review": "2018-10-19",
"reviews_per_month": 0.21,
"calculated_host_listings_count": 6,
"availability_365": 365
}
{
"id": 2595,
"name": "Skylit Midtown Castle",
"host_id": 2845,
"host_name": "Jennifer",
"neighbourhood_group": "Manhattan",
"neighbourhood": "Midtown",
"latitude": 40.75362,
"longitude": -73.98377,
"room_type": "Entire home/apt",
"price": 225,
"minimum_nights": 1,
"number_of_reviews": 45,
"last_review": "2019-05-21",
"reviews_per_month": 0.38,
"calculated_host_listings_count": 2,
"availability_365": 355
}
{
"id": 3647,
"name": "THE VILLAGE OF HARLEM....NEW YORK !",
"host_id": 4632,
"host_name": "Elisabeth",
"neighbourhood_group": "Manhattan",
"neighbourhood": "Harlem",
"latitude": 40.80902,
"longitude": -73.9419,
"room_type": "Private room",
"price": 150,
"minimum_nights": 3,
"number_of_reviews": 0,
"last_review": null,
"reviews_per_month": NaN,
"calculated_host_listings_count": 1,
"availability_365": 365
}
{
"id": 3831,
"name": "Cozy Entire Floor of Brownstone",
"host_id": 4869,
"host_name": "LisaRoxanne",
"neighbourhood_group": "Brooklyn",
"neighbourhood": "Clinton Hill",
"latitude": 40.68514,
"longitude": -73.95976,
"room_type": "Entire home/apt",
"price": 89,
"minimum_nights": 1,
"number_of_reviews": 270,
"last_review": "2019-07-05",
"reviews_per_month": 4.64,
"calculated_host_listings_count": 1,
"availability_365": 194
}
<jupyter_script>import numpy as np # linear algebra
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)
# Input data files are available in the read-only "../input/" directory
# For example, running this (by clicking run or pressing Shift+Enter) will list all files under the input directory
import os
for dirname, _, filenames in os.walk("/kaggle/input"):
for filename in filenames:
print(os.path.join(dirname, filename))
# You can write up to 20GB to the current directory (/kaggle/working/) that gets preserved as output when you create a version using "Save & Run All"
# You can also write temporary files to /kaggle/temp/, but they won't be saved outside of the current session
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
sns.set_style("whitegrid")
df0 = pd.read_csv("/kaggle/input/new-york-city-airbnb-open-data/AB_NYC_2019.csv")
df0.head()
df0.columns
# Choosing the variables that are going to be useful for the analysis.
df1 = df0[
["id", "neighbourhood_group", "neighbourhood", "room_type", "price", "last_review"]
]
df1.head()
df1.info()
# Looking for null values
df1.isnull().sum()
# Filling the gaps (I don't like to drop data).
df1.last_review.mode()
df1.fillna("2019-06-23", inplace=True)
df1.head()
df1.isnull().sum()
# No more null values, let's look for duplicated values
df1.duplicated().value_counts()
# No duplicate data, no null values, let's convert the data to the right type.
df1["last_review"] = pd.to_datetime(df1["last_review"])
df1.info()
# Everything looks good with the data, let's start with the univariate analysis
df1["neighbourhood_group"].value_counts().reset_index()
fig, ax = plt.subplots(figsize=(8, 6))
sns.countplot(data=df1, x="neighbourhood_group", palette="muted")
ax.set_title("Neighbourhood Group", size=16)
ax.set_xlabel("Group", size=15)
ax.set_ylabel("Count", size=15)
ax.set_ylim(0, 25000)
plt.xticks(rotation=30, size=13)
plt.show()
# Manhattan has the biggest number of Airbnb listings, so it is worth a closer analysis.
# There are so many neighbourhoods that a plot at neighbourhood level would not make any sense here.
fig, ax = plt.subplots(figsize=(8, 6))
sns.countplot(data=df1, x="room_type", palette="muted")
ax.set_title("Room Type", size=16)
ax.set_xlabel("Room Type", size=15)
ax.set_ylabel("Count", size=15)
ax.set_ylim(0, 28000)
plt.xticks(size=13)
plt.show()
sns.catplot(data=df1, x="price", kind="box")
plt.show()
df1.price.describe().round(2)
# As we can see, the data is widely spread and has outlier values, so it could be a better option to subset the data for the study.
# Let's subset the data between the 25% and 75% quantiles (69.00 and 175.00)
df_values = df1[(df1["price"] >= 69.00) & (df1["price"] <= 175.00)]
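# (Added sketch.) The hard-coded 69.00 / 175.00 bounds above are the 25% / 75% quantiles
# reported by df1.price.describe(); computing them programmatically keeps the filter in
# sync with the data. The variables below are illustrative and not reused later.
q1, q3 = df1["price"].quantile([0.25, 0.75])
df_values_iqr = df1[(df1["price"] >= q1) & (df1["price"] <= q3)]
print(q1, q3, df_values_iqr.shape)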
df_values.head()
df_values.price.describe().round(2).reset_index()
p = sns.catplot(data=df_values, x="price", kind="box")
plt.xlabel("Price", fontsize=15)
plt.show()
# So inside our last df 'df_values' let's scan again.
fig, ax = plt.subplots(figsize=(8, 6))
sns.countplot(data=df_values, x="neighbourhood_group", palette="muted")
ax.set_title("Neighbourhood Group", size=16)
ax.set_xlabel("Group", size=15)
ax.set_ylabel("Count", size=15)
ax.set_ylim(0, 13000)
plt.xticks(rotation=30, size=13)
plt.show()
# Manhattan is still the place with the most Airbnb listings, so it is a good idea to dive deeper into that area
manhattan = df_values[df_values["neighbourhood_group"] == "Manhattan"]
manhattan.head()
manhattan.shape
fig, ax = plt.subplots(figsize=(10, 8))
sns.countplot(data=manhattan, y=manhattan["neighbourhood"])
ax.set_title("Neighbourhood Group", size=16)
ax.set_xlabel("Group", size=15)
ax.set_ylabel("Count", size=15)
plt.xticks(rotation=30, size=13)
plt.show()
# Top 10 most expensive places in Manhattan
manhattan.sort_values(by="price", ascending=False).head(10)
# Top 10 cheapest places in Manhattan
manhattan.sort_values(by="price", ascending=False).tail(10)
|
/fsx/loubna/kaggle_data/kaggle-code-data/data/0129/962/129962551.ipynb
|
new-york-city-airbnb-open-data
|
dgomonov
|
[{"Id": 129962551, "ScriptId": 38658639, "ParentScriptVersionId": NaN, "ScriptLanguageId": 9, "AuthorUserId": 8193952, "CreationDate": "05/17/2023 18:32:52", "VersionNumber": 1.0, "Title": "New York City Airbnb", "EvaluationDate": "05/17/2023", "IsChange": true, "TotalLines": 150.0, "LinesInsertedFromPrevious": 150.0, "LinesChangedFromPrevious": 0.0, "LinesUnchangedFromPrevious": 0.0, "LinesInsertedFromFork": NaN, "LinesDeletedFromFork": NaN, "LinesChangedFromFork": NaN, "LinesUnchangedFromFork": NaN, "TotalVotes": 0}]
|
[{"Id": 186398868, "KernelVersionId": 129962551, "SourceDatasetVersionId": 611395}]
|
[{"Id": 611395, "DatasetId": 268833, "DatasourceVersionId": 630459, "CreatorUserId": 3023930, "LicenseName": "CC0: Public Domain", "CreationDate": "08/12/2019 16:24:45", "VersionNumber": 3.0, "Title": "New York City Airbnb Open Data", "Slug": "new-york-city-airbnb-open-data", "Subtitle": "Airbnb listings and metrics in NYC, NY, USA (2019)", "Description": "###Context\n\nSince 2008, guests and hosts have used Airbnb to expand on traveling possibilities and present more unique, personalized way of experiencing the world. This dataset describes the listing activity and metrics in NYC, NY for 2019.\n\n###Content\n\nThis data file includes all needed information to find out more about hosts, geographical availability, necessary metrics to make predictions and draw conclusions.\n\n###Acknowledgements\n\nThis public dataset is part of Airbnb, and the original source can be found on this [website](http://insideairbnb.com).\n\n###Inspiration\n\n- What can we learn about different hosts and areas?\n- What can we learn from predictions? (ex: locations, prices, reviews, etc)\n- Which hosts are the busiest and why?\n- Is there any noticeable difference of traffic among different areas and what could be the reason for it?", "VersionNotes": "v_3", "TotalCompressedBytes": 192340.0, "TotalUncompressedBytes": 2552732.0}]
|
[{"Id": 268833, "CreatorUserId": 3023930, "OwnerUserId": 3023930.0, "OwnerOrganizationId": NaN, "CurrentDatasetVersionId": 611395.0, "CurrentDatasourceVersionId": 630459.0, "ForumId": 280171, "Type": 2, "CreationDate": "07/18/2019 19:16:23", "LastActivityDate": "07/18/2019", "TotalViews": 1035218, "TotalDownloads": 154021, "TotalVotes": 2809, "TotalKernels": 731}]
|
[{"Id": 3023930, "UserName": "dgomonov", "DisplayName": "Dgomonov", "RegisterDate": "04/01/2019", "PerformanceTier": 1}]
|
import numpy as np # linear algebra
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)
# Input data files are available in the read-only "../input/" directory
# For example, running this (by clicking run or pressing Shift+Enter) will list all files under the input directory
import os
for dirname, _, filenames in os.walk("/kaggle/input"):
for filename in filenames:
print(os.path.join(dirname, filename))
# You can write up to 20GB to the current directory (/kaggle/working/) that gets preserved as output when you create a version using "Save & Run All"
# You can also write temporary files to /kaggle/temp/, but they won't be saved outside of the current session
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
sns.set_style("whitegrid")
df0 = pd.read_csv("/kaggle/input/new-york-city-airbnb-open-data/AB_NYC_2019.csv")
df0.head()
df0.columns
# Choosing the variables that are going to be useful for the analysis.
df1 = df0[
["id", "neighbourhood_group", "neighbourhood", "room_type", "price", "last_review"]
]
df1.head()
df1.info()
# Looking for null values
df1.isnull().sum()
# Filling the gaps (I don't like to drop data).
df1.last_review.mode()
df1.fillna("2019-06-23", inplace=True)
df1.head()
df1.isnull().sum()
# No more null values, let's look for duplicated values
df1.duplicated().value_counts()
# No duplicate data, no null values, let's convert the data to the right type.
df1["last_review"] = pd.to_datetime(df1["last_review"])
df1.info()
# Everything looks good with the data, let's start with the univariate analysis
df1["neighbourhood_group"].value_counts().reset_index()
fig, ax = plt.subplots(figsize=(8, 6))
sns.countplot(data=df1, x="neighbourhood_group", palette="muted")
ax.set_title("Neighbourhood Group", size=16)
ax.set_xlabel("Group", size=15)
ax.set_ylabel("Count", size=15)
ax.set_ylim(0, 25000)
plt.xticks(rotation=30, size=13)
plt.show()
# Manhattan has the biggest number of Airbnb listings, so it is worth a closer analysis.
# There are so many neighbourhoods that a plot at neighbourhood level would not make any sense here.
fig, ax = plt.subplots(figsize=(8, 6))
sns.countplot(data=df1, x="room_type", palette="muted")
ax.set_title("Room Type", size=16)
ax.set_xlabel("Room Type", size=15)
ax.set_ylabel("Count", size=15)
ax.set_ylim(0, 28000)
plt.xticks(size=13)
plt.show()
sns.catplot(data=df1, x="price", kind="box")
plt.show()
df1.price.describe().round(2)
# As we can see, the data is widely spread and has outlier values, so it could be a better option to subset the data for the study.
# Let's subset the data between the 25% and 75% quantiles (69.00 and 175.00)
df_values = df1[(df1["price"] >= 69.00) & (df1["price"] <= 175.00)]
df_values.head()
df_values.price.describe().round(2).reset_index()
p = sns.catplot(data=df_values, x="price", kind="box")
plt.xlabel("Price", fontsize=15)
plt.show()
# So inside our last df 'df_values' let's scan again.
fig, ax = plt.subplots(figsize=(8, 6))
sns.countplot(data=df_values, x="neighbourhood_group", palette="muted")
ax.set_title("Neighbourhood Group", size=16)
ax.set_xlabel("Group", size=15)
ax.set_ylabel("Count", size=15)
ax.set_ylim(0, 13000)
plt.xticks(rotation=30, size=13)
plt.show()
# Manhattan is still the place with the most Airbnb listings, so it is a good idea to dive deeper into that area
manhattan = df_values[df_values["neighbourhood_group"] == "Manhattan"]
manhattan.head()
manhattan.shape
fig, ax = plt.subplots(figsize=(10, 8))
sns.countplot(data=manhattan, y=manhattan["neighbourhood"])
ax.set_title("Neighbourhood Group", size=16)
ax.set_xlabel("Group", size=15)
ax.set_ylabel("Count", size=15)
plt.xticks(rotation=30, size=13)
plt.show()
# Top 10 expensive places in Manhatan
manhattan.sort_values(by="price", ascending=False).head(10)
# Top 10 cheapest places in Manhattan
manhattan.sort_values(by="price", ascending=False).tail(10)
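# A compact follow-up sketch (an assumed extra step, not part of the original analysis):
# median price per room type inside Manhattan, to complement the top-10 tables above.
print(manhattan.groupby("room_type")["price"].median().sort_values(ascending=False))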
|
[{"new-york-city-airbnb-open-data/AB_NYC_2019.csv": {"column_names": "[\"id\", \"name\", \"host_id\", \"host_name\", \"neighbourhood_group\", \"neighbourhood\", \"latitude\", \"longitude\", \"room_type\", \"price\", \"minimum_nights\", \"number_of_reviews\", \"last_review\", \"reviews_per_month\", \"calculated_host_listings_count\", \"availability_365\"]", "column_data_types": "{\"id\": \"int64\", \"name\": \"object\", \"host_id\": \"int64\", \"host_name\": \"object\", \"neighbourhood_group\": \"object\", \"neighbourhood\": \"object\", \"latitude\": \"float64\", \"longitude\": \"float64\", \"room_type\": \"object\", \"price\": \"int64\", \"minimum_nights\": \"int64\", \"number_of_reviews\": \"int64\", \"last_review\": \"object\", \"reviews_per_month\": \"float64\", \"calculated_host_listings_count\": \"int64\", \"availability_365\": \"int64\"}", "info": "<class 'pandas.core.frame.DataFrame'>\nRangeIndex: 48895 entries, 0 to 48894\nData columns (total 16 columns):\n # Column Non-Null Count Dtype \n--- ------ -------------- ----- \n 0 id 48895 non-null int64 \n 1 name 48879 non-null object \n 2 host_id 48895 non-null int64 \n 3 host_name 48874 non-null object \n 4 neighbourhood_group 48895 non-null object \n 5 neighbourhood 48895 non-null object \n 6 latitude 48895 non-null float64\n 7 longitude 48895 non-null float64\n 8 room_type 48895 non-null object \n 9 price 48895 non-null int64 \n 10 minimum_nights 48895 non-null int64 \n 11 number_of_reviews 48895 non-null int64 \n 12 last_review 38843 non-null object \n 13 reviews_per_month 38843 non-null float64\n 14 calculated_host_listings_count 48895 non-null int64 \n 15 availability_365 48895 non-null int64 \ndtypes: float64(3), int64(7), object(6)\nmemory usage: 6.0+ MB\n", "summary": "{\"id\": {\"count\": 48895.0, \"mean\": 19017143.236179568, \"std\": 10983108.385610096, \"min\": 2539.0, \"25%\": 9471945.0, \"50%\": 19677284.0, \"75%\": 29152178.5, \"max\": 36487245.0}, \"host_id\": {\"count\": 48895.0, \"mean\": 67620010.64661008, \"std\": 78610967.03266661, \"min\": 2438.0, \"25%\": 7822033.0, \"50%\": 30793816.0, \"75%\": 107434423.0, \"max\": 274321313.0}, \"latitude\": {\"count\": 48895.0, \"mean\": 40.72894888066264, \"std\": 0.054530078057371915, \"min\": 40.49979, \"25%\": 40.6901, \"50%\": 40.72307, \"75%\": 40.763115, \"max\": 40.91306}, \"longitude\": {\"count\": 48895.0, \"mean\": -73.95216961468454, \"std\": 0.04615673610637153, \"min\": -74.24442, \"25%\": -73.98307, \"50%\": -73.95568, \"75%\": -73.936275, \"max\": -73.71299}, \"price\": {\"count\": 48895.0, \"mean\": 152.7206871868289, \"std\": 240.15416974718758, \"min\": 0.0, \"25%\": 69.0, \"50%\": 106.0, \"75%\": 175.0, \"max\": 10000.0}, \"minimum_nights\": {\"count\": 48895.0, \"mean\": 7.029962163820431, \"std\": 20.51054953317987, \"min\": 1.0, \"25%\": 1.0, \"50%\": 3.0, \"75%\": 5.0, \"max\": 1250.0}, \"number_of_reviews\": {\"count\": 48895.0, \"mean\": 23.274465691788528, \"std\": 44.55058226668393, \"min\": 0.0, \"25%\": 1.0, \"50%\": 5.0, \"75%\": 24.0, \"max\": 629.0}, \"reviews_per_month\": {\"count\": 38843.0, \"mean\": 1.3732214298586618, \"std\": 1.6804419952744725, \"min\": 0.01, \"25%\": 0.19, \"50%\": 0.72, \"75%\": 2.02, \"max\": 58.5}, \"calculated_host_listings_count\": {\"count\": 48895.0, \"mean\": 7.143982002249719, \"std\": 32.95251884941993, \"min\": 1.0, \"25%\": 1.0, \"50%\": 1.0, \"75%\": 2.0, \"max\": 327.0}, \"availability_365\": {\"count\": 48895.0, \"mean\": 112.78132733408324, \"std\": 131.62228885171479, \"min\": 0.0, \"25%\": 0.0, 
\"50%\": 45.0, \"75%\": 227.0, \"max\": 365.0}}", "examples": "{\"id\":{\"0\":2539,\"1\":2595,\"2\":3647,\"3\":3831},\"name\":{\"0\":\"Clean & quiet apt home by the park\",\"1\":\"Skylit Midtown Castle\",\"2\":\"THE VILLAGE OF HARLEM....NEW YORK !\",\"3\":\"Cozy Entire Floor of Brownstone\"},\"host_id\":{\"0\":2787,\"1\":2845,\"2\":4632,\"3\":4869},\"host_name\":{\"0\":\"John\",\"1\":\"Jennifer\",\"2\":\"Elisabeth\",\"3\":\"LisaRoxanne\"},\"neighbourhood_group\":{\"0\":\"Brooklyn\",\"1\":\"Manhattan\",\"2\":\"Manhattan\",\"3\":\"Brooklyn\"},\"neighbourhood\":{\"0\":\"Kensington\",\"1\":\"Midtown\",\"2\":\"Harlem\",\"3\":\"Clinton Hill\"},\"latitude\":{\"0\":40.64749,\"1\":40.75362,\"2\":40.80902,\"3\":40.68514},\"longitude\":{\"0\":-73.97237,\"1\":-73.98377,\"2\":-73.9419,\"3\":-73.95976},\"room_type\":{\"0\":\"Private room\",\"1\":\"Entire home\\/apt\",\"2\":\"Private room\",\"3\":\"Entire home\\/apt\"},\"price\":{\"0\":149,\"1\":225,\"2\":150,\"3\":89},\"minimum_nights\":{\"0\":1,\"1\":1,\"2\":3,\"3\":1},\"number_of_reviews\":{\"0\":9,\"1\":45,\"2\":0,\"3\":270},\"last_review\":{\"0\":\"2018-10-19\",\"1\":\"2019-05-21\",\"2\":null,\"3\":\"2019-07-05\"},\"reviews_per_month\":{\"0\":0.21,\"1\":0.38,\"2\":null,\"3\":4.64},\"calculated_host_listings_count\":{\"0\":6,\"1\":2,\"2\":1,\"3\":1},\"availability_365\":{\"0\":365,\"1\":355,\"2\":365,\"3\":194}}"}}]
| true | 1 |
<start_data_description><data_path>new-york-city-airbnb-open-data/AB_NYC_2019.csv:
<column_names>
['id', 'name', 'host_id', 'host_name', 'neighbourhood_group', 'neighbourhood', 'latitude', 'longitude', 'room_type', 'price', 'minimum_nights', 'number_of_reviews', 'last_review', 'reviews_per_month', 'calculated_host_listings_count', 'availability_365']
<column_types>
{'id': 'int64', 'name': 'object', 'host_id': 'int64', 'host_name': 'object', 'neighbourhood_group': 'object', 'neighbourhood': 'object', 'latitude': 'float64', 'longitude': 'float64', 'room_type': 'object', 'price': 'int64', 'minimum_nights': 'int64', 'number_of_reviews': 'int64', 'last_review': 'object', 'reviews_per_month': 'float64', 'calculated_host_listings_count': 'int64', 'availability_365': 'int64'}
<dataframe_Summary>
{'id': {'count': 48895.0, 'mean': 19017143.236179568, 'std': 10983108.385610096, 'min': 2539.0, '25%': 9471945.0, '50%': 19677284.0, '75%': 29152178.5, 'max': 36487245.0}, 'host_id': {'count': 48895.0, 'mean': 67620010.64661008, 'std': 78610967.03266661, 'min': 2438.0, '25%': 7822033.0, '50%': 30793816.0, '75%': 107434423.0, 'max': 274321313.0}, 'latitude': {'count': 48895.0, 'mean': 40.72894888066264, 'std': 0.054530078057371915, 'min': 40.49979, '25%': 40.6901, '50%': 40.72307, '75%': 40.763115, 'max': 40.91306}, 'longitude': {'count': 48895.0, 'mean': -73.95216961468454, 'std': 0.04615673610637153, 'min': -74.24442, '25%': -73.98307, '50%': -73.95568, '75%': -73.936275, 'max': -73.71299}, 'price': {'count': 48895.0, 'mean': 152.7206871868289, 'std': 240.15416974718758, 'min': 0.0, '25%': 69.0, '50%': 106.0, '75%': 175.0, 'max': 10000.0}, 'minimum_nights': {'count': 48895.0, 'mean': 7.029962163820431, 'std': 20.51054953317987, 'min': 1.0, '25%': 1.0, '50%': 3.0, '75%': 5.0, 'max': 1250.0}, 'number_of_reviews': {'count': 48895.0, 'mean': 23.274465691788528, 'std': 44.55058226668393, 'min': 0.0, '25%': 1.0, '50%': 5.0, '75%': 24.0, 'max': 629.0}, 'reviews_per_month': {'count': 38843.0, 'mean': 1.3732214298586618, 'std': 1.6804419952744725, 'min': 0.01, '25%': 0.19, '50%': 0.72, '75%': 2.02, 'max': 58.5}, 'calculated_host_listings_count': {'count': 48895.0, 'mean': 7.143982002249719, 'std': 32.95251884941993, 'min': 1.0, '25%': 1.0, '50%': 1.0, '75%': 2.0, 'max': 327.0}, 'availability_365': {'count': 48895.0, 'mean': 112.78132733408324, 'std': 131.62228885171479, 'min': 0.0, '25%': 0.0, '50%': 45.0, '75%': 227.0, 'max': 365.0}}
<dataframe_info>
RangeIndex: 48895 entries, 0 to 48894
Data columns (total 16 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 id 48895 non-null int64
1 name 48879 non-null object
2 host_id 48895 non-null int64
3 host_name 48874 non-null object
4 neighbourhood_group 48895 non-null object
5 neighbourhood 48895 non-null object
6 latitude 48895 non-null float64
7 longitude 48895 non-null float64
8 room_type 48895 non-null object
9 price 48895 non-null int64
10 minimum_nights 48895 non-null int64
11 number_of_reviews 48895 non-null int64
12 last_review 38843 non-null object
13 reviews_per_month 38843 non-null float64
14 calculated_host_listings_count 48895 non-null int64
15 availability_365 48895 non-null int64
dtypes: float64(3), int64(7), object(6)
memory usage: 6.0+ MB
<some_examples>
{'id': {'0': 2539, '1': 2595, '2': 3647, '3': 3831}, 'name': {'0': 'Clean & quiet apt home by the park', '1': 'Skylit Midtown Castle', '2': 'THE VILLAGE OF HARLEM....NEW YORK !', '3': 'Cozy Entire Floor of Brownstone'}, 'host_id': {'0': 2787, '1': 2845, '2': 4632, '3': 4869}, 'host_name': {'0': 'John', '1': 'Jennifer', '2': 'Elisabeth', '3': 'LisaRoxanne'}, 'neighbourhood_group': {'0': 'Brooklyn', '1': 'Manhattan', '2': 'Manhattan', '3': 'Brooklyn'}, 'neighbourhood': {'0': 'Kensington', '1': 'Midtown', '2': 'Harlem', '3': 'Clinton Hill'}, 'latitude': {'0': 40.64749, '1': 40.75362, '2': 40.80902, '3': 40.68514}, 'longitude': {'0': -73.97237, '1': -73.98377, '2': -73.9419, '3': -73.95976}, 'room_type': {'0': 'Private room', '1': 'Entire home/apt', '2': 'Private room', '3': 'Entire home/apt'}, 'price': {'0': 149, '1': 225, '2': 150, '3': 89}, 'minimum_nights': {'0': 1, '1': 1, '2': 3, '3': 1}, 'number_of_reviews': {'0': 9, '1': 45, '2': 0, '3': 270}, 'last_review': {'0': '2018-10-19', '1': '2019-05-21', '2': None, '3': '2019-07-05'}, 'reviews_per_month': {'0': 0.21, '1': 0.38, '2': None, '3': 4.64}, 'calculated_host_listings_count': {'0': 6, '1': 2, '2': 1, '3': 1}, 'availability_365': {'0': 365, '1': 355, '2': 365, '3': 194}}
<end_description>
| 1,358 | 0 | 2,846 | 1,358 |
129962711
|
<jupyter_start><jupyter_text>Intel Image Classification
### Context
This is image data of Natural Scenes around the world.
### Content
This Data contains around 25k images of size 150x150 distributed under 6 categories.
{'buildings' -> 0,
'forest' -> 1,
'glacier' -> 2,
'mountain' -> 3,
'sea' -> 4,
'street' -> 5 }
The Train, Test and Prediction data are separated into individual zip files. There are around 14k images in Train, 3k in Test and 7k in Prediction.
This data was initially published on https://datahack.analyticsvidhya.com by Intel to host an Image Classification Challenge.
Kaggle dataset identifier: intel-image-classification
<jupyter_script># # Assignment-05 Convolutional Neural Networks
# ### Students:
# - Sharon Sarai Maygua Mendiola
# - Franklin Ruben Rosembluth Prado
# Utils to run notebook on Kaggle
import os
import cv2
import glob
import pickle
import matplotlib
import numpy as np
import pandas as pd
import imageio as im
import seaborn as sns
import tensorflow as tf
import matplotlib.image as mpimg
import matplotlib.pyplot as plt
from PIL import Image
from tensorflow import keras
from keras import models
from pickle import dump
from pickle import load
from tensorflow import keras
from tensorflow.keras.utils import to_categorical
from tensorflow.keras import regularizers
from tensorflow.keras.models import Sequential, Model
from tensorflow.keras.layers import (
Conv2D,
MaxPooling2D,
Flatten,
Dense,
Dropout,
Activation,
BatchNormalization,
)
from tensorflow.keras.preprocessing import image_dataset_from_directory
from tensorflow.keras.preprocessing.image import ImageDataGenerator
from tensorflow.keras.callbacks import ModelCheckpoint
from tensorflow.keras import layers
from tensorflow.keras.layers import LeakyReLU
from tensorflow.keras.losses import categorical_crossentropy
from tensorflow.keras.optimizers import Adam, RMSprop
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix
from sklearn.metrics import classification_report
from sklearn.utils import shuffle
from keras.preprocessing.image import ImageDataGenerator
from keras.callbacks import ModelCheckpoint
# from keras.preprocessing import image
import keras.utils as image
# load and save files with pickle
def save_pickle(file, file_name):
dump(file, open(file_name, "wb"))
print("Saved: %s" % file_name)
def load_pickle(file_name):
return load(open(file_name, "rb"))
# PATHS
# path to the folder containing the subfolders with the training images
trainpath = "/kaggle/input/intel-image-classification/seg_train/seg_train"
# path to the folder containing the subfolders with the testing images
testpath = "/kaggle/input/intel-image-classification/seg_test/seg_test"
predpath = "/kaggle/input/intel-image-classification/seg_pred/seg_pred"
# TensorFlow dataset creator from directory, with categorical (one-hot) labels.
# Not used for training - we label the images ourselves further below.
IMAGE_SIZE = (150, 150)  # defined here because image_dataset_from_directory needs it
class_names = ["buildings", "forest", "glacier", "mountain", "sea", "street"]  # needed for the plot titles below
train_ds = image_dataset_from_directory(
    trainpath, seed=123, image_size=IMAGE_SIZE, batch_size=64, label_mode="categorical"
)
test_ds = image_dataset_from_directory(
testpath, seed=123, image_size=IMAGE_SIZE, batch_size=64, label_mode="categorical"
)
print("Train class names:", train_ds.class_names)
print("Test class names:", test_ds.class_names)
plt.figure(figsize=(5, 5))
for images, labels in train_ds.take(1):
for i in range(9):
ax = plt.subplot(3, 3, i + 1)
plt.imshow(images[i].numpy().astype("uint8"))
label = tf.argmax(labels[i]).numpy()
plt.title(class_names[label])
plt.axis("off")
# # Labeling
# - This dataset needed some pre-processing.
# - The images were already grouped into categorized folders, but for training each image must be explicitly associated with its label, so every training and test image was labeled individually.
# - The validation images could not be processed in this way because they are not categorized.
# With this objective, the *labeling* function was created. It transforms the text labels into numeric labels and converts the lists holding the images and labels into NumPy arrays of type float32 and int32, respectively.
# Working with these types reduces memory usage, improves model performance, and matches the input types Keras expects.
# The images are also resized inside *labeling* to reduce their size; normalization is applied afterwards, once every image has its label (a sketch of that step is shown after the labeling cells below).
# Create a dictionary to change text labels into int numerical labels (Ordered alphabetically)
class_names = ["buildings", "forest", "glacier", "mountain", "sea", "street"]
class_labels = {class_name: i for i, class_name in enumerate(class_names)}
print(class_labels)
# Resize of images
IMAGE_SIZE = (150, 150)
# def for labeling
def labeling(folder_path, images, labels):
# loop through all subfolders in the folder_path
for label in os.listdir(folder_path):
# get the path to the subfolder
label_path = os.path.join(folder_path, label)
# convert label text to label number
label_number = class_labels[label]
# loop through all images in subfolder
for file_name in os.listdir(label_path):
            # load the image with Pillow
image = Image.open(os.path.join(label_path, file_name))
# resize image to desired size
image = image.resize(IMAGE_SIZE)
# convert the image to a Numpy array
image = np.array(image)
            # add the image to the images list
            images.append(image)
            # add the image's label to the labels list
labels.append(label_number)
# convert the images and labels list to numpy array
images = np.array(images, dtype="float32")
labels = np.array(labels, dtype="int32")
return images, labels
# # Data Visualization
# In this section you can see the results of the labeling.
# An image from the training set is plotted and its label is printed; both are consistent.
# Training labeling
# list to store the images and their labels
training_images = []
training_labels = []
x_train, y_train = labeling(trainpath, training_images, training_labels)
# Testing labeling
# list to store the images and their labels
testing_images = []
testing_labels = []
x_test, y_test = labeling(testpath, testing_images, testing_labels)
plt.imshow(training_images[5])
print(f"label: {training_labels[5]}, name: {class_names[training_labels[5]]}")
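# As noted in the Labeling section, normalization comes after labeling. Below is a minimal
# sketch of that step (an assumed follow-up with hypothetical variable names, not necessarily
# the exact pipeline used later): scale pixels to [0, 1] and one-hot encode the integer labels.
x_train_norm = x_train / 255.0
x_test_norm = x_test / 255.0
y_train_cat = to_categorical(y_train, num_classes=len(class_names))
y_test_cat = to_categorical(y_test, num_classes=len(class_names))
print(x_train_norm.shape, y_train_cat.shape, x_test_norm.shape, y_test_cat.shape)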
|
/fsx/loubna/kaggle_data/kaggle-code-data/data/0129/962/129962711.ipynb
|
intel-image-classification
|
puneet6060
|
[{"Id": 129962711, "ScriptId": 38658728, "ParentScriptVersionId": NaN, "ScriptLanguageId": 9, "AuthorUserId": 10470120, "CreationDate": "05/17/2023 18:34:31", "VersionNumber": 1.0, "Title": "Final-Assignment-05", "EvaluationDate": "05/17/2023", "IsChange": true, "TotalLines": 171.0, "LinesInsertedFromPrevious": 171.0, "LinesChangedFromPrevious": 0.0, "LinesUnchangedFromPrevious": 0.0, "LinesInsertedFromFork": NaN, "LinesDeletedFromFork": NaN, "LinesChangedFromFork": NaN, "LinesUnchangedFromFork": NaN, "TotalVotes": 0}]
|
[{"Id": 186399030, "KernelVersionId": 129962711, "SourceDatasetVersionId": 269359}]
|
[{"Id": 269359, "DatasetId": 111880, "DatasourceVersionId": 281586, "CreatorUserId": 2307235, "LicenseName": "Data files \u00a9 Original Authors", "CreationDate": "01/30/2019 09:22:58", "VersionNumber": 2.0, "Title": "Intel Image Classification", "Slug": "intel-image-classification", "Subtitle": "Image Scene Classification of Multiclass", "Description": "### Context\n\nThis is image data of Natural Scenes around the world. \n\n### Content\n\nThis Data contains around 25k images of size 150x150 distributed under 6 categories.\n{'buildings' -> 0, \n'forest' -> 1,\n'glacier' -> 2,\n'mountain' -> 3,\n'sea' -> 4,\n'street' -> 5 }\n\nThe Train, Test and Prediction data is separated in each zip files. There are around 14k images in Train, 3k in Test and 7k in Prediction.\nThis data was initially published on https://datahack.analyticsvidhya.com by Intel to host a Image classification Challenge.\n\n\n### Acknowledgements\n\nThanks to https://datahack.analyticsvidhya.com for the challenge and Intel for the Data\n\nPhoto by [Jan B\u00f6ttinger on Unsplash][1]\n\n### Inspiration\n\nWant to build powerful Neural network that can classify these images with more accuracy.\n\n\n [1]: https://unsplash.com/photos/27xFENkt-lc", "VersionNotes": "Added Prediction Images", "TotalCompressedBytes": 108365415.0, "TotalUncompressedBytes": 361713334.0}]
|
[{"Id": 111880, "CreatorUserId": 2307235, "OwnerUserId": 2307235.0, "OwnerOrganizationId": NaN, "CurrentDatasetVersionId": 269359.0, "CurrentDatasourceVersionId": 281586.0, "ForumId": 121691, "Type": 2, "CreationDate": "01/29/2019 10:37:42", "LastActivityDate": "01/29/2019", "TotalViews": 441103, "TotalDownloads": 83887, "TotalVotes": 1345, "TotalKernels": 815}]
|
[{"Id": 2307235, "UserName": "puneet6060", "DisplayName": "Puneet Bansal", "RegisterDate": "10/01/2018", "PerformanceTier": 0}]
|
# # Assignment-05 Convolutional Neural Networks
# ### Students:
# - Sharon Sarai Maygua Mendiola
# - Franklin Ruben Rosembluth Prado
# Utils to run notebook on Kaggle
import os
import cv2
import glob
import pickle
import matplotlib
import numpy as np
import pandas as pd
import imageio as im
import seaborn as sns
import tensorflow as tf
import matplotlib.image as mpimg
import matplotlib.pyplot as plt
from PIL import Image
from tensorflow import keras
from keras import models
from pickle import dump
from pickle import load
from tensorflow import keras
from tensorflow.keras.utils import to_categorical
from tensorflow.keras import regularizers
from tensorflow.keras.models import Sequential, Model
from tensorflow.keras.layers import (
Conv2D,
MaxPooling2D,
Flatten,
Dense,
Dropout,
Activation,
BatchNormalization,
)
from tensorflow.keras.preprocessing import image_dataset_from_directory
from tensorflow.keras.preprocessing.image import ImageDataGenerator
from tensorflow.keras.callbacks import ModelCheckpoint
from tensorflow.keras import layers
from tensorflow.keras.layers import LeakyReLU
from tensorflow.keras.losses import categorical_crossentropy
from tensorflow.keras.optimizers import Adam, RMSprop
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix
from sklearn.metrics import classification_report
from sklearn.utils import shuffle
from keras.preprocessing.image import ImageDataGenerator
from keras.callbacks import ModelCheckpoint
# from keras.preprocessing import image
import keras.utils as image
# load and save files with pickle
def save_pickle(file, file_name):
dump(file, open(file_name, "wb"))
print("Saved: %s" % file_name)
def load_pickle(file_name):
return load(open(file_name, "rb"))
# PATHS
# path to the folder containing the subfolders with the training images
trainpath = "/kaggle/input/intel-image-classification/seg_train/seg_train"
# path to the folder containing the subfolders with the testing images
testpath = "/kaggle/input/intel-image-classification/seg_test/seg_test"
predpath = "/kaggle/input/intel-image-classification/seg_pred/seg_pred"
# TensorFlow dataset creator from directory, with categorical (one-hot) labels.
# Not used for training - we label the images ourselves further below.
IMAGE_SIZE = (150, 150)  # defined here because image_dataset_from_directory needs it
class_names = ["buildings", "forest", "glacier", "mountain", "sea", "street"]  # needed for the plot titles below
train_ds = image_dataset_from_directory(
    trainpath, seed=123, image_size=IMAGE_SIZE, batch_size=64, label_mode="categorical"
)
test_ds = image_dataset_from_directory(
testpath, seed=123, image_size=IMAGE_SIZE, batch_size=64, label_mode="categorical"
)
print("Train class names:", train_ds.class_names)
print("Test class names:", test_ds.class_names)
plt.figure(figsize=(5, 5))
for images, labels in train_ds.take(1):
for i in range(9):
ax = plt.subplot(3, 3, i + 1)
plt.imshow(images[i].numpy().astype("uint8"))
label = tf.argmax(labels[i]).numpy()
plt.title(class_names[label])
plt.axis("off")
# # Labeling
# - This dataset needed some pre-processing.
# - The images were generally labeled, since they were in categorized folders. However, for training it is necessary that each image is associated with its label, so each of the training and test images was labeled.
# - The validation images could not be processed in this way because they were not categorized.
# With this objective, the *'def labeling'* function was created, which also transforms the text labels to numeric labels and converts the lists in which the images and labels had been stored, into numpy arrays of type (float32) and type (int32).
# This is because working with this type of data reduces the amount of storage memory, improves model performance, and because Keras needs its input data to be of this type.
# Also to reduce the amount of the images size, we resized all the images in labeling to normalize after concludes the labels of each image.
# Create a dictionary to change text labels into int numerical labels (Ordered alphabetically)
class_names = ["buildings", "forest", "glacier", "mountain", "sea", "street"]
class_labels = {class_name: i for i, class_name in enumerate(class_names)}
print(class_labels)
# Resize of images
IMAGE_SIZE = (150, 150)
# def for labeling
def labeling(folder_path, images, labels):
# loop through all subfolders in the folder_path
for label in os.listdir(folder_path):
# get the path to the subfolder
label_path = os.path.join(folder_path, label)
# convert label text to label number
label_number = class_labels[label]
# loop through all images in subfolder
for file_name in os.listdir(label_path):
# upload image using Pillow
image = Image.open(os.path.join(label_path, file_name))
# resize image to desired size
image = image.resize(IMAGE_SIZE)
# convert the image to a Numpy array
image = np.array(image)
# add image to testing_image list
images.append(image)
# add image label to testing_label list
labels.append(label_number)
# convert the images and labels list to numpy array
images = np.array(images, dtype="float32")
labels = np.array(labels, dtype="int32")
return images, labels
# # Data Visualization
# In this section you can see the results of the labeling.
# An image of the training set is plotted and its label is printed, both are consistent.
# Training labeling
# list to store the images and their labels
training_images = []
training_labels = []
x_train, y_train = labeling(trainpath, training_images, training_labels)
# Testing labeling
# list to store the images and their labels
testing_images = []
testing_labels = []
x_test, y_test = labeling(testpath, testing_images, testing_labels)
plt.imshow(training_images[5])
print(f"label: {training_labels[5]}, name: {class_names[training_labels[5]]}")
| false | 0 | 1,529 | 0 | 1,730 | 1,529 |
||
129915954
|
<jupyter_start><jupyter_text>Tehran-Municipality (شهرداری تهران)
**Reference**
This dataset is a real dataset published by Iran government on: http://cmpt.shafaf.tehran.ir/fa/
This dataset has been published just for educational purposes.
Raw data are one-year complaint data of the data warehouse from 2007/03/21 to 2008/03/19. This dataset is composed of 7 columns and 243809 rows of complaints from Tehran citizens.
**Aim**
Currently, many governments are highly promoting implementation of information to be more citizen-oriented. For effective citizen relationship management, it is important to recognize the needs of different citizen groups and to provide respective services for each group accordingly. In this regard, the application of data mining tools would be very useful to understand citizen's needs.
**GitHub**
https://github.com/Melanee-Melanee/Tehran-Municipality
Kaggle dataset identifier: tehran-municipality
<jupyter_script>import matplotlib.pyplot as plt
import seaborn as sns
import pandas as pd
import jalali_pandas
df = pd.read_excel("/kaggle/input/tehran-municipality/Tehran municipality 1397.xlsx")
# ### **preprocessing**
df.isnull().sum()
# **Change column names**
print("Columns before change:\n", df.columns.values)
df = df.rename(
columns={
"آخرین وضعیت پیام": "message_status",
"آدرس": "address",
"ناحیه": "area",
"منطقه": "zone",
"کد پیام": "message_code",
"موضوع پیام": "message_content",
"تاریخ ثبت پیام": "messages_date",
}
)
print("Columns after change:\n", df.columns.values)
df.head(5)
# **Parse the Jalali-formatted messages_date column into a datetime**
df["messages_date"] = df["messages_date"].astype("string")
try:
    df["messages_date"] = df["messages_date"].jalali.parse_jalali("%Y/%m/%d %H:%M")
except ValueError:
    print("Could not parse messages_date; the column is left as a string.")
# **Split date and time columns**
# Create a new DataFrame with the split columns.
# Cast to string first so the split works whether the Jalali parse above succeeded (datetime) or failed (text).
new_df = df["messages_date"].astype("string").str.split(" ", expand=True)
# Rename the columns
new_df.columns = ["message_date", "message_time"]
# Concatenate the new DataFrame with the original DataFrame and drop the original column
df = pd.concat([df, new_df], axis=1).drop("messages_date", axis=1)
df.head(4)
# **Check the distribution of the messages:**
# Count the number of occurrences of each label
label_counts = df["message_status"].value_counts()
# Plot a bar chart of the label counts
label_counts.plot(kind="bar")
plt.xlabel("Message status")
plt.ylabel("Count")
plt.title("Distribution of messages")
plt.show()
df.head()
df["message_status"].unique()
pd.crosstab(df.zone, df.message_status)
# **Plot the successful projects in every zone:**
import matplotlib.pyplot as plt
import seaborn as sns
# Filter for only 'Done' status messages
done = df[df["message_status"] == "انجام شد و تایید گردید"]
# Calculate value counts by zone
done_counts = done.zone.value_counts()
# Set plot style and size
sns.set(style="darkgrid")
plt.figure(figsize=(10, 6))
# Plot a bar chart
sns.barplot(x=done_counts.index, y=done_counts)
# Add titles and labels
plt.title("successful projects by Zone")
plt.xlabel("Zone")
plt.ylabel("Message Count")
# Show the plot
plt.show()
# **Plot the dissatisfied reports in every zone:**
# Filter for the "dissatisfied - carried out by the citizen" status only
relevant = df[df["message_status"] == "عدم رضایت - توسط شهروند انجام شد"]
# Calculate value counts
counts = relevant.zone.value_counts()
# Set plot style
sns.set(style="darkgrid")
plt.figure(figsize=(10, 6))
# Plot bar chart
sns.barplot(x=counts.index, y=counts)
# Add titles and labels
plt.title("Discontents by Zone")
plt.xlabel("Zone")
plt.ylabel("Count")
# Show plot
plt.show()
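# Raw counts depend on how many messages each zone generates overall. A quick sketch
# (a hypothetical follow-up, assuming the column names above) that normalizes by zone volume:
zone_totals = df["zone"].value_counts()
discontent_rate = (relevant["zone"].value_counts() / zone_totals).fillna(0)
print(discontent_rate.sort_values(ascending=False).round(3).head(10))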
|
/fsx/loubna/kaggle_data/kaggle-code-data/data/0129/915/129915954.ipynb
|
tehran-municipality
|
melaneemelanee
|
[{"Id": 129915954, "ScriptId": 38644674, "ParentScriptVersionId": NaN, "ScriptLanguageId": 9, "AuthorUserId": 6271464, "CreationDate": "05/17/2023 12:09:33", "VersionNumber": 1.0, "Title": "preprocessing_on_tehran_municipality", "EvaluationDate": "05/17/2023", "IsChange": true, "TotalLines": 117.0, "LinesInsertedFromPrevious": 117.0, "LinesChangedFromPrevious": 0.0, "LinesUnchangedFromPrevious": 0.0, "LinesInsertedFromFork": NaN, "LinesDeletedFromFork": NaN, "LinesChangedFromFork": NaN, "LinesUnchangedFromFork": NaN, "TotalVotes": 2}]
|
[{"Id": 186333863, "KernelVersionId": 129915954, "SourceDatasetVersionId": 5687978}]
|
[{"Id": 5687978, "DatasetId": 3270118, "DatasourceVersionId": 5763570, "CreatorUserId": 10641119, "LicenseName": "Database: Open Database, Contents: Database Contents", "CreationDate": "05/15/2023 07:01:39", "VersionNumber": 1.0, "Title": "Tehran-Municipality (\u0634\u0647\u0631\u062f\u0627\u0631\u06cc \u062a\u0647\u0631\u0627\u0646)", "Slug": "tehran-municipality", "Subtitle": "\u0634\u0647\u0631\u062f\u0627\u0631\u06cc \u062a\u0647\u0631\u0627\u0646 \u062f\u06cc\u062a\u0627\u0633\u062a", "Description": "**Reference**\n\nThis dataset is a real dataset published by Iran government on: http://cmpt.shafaf.tehran.ir/fa/\n\nThis dataset has been published just for educational purposes. \n\nRaw data are one-year complaint data of the data warehouse from 2007/03/21 to 2008/03/19. This dataset is composed of 7 columns and 243809 rows of complaints from Tehran citizens.\n\n**Aim**\n\nCurrently, many governments are highly promoting implementation of information to be more citizen-oriented. For effective citizen relationship management, it is important to recognize the needs of different citizen groups and to provide respective services for each group accordingly. In this regard, the application of data mining tools would be very useful to understand citizen's needs.\n\n**GitHub**\n\nhttps://github.com/Melanee-Melanee/Tehran-Municipality", "VersionNotes": "Initial release", "TotalCompressedBytes": 0.0, "TotalUncompressedBytes": 0.0}]
|
[{"Id": 3270118, "CreatorUserId": 10641119, "OwnerUserId": 10641119.0, "OwnerOrganizationId": NaN, "CurrentDatasetVersionId": 5687978.0, "CurrentDatasourceVersionId": 5763570.0, "ForumId": 3335748, "Type": 2, "CreationDate": "05/15/2023 07:01:39", "LastActivityDate": "05/15/2023", "TotalViews": 1872, "TotalDownloads": 220, "TotalVotes": 27, "TotalKernels": 4}]
|
[{"Id": 10641119, "UserName": "melaneemelanee", "DisplayName": "Melanee-Melanee", "RegisterDate": "05/25/2022", "PerformanceTier": 2}]
|
import matplotlib.pyplot as plt
import seaborn as sns
import pandas as pd
import jalali_pandas
df = pd.read_excel("/kaggle/input/tehran-municipality/Tehran municipality 1397.xlsx")
# ### **preprocessing**
df.isnull().sum()
# **Change Columns name**
print("Columns before change:\n", df.columns.values)
df = df.rename(
columns={
"آخرین وضعیت پیام": "message_status",
"آدرس": "address",
"ناحیه": "area",
"منطقه": "zone",
"کد پیام": "message_code",
"موضوع پیام": "message_content",
"تاریخ ثبت پیام": "messages_date",
}
)
print("Columns after change:\n", df.columns.values)
df.head(5)
# **Convert messages_date column to jalali format**
df["messages_date"] = df["messages_date"].astype("string")
try:
df["messages_date"] = df["messages_date"].jalali.parse_jalali("%Y/%m/%d %H:%M")
except ValueError:
print("Invalid Row.")
# **Split date and time columns**
# Create a new DataFrame with the split columns.
# Cast to string first so the split works whether the Jalali parse above succeeded (datetime) or failed (text).
new_df = df["messages_date"].astype("string").str.split(" ", expand=True)
# Rename the columns
new_df.columns = ["message_date", "message_time"]
# Concatenate the new DataFrame with the original DataFrame and drop the original column
df = pd.concat([df, new_df], axis=1).drop("messages_date", axis=1)
df.head(4)
# **Check the distribution of the messages:**
# Count the number of occurrences of each label
label_counts = df["message_status"].value_counts()
# Plot a bar chart of the label counts
label_counts.plot(kind="bar")
plt.xlabel("Message status")
plt.ylabel("Count")
plt.title("Distribution of messages")
plt.show()
df.head()
df["message_status"].unique()
pd.crosstab(df.zone, df.message_status)
# **Plot the successful projects in every zone:**
import matplotlib.pyplot as plt
import seaborn as sns
# Filter for only 'Done' status messages
done = df[df["message_status"] == "انجام شد و تایید گردید"]
# Calculate value counts by zone
done_counts = done.zone.value_counts()
# Set plot style and size
sns.set(style="darkgrid")
plt.figure(figsize=(10, 6))
# Plot a bar chart
sns.barplot(x=done_counts.index, y=done_counts)
# Add titles and labels
plt.title("successful projects by Zone")
plt.xlabel("Zone")
plt.ylabel("Message Count")
# Show the plot
plt.show()
# **Plot the most discontent in every zone:**
# Filter for "عدم" status only
relevant = df[df["message_status"] == "عدم رضایت - توسط شهروند انجام شد"]
# Calculate value counts
counts = relevant.zone.value_counts()
# Set plot style
sns.set(style="darkgrid")
plt.figure(figsize=(10, 6))
# Plot bar chart
sns.barplot(x=counts.index, y=counts)
# Add titles and labels
plt.title("Discontents by Zone")
plt.xlabel("Zone")
plt.ylabel("Count")
# Show plot
plt.show()
| false | 0 | 887 | 2 | 1,149 | 887 |
||
129915108
|
# # Playground Series S3E15
# %load ../initial_settings2.py
import os
import shutil
import subprocess
import sys
import warnings
from pathlib import Path
ON_KAGGLE = os.getenv("KAGGLE_KERNEL_RUN_TYPE") is not None
if ON_KAGGLE:
warnings.filterwarnings("ignore")
os.environ["TF_CPP_MIN_LOG_LEVEL"] = "3"
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import plotly.express as px
import plotly.graph_objects as go
import seaborn as sns
import tensorflow as tf
import tensorflow_datasets as tfds
# Sub-modules and so on.
from colorama import Fore, Style
from IPython.core.display import HTML
from IPython.display import display_html
from keras import layers
from plotly.subplots import make_subplots
from tensorflow import keras
K = keras.backend
# Colorama settings.
CLR = (Style.BRIGHT + Fore.BLACK) if ON_KAGGLE else (Style.BRIGHT + Fore.WHITE)
RED = Style.BRIGHT + Fore.RED
BLUE = Style.BRIGHT + Fore.BLUE
CYAN = Style.BRIGHT + Fore.CYAN
RESET = Style.RESET_ALL
# Colors
DF_CMAP = sns.light_palette("#8C92AC", as_cmap=True)
FONT_COLOR = "#141B4D"
BACKGROUND_COLOR = "#F6F5F5"
NOTEBOOK_PALETTE = {
"Coral": "#FF7F51",
"DarkNavy": "#2D3142",
"SlateBlue": "#8C92AC",
}
MY_RC = {
"axes.labelcolor": FONT_COLOR,
"axes.labelsize": 10,
"axes.labelpad": 15,
"axes.labelweight": "bold",
"axes.titlesize": 14,
"axes.titleweight": "bold",
"axes.titlepad": 15,
"xtick.labelsize": 10,
"xtick.color": FONT_COLOR,
"ytick.labelsize": 10,
"ytick.color": FONT_COLOR,
"figure.titlesize": 14,
"figure.titleweight": "bold",
"figure.facecolor": BACKGROUND_COLOR,
"figure.edgecolor": BACKGROUND_COLOR,
"figure.dpi": 72, # Locally Seaborn uses 72, meanwhile Kaggle 96.
"font.size": 10,
"font.family": "Serif",
"text.color": FONT_COLOR,
}
sns.set_theme(rc=MY_RC)
# Utility functions.
def download_dataset_from_kaggle(user, dataset, directory):
command = "kaggle datasets download -d "
filepath = directory / (dataset + ".zip")
if not filepath.is_file():
subprocess.run((command + user + "/" + dataset).split())
filepath.parent.mkdir(parents=True, exist_ok=True)
shutil.unpack_archive(dataset + ".zip", "data")
shutil.move(dataset + ".zip", "data")
def download_competition_from_kaggle(competition):
command = "kaggle competitions download -c "
filepath = Path("data/" + competition + ".zip")
if not filepath.is_file():
subprocess.run((command + competition).split())
Path("data").mkdir(parents=True, exist_ok=True)
shutil.unpack_archive(competition + ".zip", "data")
shutil.move(competition + ".zip", "data")
# Html `code` block highlight.
HTML(
"""
<style>
code {
background: rgba(42, 53, 125, 0.10) !important;
border-radius: 4px !important;
}
</style>
"""
)
#
# Competition Description 📜
# The dataset for this competition (both train and test) was generated from a deep learning model trained on the Predicting Critical Heat Flux dataset. Feature distributions are close to, but not exactly the same, as the original. Feel free to use the original dataset as part of this competition, both to explore differences as well as to see whether incorporating the original in training improves model performance.
# Task 🕵
# The objective is to impute the missing values of the feature x_e_out [-] (equilibrium quality).
# This Notebook Covers 📔
# Quick overview of the dataset.
# Relationships in numerical features, e.g. pair plots, correlation matrix, pivot tables.
# Kernel density estimation plots and probability plots.
# Relationships in categorical features, e.g. bar plots and pivot tables.
# Dataset projection with t-SNE.
#
#
# See More Here 📈
# Playground Series - Season 3, Episode 15
# # Quick Overview
# Notes 📜
# Let's get started with general information about the dataset.
#
competition = "playground-series-s3e15"
if not ON_KAGGLE:
download_competition_from_kaggle(competition)
data_path = "data/data.csv"
else:
data_path = f"/kaggle/input/{competition}/data.csv"
data = pd.read_csv(data_path, index_col="id")
# Features Description 📜
# The features description you see below comes from Dataset Features Explained posted by moth.
# author - Author.
# geometry - Geometry.
# pressure [MPa] - Pressure of the pressurized water reactor (boiling system) in MPa (kg/m·s²).
# mass_flux [kg/m2-s] - Amount of mass that passes through a given area per unit of time (kg/m2·s).
# x_e_out [-] - Equilibrium (or thermodynamic) quality. An adimensional factor.
# D_e [mm] - Channel equivalent (or hydraulic) diameter (mm). In simple words, it is just a concept that simplifies the analysis of flow in non-circular geometries by considering an equivalent circular channel with the same hydraulic resistance. It is a characteristic length scale used to describe the flow of fluid through a channel, duct, or pipe of non-circular cross-section.
# D_h [mm] - Channel heated diameter (mm).
# length [mm] - Heated length (mm).
# chf_exp [MW/m2] - Experimental critical heat flux. It is a regulatory limit for commercial pressurized water reactors (PWRs) worldwide. Its unit is MW/m2 or, in SI base units, kg/s³.
#
data.head()
data.info()
for feature in data.columns:
print(
(CLR + feature).ljust(30),
(RED + str(data[feature].isna().sum())).ljust(20),
(RED + f"{data[feature].isna().sum() / len(data):.1%}" + RESET).ljust(20),
)
categories_only = data.select_dtypes("object").columns
numeric_only = data.select_dtypes("number").columns
data_cp = data.copy()
data_cp["x_e_out_missing"] = data_cp["x_e_out [-]"].isna().astype(bool)
data["x_e_out_missing"] = data["x_e_out [-]"].isna().map({False: "False", True: "True"})
fig = px.pie(
data,
names="x_e_out_missing",
height=520,
width=840,
hole=0.65,
title="Imputation Target Overview - x_e_out [-]",
color_discrete_sequence=["#2D3142", "#FF7F51"],
)
fig.update_layout(
font_color=FONT_COLOR,
title_font_size=18,
plot_bgcolor=BACKGROUND_COLOR,
paper_bgcolor=BACKGROUND_COLOR,
showlegend=False,
)
fig.add_annotation(
dict(
x=0.5,
y=0.5,
align="center",
xref="paper",
yref="paper",
showarrow=False,
font_size=22,
text="x_e_out [-]<br>Missing Values",
)
)
fig.update_traces(
hovertemplate=None,
textposition="outside",
textinfo="percent+label",
textfont_size=16,
rotation=20,
marker_line_width=15,
marker_line_color=BACKGROUND_COLOR,
)
fig.show()
# Observations 📔
# The dataset is relatively small, made of $31643$ samples.
# There are nine attributes in total, including two categorical features: author and geometry. The remaining seven are numerical.
# Every feature needs imputation except chf_exp [MW/m2], where all values are available. The ratio of missing values usually oscillates between 14% and 17% of the dataset size, except for x_e_out [-], our main target, where it exceeds 32%.
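# A compact sketch to double-check the missing-value ratios quoted above:
print(data.isna().mean().mul(100).round(1).sort_values(ascending=False))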
# # Relationships in Numerical Features
# Notes 📜
# Let's look at numerical features and their relations with each other.
# We will start with a numerical summary and then create pair plots and a correlation matrix.
#
data.describe().T.rename(columns=str.title).style.background_gradient(DF_CMAP)
fig = px.scatter_matrix(
data,
dimensions=numeric_only,
color="x_e_out_missing",
color_discrete_sequence=["#2D3142", "#FF7F51"],
symbol="x_e_out_missing",
symbol_sequence=["diamond", "circle"],
opacity=0.2,
title="Numerical Features - Scatter Pair Plots",
width=840,
height=840,
)
fig.update_traces(
diagonal_visible=False,
showupperhalf=False,
marker_size=1,
)
fig.update_layout(
font_color=FONT_COLOR,
font_size=9,
title_font_size=18,
plot_bgcolor=BACKGROUND_COLOR,
paper_bgcolor=BACKGROUND_COLOR,
showlegend=True,
legend=dict(
orientation="h",
yanchor="bottom",
xanchor="right",
y=1,
x=1,
itemsizing="constant",
),
)
fig.show()
# Observations 📔
# Even now, we can observe that there is no special relationship between the missing values in x_e_out [-] and the other attributes, since the pair plots look similar in both scenarios.
# You can turn off a given group by clicking on the True or False in the legend. It's easy to notice then that groups overlap.
#
col_names_map = {
"mean": "X_e_out Missing Rate",
"sum": "Missing Values",
"count": "Total Values",
}
numeric_pivots = []
for feature in numeric_only.difference(["x_e_out [-]"]):
pivot = (
data_cp.pivot_table(
values="x_e_out_missing",
index=pd.cut(data_cp[feature], 5), # type: ignore
aggfunc=["mean", "sum", "count"],
margins=True,
margins_name="Total",
)
.rename(columns=col_names_map)
.droplevel(level=1, axis="columns")
.style.background_gradient(DF_CMAP) # type: ignore
.set_table_attributes("style='display:inline'")
)
numeric_pivots.append(pivot)
display_html(
numeric_pivots[0]._repr_html_() + numeric_pivots[1]._repr_html_(), raw=True
)
display_html(
numeric_pivots[2]._repr_html_() + numeric_pivots[3]._repr_html_(), raw=True
)
display_html(
numeric_pivots[4]._repr_html_() + numeric_pivots[5]._repr_html_(), raw=True
)
# Observations 📔
# The above pivot tables confirm our previous statement. The missing-value rate of the x_e_out [-] feature fluctuates around 33% in almost every bin, so the cut-off level of the other attributes makes no difference.
# We can tentatively claim that the missing values were chosen at random (a quick check is sketched below).
#
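# A quick sanity check of that impression - a sketch using chi-square tests of independence
# between each binned feature and the x_e_out missingness indicator (large p-values are
# consistent with the "missing at random" reading):
from scipy.stats import chi2_contingency

for feature in numeric_only.difference(["x_e_out [-]"]):
    contingency = pd.crosstab(pd.cut(data_cp[feature], 5), data_cp["x_e_out_missing"])
    contingency = contingency.loc[contingency.sum(axis=1) > 0]  # drop empty bins, if any
    _, p_value, _, _ = chi2_contingency(contingency)
    print(f"{feature:<22} p-value: {p_value:.3f}")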
color_map = [[0.0, "#2D3142"], [0.5, "#8C92AC"], [1.0, "#FF7F51"]]
corr = data.corr(numeric_only=True).round(2)
mask = np.triu(np.ones_like(corr, dtype=bool))
masked_corr = (
corr.mask(mask).dropna(axis="index", how="all").dropna(axis="columns", how="all")
)
heatmap = go.Heatmap(
z=masked_corr,
x=masked_corr.columns,
y=masked_corr.index,
text=masked_corr.fillna(""),
texttemplate="%{text}",
xgap=10,
ygap=10,
showscale=True,
colorscale=color_map,
colorbar_len=1.02,
hoverinfo="none",
)
fig = go.Figure(heatmap)
fig.update_layout(
font_color=FONT_COLOR,
title="Correlation Matrix - Lower Triangular",
title_font_size=18,
plot_bgcolor=BACKGROUND_COLOR,
paper_bgcolor=BACKGROUND_COLOR,
width=840,
height=840,
xaxis_showgrid=False,
yaxis_showgrid=False,
yaxis_autorange="reversed",
)
fig.show()
# Observations 📔
# At first glance, there are no particularly interesting linear relations.
# Nevertheless, we can distinguish several correlated attribute pairs: D_e [mm] vs pressure [MPa], D_h [mm] vs pressure [MPa], and D_h [mm] vs D_e [mm]. All of these pairs have a correlation coefficient oscillating around $0.5$ or $-0.5$.
# From our perspective, the most important feature is x_e_out [-], but it correlates only weakly with length [mm] and chf_exp [MW/m2]. Let's take a closer look at these relations with bubble plots.
#
no_na_data = data[["x_e_out [-]", "length [mm]", "chf_exp [MW/m2]"]].dropna()
fig = px.scatter(
no_na_data,
x="x_e_out [-]",
y="length [mm]",
size="length [mm]",
color="length [mm]",
color_continuous_scale=color_map,
title="x_e_out [-] vs length [mm]",
height=540,
width=840,
)
fig.update_layout(
font_color=FONT_COLOR,
title_font_size=18,
plot_bgcolor=BACKGROUND_COLOR,
paper_bgcolor=BACKGROUND_COLOR,
)
fig.show()
fig = px.scatter(
no_na_data,
x="x_e_out [-]",
y="chf_exp [MW/m2]",
size="chf_exp [MW/m2]",
color="chf_exp [MW/m2]",
color_continuous_scale=color_map,
title="x_e_out [-] vs chf_exp [MW/m2]",
height=540,
width=840,
)
fig.update_layout(
font_color=FONT_COLOR,
title_font_size=18,
plot_bgcolor=BACKGROUND_COLOR,
paper_bgcolor=BACKGROUND_COLOR,
)
fig.show()
# Observations 📔
# It's hard to speak of any clear linear dependency here.
# # Kernel Density Estimation & Probability Plots
# Notes 📜
# It's good to know what these distributions actually look like. Therefore, let's create KDE and cumulative KDE plots. Of course, we could do this automatically (a Seaborn sketch of that alternative follows this note), but I just wanted to have full control with Plotly.
# Then we will see probability plots. Such visualizations help to understand whether samples derive from the normal distribution (or some other) or not.
#
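# For reference, the "automatic" alternative mentioned above - a static Seaborn sketch:
fig_, axes_ = plt.subplots(3, 3, figsize=(12, 10))
for ax_, feat_ in zip(axes_.flat, numeric_only):
    sns.kdeplot(x=data[feat_].dropna(), ax=ax_, fill=True, color="#2D3142")
for ax_ in axes_.flat[len(numeric_only):]:
    ax_.remove()  # drop the unused axes
plt.tight_layout()
plt.show()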
from itertools import product
from scipy.stats import gaussian_kde
grid_size = 3
rows = cols = len(list(numeric_only)) // grid_size + 1
row = col = range(1, 4)
axes = list(product(row, col))
fig1 = make_subplots(
rows=rows,
cols=cols,
y_title="Probability Density",
horizontal_spacing=0.1,
vertical_spacing=0.1,
)
fig2 = make_subplots(
rows=rows,
cols=cols,
y_title="Probability Density",
horizontal_spacing=0.1,
vertical_spacing=0.1,
)
for (row, col), feature in zip(axes, numeric_only):
feature_kde = gaussian_kde(data[feature].dropna())
kde_range = np.linspace(
data[feature].min() - data[feature].max() // 10,
data[feature].max() + data[feature].max() // 10,
len(data) // 10,
)
kde_estimated = feature_kde.evaluate(kde_range)
kde_estimated_cumulative = np.cumsum(kde_estimated)
kde_estimated_cumulative /= kde_estimated_cumulative.max()
for fig, kde_data in zip((fig1, fig2), (kde_estimated, kde_estimated_cumulative)):
fig.add_scatter(
x=kde_range,
y=kde_data,
line=dict(dash="solid", color="#2D3142", width=2),
# fill="tozeroy",
name=feature,
showlegend=False,
row=row,
col=col,
)
fig.update_xaxes(title_text=feature, row=row, col=col)
title1 = "Numerical Features - Kernel Density Estimation"
title2 = "Numerical Features - Cumulative Kernel Density Estimation"
for fig, title in zip((fig1, fig2), (title1, title2)):
fig.update_layout(
font_color=FONT_COLOR,
title=title,
title_font_size=18,
plot_bgcolor=BACKGROUND_COLOR,
paper_bgcolor=BACKGROUND_COLOR,
width=840,
height=840,
)
fig.update_annotations(font_size=14)
fig1.show()
fig2.show()
# Observations 📔
# Well, probably x_e_out [-] and chf_exp [MW/m2] should fit relatively easily to the normal distribution. When we create probability plots, these attributes should have a high coefficient of determination.
#
from scipy.stats import probplot
fig = make_subplots(
rows=rows,
cols=cols,
y_title="Observed Values",
x_title="Theoretical Quantiles",
subplot_titles=numeric_only.to_list(),
horizontal_spacing=0.1,
vertical_spacing=0.1,
)
fig.update_annotations(font_size=14)
for (row, col), feature in zip(axes, numeric_only):
data_sampled = data[feature].dropna().sample(2000, random_state=42)
(osm, osr), (slope, intercept, R) = probplot(data_sampled, rvalue=True)
x_theory = np.array([osm[0], osm[-1]])
y_theory = intercept + slope * x_theory
R2 = f"R2={R * R:.3f}"
fig.add_scatter(x=osm, y=osr, mode="markers", row=row, col=col, name=feature)
fig.add_scatter(x=x_theory, y=y_theory, mode="lines", row=row, col=col)
fig.add_annotation(
x=-1.5, y=osr[-1] * 0.75, text=R2, showarrow=False, row=row, col=col
)
fig.update_layout(
font_color=FONT_COLOR,
title="Numerical Features - Probability Plots (Sampled within 2000 Observations)",
title_font_size=18,
plot_bgcolor=BACKGROUND_COLOR,
paper_bgcolor=BACKGROUND_COLOR,
showlegend=False,
width=840,
height=840,
)
fig.update_traces(
marker=dict(size=1, symbol="x-thin", line=dict(width=2, color="#2D3142")),
line_color="#FF7F51",
)
fig.show()
# Observations 📔
# Importantly, I sampled $2000$ observations from each distribution to create the probability plots. Otherwise the interactivity of Plotly suffers, and such a number of samples is enough to capture the general idea.
# Most distributions fit the normal one well, except D_h [mm]. Nevertheless, this can be improved with a log transformation (a quick sketch follows below).
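# A sketch of that suggestion: log-transform D_h [mm] and recompute the probability-plot fit
# (np.log1p is an assumed choice of transform, and the same 2000-sample convention is reused):
d_h_log = np.log1p(data["D_h [mm]"].dropna().sample(2000, random_state=42))
(_, _), (_, _, R_log) = probplot(d_h_log, rvalue=True)
print(f"R2 for log1p(D_h [mm]): {R_log ** 2:.3f}")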
# # Relationships in Categorical Features
# Notes 📜
# We know something about numerical features, so let's look at those two categorical ones.
#
fig = make_subplots(rows=1, cols=2, y_title="Count", horizontal_spacing=0.1)
fig.update_annotations(font_size=14)
for (row, col), feature in zip(((1, 1), (1, 2)), categories_only):
if row == 1 and col == 1:
showlegend = True
else:
showlegend = False
fig.add_histogram(
x=data.query("x_e_out_missing == 'True'")[feature],
name="True",
marker_color="#FF7F51",
showlegend=showlegend,
row=row,
col=col,
)
fig.add_histogram(
x=data.query("x_e_out_missing == 'False'")[feature],
name="False",
marker_color="#2D3142",
showlegend=showlegend,
row=row,
col=col,
)
fig.update_xaxes(
categoryorder="total ascending", title_text=feature, row=row, col=col
)
fig.update_layout(
font_color=FONT_COLOR,
title="Categorical Features Overview",
title_font_size=18,
plot_bgcolor=BACKGROUND_COLOR,
paper_bgcolor=BACKGROUND_COLOR,
bargap=0.2,
bargroupgap=0.1,
width=840,
height=540,
legend=dict(
orientation="h",
yanchor="bottom",
xanchor="right",
y=1.05,
x=1,
title="x_e_out_missing",
),
)
fig.show()
# Observations 📔
# Again, there is nothing that catches the eye. I thought there might be a large disproportion of missing values for some categories, but there is no such thing. Perhaps the pivot table says more?
#
category_pivot = (
data_cp.pivot_table(
values="x_e_out_missing",
index=categories_only.to_list(),
aggfunc=["mean", "sum", "count"],
margins=True,
margins_name="Total",
)
.rename(columns=col_names_map)
.droplevel(level=1, axis="columns")
.style.background_gradient(DF_CMAP) # type: ignore
.set_table_attributes("style='display:inline'")
)
category_pivot
# Observations 📔
# Unfortunately, we don't learn much more from it.
# # 3D Projection with t-SNE
# Notes 📜
# t-SNE is a great statistical method for visualizing high-dimensional data. We can use it to prepare a 3D projection of the dataset.
# To prepare such a visualization, we have to impute the missing values first. I use the most common strategies, i.e. the median for numerical features and the most frequent value for categorical ones. Additionally, I add a label indicating whether x_e_out [-] was missing or not.
#
from sklearn.preprocessing import OrdinalEncoder
from sklearn.preprocessing import StandardScaler
from sklearn.impute import SimpleImputer
from sklearn.compose import make_column_transformer
from sklearn.pipeline import make_pipeline
from sklearn.manifold import TSNE
casual_preprocess = make_column_transformer(
(
make_pipeline(SimpleImputer(strategy="median"), StandardScaler()),
numeric_only.to_list(),
),
(
make_pipeline(SimpleImputer(strategy="most_frequent"), OrdinalEncoder()),
categories_only.to_list(),
),
remainder="drop",
verbose_feature_names_out=False,
).set_output(transform="pandas")
labels = data["x_e_out_missing"].astype("category")
data_preprocessed = casual_preprocess.fit_transform(data)
data_preprocessed.head()
tsne = TSNE(n_components=3, random_state=42)
X_3d = tsne.fit_transform(data_preprocessed)
X_3d = pd.DataFrame(X_3d, columns=["dim1", "dim2", "dim3"], index=labels.index).join(
labels
)
X_3d.head()
fig = px.scatter_3d(
data_frame=X_3d,
x="dim1",
y="dim2",
z="dim3",
symbol="x_e_out_missing",
symbol_sequence=["diamond", "circle"],
color="x_e_out_missing",
color_discrete_sequence=["#2D3142", "#FF7F51"],
opacity=0.6,
height=840,
width=840,
title="Dataset - 3D projection with t-SNE<br>after Median & Most Frequent Imputation",
)
fig.update_layout(
font_color=FONT_COLOR,
title_font_size=18,
plot_bgcolor=BACKGROUND_COLOR,
paper_bgcolor=BACKGROUND_COLOR,
legend=dict(
orientation="h",
yanchor="bottom",
xanchor="right",
y=1.05,
x=1,
title="x_e_out_missing",
itemsizing="constant",
),
)
fig.update_traces(marker_size=1)
fig.show()
|
/fsx/loubna/kaggle_data/kaggle-code-data/data/0129/915/129915108.ipynb
| null | null |
[{"Id": 129915108, "ScriptId": 38612247, "ParentScriptVersionId": NaN, "ScriptLanguageId": 9, "AuthorUserId": 13251128, "CreationDate": "05/17/2023 12:02:22", "VersionNumber": 1.0, "Title": "PS-S3E15 - Visual EDA \ud83d\udd75", "EvaluationDate": "05/17/2023", "IsChange": true, "TotalLines": 731.0, "LinesInsertedFromPrevious": 731.0, "LinesChangedFromPrevious": 0.0, "LinesUnchangedFromPrevious": 0.0, "LinesInsertedFromFork": NaN, "LinesDeletedFromFork": NaN, "LinesChangedFromFork": NaN, "LinesUnchangedFromFork": NaN, "TotalVotes": 0}]
| null | null | null | null |
# # Playground Series S3E15
# %load ../initial_settings2.py
import os
import shutil
import subprocess
import sys
import warnings
from pathlib import Path
ON_KAGGLE = os.getenv("KAGGLE_KERNEL_RUN_TYPE") is not None
if ON_KAGGLE:
warnings.filterwarnings("ignore")
os.environ["TF_CPP_MIN_LOG_LEVEL"] = "3"
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import plotly.express as px
import plotly.graph_objects as go
import seaborn as sns
import tensorflow as tf
import tensorflow_datasets as tfds
# Sub-modules and so on.
from colorama import Fore, Style
from IPython.core.display import HTML
from IPython.display import display_html
from keras import layers
from plotly.subplots import make_subplots
from tensorflow import keras
K = keras.backend
# Colorama settings.
CLR = (Style.BRIGHT + Fore.BLACK) if ON_KAGGLE else (Style.BRIGHT + Fore.WHITE)
RED = Style.BRIGHT + Fore.RED
BLUE = Style.BRIGHT + Fore.BLUE
CYAN = Style.BRIGHT + Fore.CYAN
RESET = Style.RESET_ALL
# Colors
DF_CMAP = sns.light_palette("#8C92AC", as_cmap=True)
FONT_COLOR = "#141B4D"
BACKGROUND_COLOR = "#F6F5F5"
NOTEBOOK_PALETTE = {
"Coral": "#FF7F51",
"DarkNavy": "#2D3142",
"SlateBlue": "#8C92AC",
}
MY_RC = {
"axes.labelcolor": FONT_COLOR,
"axes.labelsize": 10,
"axes.labelpad": 15,
"axes.labelweight": "bold",
"axes.titlesize": 14,
"axes.titleweight": "bold",
"axes.titlepad": 15,
"xtick.labelsize": 10,
"xtick.color": FONT_COLOR,
"ytick.labelsize": 10,
"ytick.color": FONT_COLOR,
"figure.titlesize": 14,
"figure.titleweight": "bold",
"figure.facecolor": BACKGROUND_COLOR,
"figure.edgecolor": BACKGROUND_COLOR,
"figure.dpi": 72, # Locally Seaborn uses 72, meanwhile Kaggle 96.
"font.size": 10,
"font.family": "Serif",
"text.color": FONT_COLOR,
}
sns.set_theme(rc=MY_RC)
# Utility functions.
def download_dataset_from_kaggle(user, dataset, directory):
command = "kaggle datasets download -d "
filepath = directory / (dataset + ".zip")
if not filepath.is_file():
subprocess.run((command + user + "/" + dataset).split())
filepath.parent.mkdir(parents=True, exist_ok=True)
shutil.unpack_archive(dataset + ".zip", "data")
shutil.move(dataset + ".zip", "data")
def download_competition_from_kaggle(competition):
command = "kaggle competitions download -c "
filepath = Path("data/" + competition + ".zip")
if not filepath.is_file():
subprocess.run((command + competition).split())
Path("data").mkdir(parents=True, exist_ok=True)
shutil.unpack_archive(competition + ".zip", "data")
shutil.move(competition + ".zip", "data")
# Html `code` block highlight.
HTML(
"""
<style>
code {
background: rgba(42, 53, 125, 0.10) !important;
border-radius: 4px !important;
}
</style>
"""
)
#
# Competition Description 📜
# The dataset for this competition (both train and test) was generated from a deep learning model trained on the Predicting Critical Heat Flux dataset. Feature distributions are close to, but not exactly the same, as the original. Feel free to use the original dataset as part of this competition, both to explore differences as well as to see whether incorporating the original in training improves model performance.
# Task 🕵
# The objective is to impute the missing values of the feature x_e_out [-] (equilibrium quality).
# This Notebook Covers 📔
# Quick overview of the dataset.
# Relationships in numerical features, e.g. pair plots, correlation matrix, pivot tables.
# Kernel density estimation plots and probability plots.
# Relationships in categorial features, e.g. bar plots and pivot tables.
# Dataset projection with t-SNE.
#
#
# See More Here 📈
# Playground Series - Season 3, Episode 15
# # Quick Overview
# Notes 📜
# Let's get started with general information about the dataset.
#
competition = "playground-series-s3e15"
if not ON_KAGGLE:
download_competition_from_kaggle(competition)
data_path = "data/data.csv"
else:
data_path = f"/kaggle/input/{competition}/data.csv"
data = pd.read_csv(data_path, index_col="id")
# Features Description 📜
# The features description you see below comes from Dataset Features Explained posted by moth.
# author - Author.
# geometry - Geometry.
# pressure [MPa] - Pressure of the pressurized water reactor (boiling system) in MPa (kg/m·s²).
# mass_flux [kg/m2-s] - Amount of mass that passes through a given area per unit of time (kg/m2·s).
# x_e_out [-] - Equilibrium (or thermodynamic) quality. An adimensional factor.
# D_e [mm] - Channel equivalent (or hydraulic) diameter (mm). In simple words its just a concept that simplifies the analysis of flow in non-circular geometries by considering an equivalent circular channel with the same hydraulic resistance. It is a characteristic length scale used to describe the flow of fluid through a channel, duct, or pipe of non-circular cross-section.
# D_h [mm] - Channel heated diameter (mm).
# length [mm] - Heated length (mm).
# chf_exp [MW/m2] - Experimental critical heat flux. It is a regulatory limit for commercial pressurized water reactors (PWRs) worldwide. Its unit is MW/m2, or kg/s³ in SI base units.
#
data.head()
data.info()
for feature in data.columns:
print(
(CLR + feature).ljust(30),
(RED + str(data[feature].isna().sum())).ljust(20),
(RED + f"{data[feature].isna().sum() / len(data):.1%}" + RESET).ljust(20),
)
categories_only = data.select_dtypes("object").columns
numeric_only = data.select_dtypes("number").columns
data_cp = data.copy()
data_cp["x_e_out_missing"] = data_cp["x_e_out [-]"].isna().astype(bool)
data["x_e_out_missing"] = data["x_e_out [-]"].isna().map({False: "False", True: "True"})
fig = px.pie(
data,
names="x_e_out_missing",
height=520,
width=840,
hole=0.65,
title="Imputation Target Overview - x_e_out [-]",
color_discrete_sequence=["#2D3142", "#FF7F51"],
)
fig.update_layout(
font_color=FONT_COLOR,
title_font_size=18,
plot_bgcolor=BACKGROUND_COLOR,
paper_bgcolor=BACKGROUND_COLOR,
showlegend=False,
)
fig.add_annotation(
dict(
x=0.5,
y=0.5,
align="center",
xref="paper",
yref="paper",
showarrow=False,
font_size=22,
text="x_e_out [-]<br>Missing Values",
)
)
fig.update_traces(
hovertemplate=None,
textposition="outside",
textinfo="percent+label",
textfont_size=16,
rotation=20,
marker_line_width=15,
marker_line_color=BACKGROUND_COLOR,
)
fig.show()
# Observations 📔
# The dataset is relatively small, made of $31643$ samples.
# There are nine attributes in total, including two categorical features: author and geometry. The remaining seven are numerical.
# Every feature except chf_exp [MW/m2] needs imputation - all of its values are available. The share of missing values usually falls between 14% and 17% of the dataset size, except for x_e_out [-], our main target, where it exceeds 32%.
# # Relationships in Numerical Features
# Notes 📜
# Let's look at numerical features and their relations with each other.
# We will start with a numerical summary and then create pair plots and a correlation matrix.
#
data.describe().T.rename(columns=str.title).style.background_gradient(DF_CMAP)
fig = px.scatter_matrix(
data,
dimensions=numeric_only,
color="x_e_out_missing",
color_discrete_sequence=["#2D3142", "#FF7F51"],
symbol="x_e_out_missing",
symbol_sequence=["diamond", "circle"],
opacity=0.2,
title="Numerical Features - Scatter Pair Plots",
width=840,
height=840,
)
fig.update_traces(
diagonal_visible=False,
showupperhalf=False,
marker_size=1,
)
fig.update_layout(
font_color=FONT_COLOR,
font_size=9,
title_font_size=18,
plot_bgcolor=BACKGROUND_COLOR,
paper_bgcolor=BACKGROUND_COLOR,
showlegend=True,
legend=dict(
orientation="h",
yanchor="bottom",
xanchor="right",
y=1,
x=1,
itemsizing="constant",
),
)
fig.show()
# Observations 📔
# Even now, we can observe that there is no special relationship between the missing values in x_e_out [-] and the other attributes, since the pair plots look similar in both groups.
# You can hide a given group by clicking True or False in the legend; it is then easy to see that the groups overlap.
#
col_names_map = {
"mean": "X_e_out Missing Rate",
"sum": "Missing Values",
"count": "Total Values",
}
numeric_pivots = []
for feature in numeric_only.difference(["x_e_out [-]"]):
pivot = (
data_cp.pivot_table(
values="x_e_out_missing",
index=pd.cut(data_cp[feature], 5), # type: ignore
aggfunc=["mean", "sum", "count"],
margins=True,
margins_name="Total",
)
.rename(columns=col_names_map)
.droplevel(level=1, axis="columns")
.style.background_gradient(DF_CMAP) # type: ignore
.set_table_attributes("style='display:inline'")
)
numeric_pivots.append(pivot)
display_html(
numeric_pivots[0]._repr_html_() + numeric_pivots[1]._repr_html_(), raw=True
)
display_html(
numeric_pivots[2]._repr_html_() + numeric_pivots[3]._repr_html_(), raw=True
)
display_html(
numeric_pivots[4]._repr_html_() + numeric_pivots[5]._repr_html_(), raw=True
)
# Observations 📔
# The above pivot tables confirm our previous statement. The missing-value rate in the x_e_out [-] feature almost always fluctuates around 33%, regardless of which bin of the other attributes we look at.
# We can tentatively claim that the missing values were chosen at random.
#
color_map = [[0.0, "#2D3142"], [0.5, "#8C92AC"], [1.0, "#FF7F51"]]
corr = data.corr(numeric_only=True).round(2)
mask = np.triu(np.ones_like(corr, dtype=bool))
masked_corr = (
corr.mask(mask).dropna(axis="index", how="all").dropna(axis="columns", how="all")
)
heatmap = go.Heatmap(
z=masked_corr,
x=masked_corr.columns,
y=masked_corr.index,
text=masked_corr.fillna(""),
texttemplate="%{text}",
xgap=10,
ygap=10,
showscale=True,
colorscale=color_map,
colorbar_len=1.02,
hoverinfo="none",
)
fig = go.Figure(heatmap)
fig.update_layout(
font_color=FONT_COLOR,
title="Correlation Matrix - Lower Triangular",
title_font_size=18,
plot_bgcolor=BACKGROUND_COLOR,
paper_bgcolor=BACKGROUND_COLOR,
width=840,
height=840,
xaxis_showgrid=False,
yaxis_showgrid=False,
yaxis_autorange="reversed",
)
fig.show()
# Observations 📔
# At first glance, we don't have any strong linear relations.
# Nevertheless, we can distinguish several correlated attributes: D_e [mm] vs pressure [MPa], D_h [mm] vs pressure [MPa], and D_h [mm] vs D_e [mm]. All these pairs have a correlation coefficient oscillating around $0.5$ or $-0.5$.
# From our perspective, the most important feature is x_e_out [-], but it correlates only weakly with length [mm] and chf_exp [MW/m2]. Let's take a closer look at these relations with bubble plots.
#
no_na_data = data[["x_e_out [-]", "length [mm]", "chf_exp [MW/m2]"]].dropna()
fig = px.scatter(
no_na_data,
x="x_e_out [-]",
y="length [mm]",
size="length [mm]",
color="length [mm]",
color_continuous_scale=color_map,
title="x_e_out [-] vs length [mm]",
height=540,
width=840,
)
fig.update_layout(
font_color=FONT_COLOR,
title_font_size=18,
plot_bgcolor=BACKGROUND_COLOR,
paper_bgcolor=BACKGROUND_COLOR,
)
fig.show()
fig = px.scatter(
no_na_data,
x="x_e_out [-]",
y="chf_exp [MW/m2]",
size="chf_exp [MW/m2]",
color="chf_exp [MW/m2]",
color_continuous_scale=color_map,
title="x_e_out [-] vs chf_exp [MW/m2]",
height=540,
width=840,
)
fig.update_layout(
font_color=FONT_COLOR,
title_font_size=18,
plot_bgcolor=BACKGROUND_COLOR,
paper_bgcolor=BACKGROUND_COLOR,
)
fig.show()
# Observations 📔
# It's hard to speak of any linear dependency here.
# # Kernel Density Estimation & Probability Plots
# Notes 📜
# It's good to know what these distributions actually look like, so let's create KDE and cumulative KDE plots. Of course, we could do this automatically, but I wanted full control with Plotly.
# Then we will look at probability plots. Such visualizations help us understand whether the samples come from a normal distribution (or some other one) or not.
#
from itertools import product
from scipy.stats import gaussian_kde
grid_size = 3
rows = cols = len(list(numeric_only)) // grid_size + 1
row = col = range(1, 4)
axes = list(product(row, col))
fig1 = make_subplots(
rows=rows,
cols=cols,
y_title="Probability Density",
horizontal_spacing=0.1,
vertical_spacing=0.1,
)
fig2 = make_subplots(
rows=rows,
cols=cols,
y_title="Probability Density",
horizontal_spacing=0.1,
vertical_spacing=0.1,
)
for (row, col), feature in zip(axes, numeric_only):
feature_kde = gaussian_kde(data[feature].dropna())
kde_range = np.linspace(
data[feature].min() - data[feature].max() // 10,
data[feature].max() + data[feature].max() // 10,
len(data) // 10,
)
kde_estimated = feature_kde.evaluate(kde_range)
kde_estimated_cumulative = np.cumsum(kde_estimated)
kde_estimated_cumulative /= kde_estimated_cumulative.max()
for fig, kde_data in zip((fig1, fig2), (kde_estimated, kde_estimated_cumulative)):
fig.add_scatter(
x=kde_range,
y=kde_data,
line=dict(dash="solid", color="#2D3142", width=2),
# fill="tozeroy",
name=feature,
showlegend=False,
row=row,
col=col,
)
fig.update_xaxes(title_text=feature, row=row, col=col)
title1 = "Numerical Features - Kernel Density Estimation"
title2 = "Numerical Features - Cumulative Kernel Density Estimation"
for fig, title in zip((fig1, fig2), (title1, title2)):
fig.update_layout(
font_color=FONT_COLOR,
title=title,
title_font_size=18,
plot_bgcolor=BACKGROUND_COLOR,
paper_bgcolor=BACKGROUND_COLOR,
width=840,
height=840,
)
fig.update_annotations(font_size=14)
fig1.show()
fig2.show()
# Observations 📔
# Most likely, x_e_out [-] and chf_exp [MW/m2] will fit a normal distribution relatively well. When we create probability plots, these attributes should show a high coefficient of determination.
#
from scipy.stats import probplot
fig = make_subplots(
rows=rows,
cols=cols,
y_title="Observed Values",
x_title="Theoretical Quantiles",
subplot_titles=numeric_only.to_list(),
horizontal_spacing=0.1,
vertical_spacing=0.1,
)
fig.update_annotations(font_size=14)
for (row, col), feature in zip(axes, numeric_only):
data_sampled = data[feature].dropna().sample(2000, random_state=42)
(osm, osr), (slope, intercept, R) = probplot(data_sampled, rvalue=True)
x_theory = np.array([osm[0], osm[-1]])
y_theory = intercept + slope * x_theory
R2 = f"R2={R * R:.3f}"
fig.add_scatter(x=osm, y=osr, mode="markers", row=row, col=col, name=feature)
fig.add_scatter(x=x_theory, y=y_theory, mode="lines", row=row, col=col)
fig.add_annotation(
x=-1.5, y=osr[-1] * 0.75, text=R2, showarrow=False, row=row, col=col
)
fig.update_layout(
font_color=FONT_COLOR,
title="Numerical Features - Probability Plots (Sampled within 2000 Observations)",
title_font_size=18,
plot_bgcolor=BACKGROUND_COLOR,
paper_bgcolor=BACKGROUND_COLOR,
showlegend=False,
width=840,
height=840,
)
fig.update_traces(
marker=dict(size=1, symbol="x-thin", line=dict(width=2, color="#2D3142")),
line_color="#FF7F51",
)
fig.show()
# Observations 📔
# Note that I sampled $2000$ observations from each distribution to create the probability plots. Otherwise, the interactive nature of Plotly suffers, but this many samples is enough to capture the general idea.
# So, most distributions fit the normal one well, except D_h [mm]. Nevertheless, this can be improved with a log transformation.
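# A quick check of that idea (a sketch): log-transform D_h [mm] and recompute the probability-plot R2
# on the same kind of 2000-observation sample used above.
log_dh = np.log1p(data["D_h [mm]"].dropna().sample(2000, random_state=42))
(osm, osr), (slope, intercept, R) = probplot(log_dh, rvalue=True)
print(f"R2 for D_h [mm] after log1p transform: {R * R:.3f}")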
# # Relationships in Categorical Features
# Notes 📜
# We know something about numerical features, so let's look at those two categorical ones.
#
fig = make_subplots(rows=1, cols=2, y_title="Count", horizontal_spacing=0.1)
fig.update_annotations(font_size=14)
for (row, col), feature in zip(((1, 1), (1, 2)), categories_only):
if row == 1 and col == 1:
showlegend = True
else:
showlegend = False
fig.add_histogram(
x=data.query("x_e_out_missing == 'True'")[feature],
name="True",
marker_color="#FF7F51",
showlegend=showlegend,
row=row,
col=col,
)
fig.add_histogram(
x=data.query("x_e_out_missing == 'False'")[feature],
name="False",
marker_color="#2D3142",
showlegend=showlegend,
row=row,
col=col,
)
fig.update_xaxes(
categoryorder="total ascending", title_text=feature, row=row, col=col
)
fig.update_layout(
font_color=FONT_COLOR,
title="Categorical Features Overview",
title_font_size=18,
plot_bgcolor=BACKGROUND_COLOR,
paper_bgcolor=BACKGROUND_COLOR,
bargap=0.2,
bargroupgap=0.1,
width=840,
height=540,
legend=dict(
orientation="h",
yanchor="bottom",
xanchor="right",
y=1.05,
x=1,
title="x_e_out_missing",
),
)
fig.show()
# Observations 📔
# Again, there is nothing that catches the eye. I thought there might be a large disproportion of missing values for some categories, but there is no such thing. Perhaps the pivot table says more?
#
category_pivot = (
data_cp.pivot_table(
values="x_e_out_missing",
index=categories_only.to_list(),
aggfunc=["mean", "sum", "count"],
margins=True,
margins_name="Total",
)
.rename(columns=col_names_map)
.droplevel(level=1, axis="columns")
.style.background_gradient(DF_CMAP) # type: ignore
.set_table_attributes("style='display:inline'")
)
category_pivot
# Observations 📔
# Unfortunately, we won't know more from this.
# # 3D Projection with t-SNE
# Notes 📜
# t-SNE is a great statistical method for visualizing high-dimensional data. We can prepare a 3D projection of the dataset.
# To prepare such a visualization, we have to impute missing values. I use the most common strategies, i.e. the median for numerical features and the most frequent value for categorical ones. Additionally, I will add a label indicating whether x_e_out [-] was missing or not.
#
from sklearn.preprocessing import OrdinalEncoder
from sklearn.preprocessing import StandardScaler
from sklearn.impute import SimpleImputer
from sklearn.compose import make_column_transformer
from sklearn.pipeline import make_pipeline
from sklearn.manifold import TSNE
casual_preprocess = make_column_transformer(
(
make_pipeline(SimpleImputer(strategy="median"), StandardScaler()),
numeric_only.to_list(),
),
(
make_pipeline(SimpleImputer(strategy="most_frequent"), OrdinalEncoder()),
categories_only.to_list(),
),
remainder="drop",
verbose_feature_names_out=False,
).set_output(transform="pandas")
labels = data["x_e_out_missing"].astype("category")
data_preprocessed = casual_preprocess.fit_transform(data)
data_preprocessed.head()
tsne = TSNE(n_components=3, random_state=42)
X_3d = tsne.fit_transform(data_preprocessed)
X_3d = pd.DataFrame(X_3d, columns=["dim1", "dim2", "dim3"], index=labels.index).join(
labels
)
X_3d.head()
fig = px.scatter_3d(
data_frame=X_3d,
x="dim1",
y="dim2",
z="dim3",
symbol="x_e_out_missing",
symbol_sequence=["diamond", "circle"],
color="x_e_out_missing",
color_discrete_sequence=["#2D3142", "#FF7F51"],
opacity=0.6,
height=840,
width=840,
title="Dataset - 3D projection with t-SNE<br>after Median & Most Frequent Imputation",
)
fig.update_layout(
font_color=FONT_COLOR,
title_font_size=18,
plot_bgcolor=BACKGROUND_COLOR,
paper_bgcolor=BACKGROUND_COLOR,
legend=dict(
orientation="h",
yanchor="bottom",
xanchor="right",
y=1.05,
x=1,
title="x_e_out_missing",
itemsizing="constant",
),
)
fig.update_traces(marker_size=1)
fig.show()
# ## Telecom churn data analysis using Logistic Regression, Decision Tree, Random Forest
# - With 21 predictor variables, we need to predict whether a customer will switch to another telecom company or not. In the telecom industry this is referred to as churning (switching to another company) vs. not churning (staying with the current company).
# # Steps :
# - 1.Missing value imputation
# - 2.Outlier treatment
# - 3.Dummy variable creation for categorical variables
# - 4.Test-train split of the data
# - 5.Standardisation of the scales of continuous variables
#
# - A logistic regression model is built in Python using the GLM() function from the statsmodels library.
# - This model contains all the variables, some of which have insignificant coefficients.
# - Hence, some of these variables are removed first based on an automated approach,
# - i.e. RFE, and then a manual approach based on VIFs and p-values.
# - We also look at the confusion matrix and accuracy, and see how accuracy is calculated for a logistic regression model.
# ------
# - Assuming we arbitrarily choose a cut-off (threshold) of 0.5 - if the predicted probability is greater than 0.5 we conclude that the customer has churned, and if it is less than or equal to 0.5 we conclude that the customer hasn't churned (switched) - how many of these customers would be classified as churned?
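# As a minimal illustration of this rule (with hypothetical probabilities - the model-based
# probabilities are computed later in this notebook):
probs = [0.1, 0.45, 0.51, 0.8]  # hypothetical predicted churn probabilities
[1 if p > 0.5 else 0 for p in probs]  # -> [0, 0, 1, 1]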
# ## Step - 1: Importing and Merging Data
# First import libraries
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
import warnings
warnings.filterwarnings("ignore")
# import customer data set
cust_data = pd.read_csv("/Users/sakshimunde/Downloads/customer_data.csv")
cust_data.head()
# from customer data we can see partner and dependents are binary
# seeing size of the customer data
cust_data.shape
# importing internet data
internet_data = pd.read_csv("/Users/sakshimunde/Downloads/internet_data.csv")
internet_data.head()
# online security,online backup,device protection,techsupport,streaming movies are binary in internet data
# seeing the size of internt_data
internet_data.shape
# importing churn_data
churn_data = pd.read_csv("/Users/sakshimunde/Downloads/churn_data.csv")
churn_data.head()
# see size of churn_data
churn_data.shape
# # Merging or combining all data files
# merging cust and internet data on customer id
df = pd.merge(cust_data, internet_data, how="inner", on="customerID")
df.head()
# merging df data (which has combination of customer and internet data) with churn data
telecom = pd.merge(df, churn_data, how="inner", on="customerID")
pd.set_option("display.max_columns", None)
# # Step 2 :Inspecting the dataframe
telecom.head()
telecom.info()
# - The TotalCharges column should be float, but it is read as object.
# - We can see that no null values are present.
telecom.describe()
print("\n1.Partner")
print(telecom.Partner.unique())
print("\n2.Dependents")
print(telecom.Dependents.unique())
print("\n3.OnlineSecurity")
print(telecom.OnlineSecurity.unique())
print("\n4.OnlineBackup")
print(telecom.OnlineBackup.value_counts())
print("\n1.DeviceProtection")
print(telecom.DeviceProtection.value_counts())
print("\n2.TechSupport")
print(telecom.TechSupport.value_counts())
print("\n3.StreamingTV")
print(telecom.StreamingTV.unique())
print("\n4.PhoneService")
print(telecom.PhoneService.unique())
print("\n5.PaperlessBilling")
print(telecom.PaperlessBilling.value_counts())
print("\n6.Churn")
print(telecom.Churn.value_counts())
# - The count of the level ‘No internet service’ is the same for all, i.e. 1526. Can you explain briefly why this has happened?
# This happens because the level ‘No internet service’ just tells you whether a user has internet service or not. Now because the number of users not having an internet service is the same, the count of this level in all of these variables will be the same. You can also check the value counts of the variable ‘InternetService’ and you’ll see that the output you’ll get is:
# Fiber Optic 3096
# DSL 2421
# No 1526
# Coincidence? No!
# This information is already contained in the variable ‘InternetService’ and hence, the count will be the same in all the variables with the level ‘No internet service’. This is actually also the reason we chose to drop this particular level.
# - --------
# - We can see that Partner, Dependents, PhoneService, PaperlessBilling & Churn are binary (yes/no) variables - let's convert them to 0 and 1.
# Step 3: Data Preparation
# --
# - Converting binary variables yes/no to 0/1
#
binary_var = ["Partner", "Dependents", "PhoneService", "PaperlessBilling", "Churn"]
telecom[binary_var] = telecom[binary_var].apply(lambda x: x.map({"Yes": 1, "No": 0}))
# let's see yes or no got converted to 0 and 1
telecom.head()
# - We can see all binary vars got converted to 1's and 0's
# - Now let's convert categorical vars with >2 levels to dummy vars
# first we will convert gender,InternetService,Contract,PaymentMethod categorical vars to dummy
dummy1 = pd.get_dummies(
telecom[["gender", "InternetService", "Contract", "PaymentMethod"]], drop_first=True
)
dummy1.head()
# let's now concat dummy vars dataframe with telecom dataframe
telecom = pd.concat([telecom, dummy1], axis=1)
telecom.head()
# Now creating dummy vars of rest all categoriacl vars
print(telecom.MultipleLines.unique())
pd.get_dummies(telecom["MultipleLines"]).head()
# - We should add the column name as a prefix so we know which original column each dummy belongs to.
# - We know that for n levels we need only n-1 dummy variables, but pandas get_dummies creates n of them, so we need to drop one level - we can choose it randomly, drop the first, or drop the one that is least useful (a compact alternative is sketched below).
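# A compact alternative (a sketch, not used below): create dummies for all of these multi-level
# columns in one call and drop the redundant 'No internet service' / 'No phone service' levels by name
multi_cols = [
    "MultipleLines",
    "OnlineSecurity",
    "OnlineBackup",
    "DeviceProtection",
    "TechSupport",
    "StreamingTV",
    "StreamingMovies",
]
dummies_all = pd.get_dummies(telecom[multi_cols])
dummies_all = dummies_all.loc[
    :, ~dummies_all.columns.str.contains("No internet service|No phone service")
]
dummies_all.head()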
# let's convert to dummies and give column names at beginning(prefix)
ML = pd.get_dummies(telecom["MultipleLines"], prefix="MultipleLines")
ML.head()
# let's drop MultipleLines_No phone service as it is not useful aswell
ML1 = ML.drop(["MultipleLines_No phone service"], axis=1)
ML1.head()
# - OnlineSecurity, OnlineBackup, DeviceProtection, TechSupport, StreamingTV and StreamingMovies are the remaining categorical variables that need to be converted to dummy variables.
# Converting OnlineSecurity to dummy var
OS = pd.get_dummies(telecom["OnlineSecurity"], prefix="OnlineSecurity")
OS.head()
# dropping OnlineSecurity No internet service column bcz there are n levels we should hv n-1 levels
OS1 = OS.drop(["OnlineSecurity_No internet service"], axis=1)
OS1.head()
# OnlineBackup
OB = pd.get_dummies(telecom["OnlineBackup"], prefix="OnlineBackup")
OB.head()
# drop OnlineBackup_No internet service
OB = OB.drop(["OnlineBackup_No internet service"], axis=1)
OB.head()
# DeviceProtection
DP = pd.get_dummies(telecom["DeviceProtection"], prefix="DeviceProtection")
DP.head()
# drop DeviceProtection_No internet service
DP = DP.drop(["DeviceProtection_No internet service"], axis=1)
DP.head()
# TechSupport
TS = pd.get_dummies(telecom["TechSupport"], prefix="TechSupport")
TS.head()
# drop TechSupport_No internet service
TS = TS.drop(["TechSupport_No internet service"], axis=1)
TS.head()
# StreamingTV
ST = pd.get_dummies(telecom["StreamingTV"], prefix="StreamingTV")
ST.head()
# drop StreamingTV_No internet service
ST = ST.drop(["StreamingTV_No internet service"], axis=1)
ST.head()
# StreamingMovies
SM = pd.get_dummies(telecom["StreamingMovies"], prefix="StreamingMovies")
SM.head()
# DROPPING STREAMING MOVIES NO INTERNET SERVICE BCZ FOR n LEVELS there should be n-1 LEVELS
SM = SM.drop(["StreamingMovies_No internet service"], axis=1)
SM.head()
# CONCATINATING ALL DUMMIES WITH TELECOM DATAFRAME
telecom = pd.concat([telecom, ML1], axis=1)
telecom = pd.concat([telecom, OS1], axis=1)
telecom = pd.concat([telecom, OB], axis=1)
telecom = pd.concat([telecom, DP], axis=1)
telecom = pd.concat([telecom, TS], axis=1)
telecom = pd.concat([telecom, ST], axis=1)
telecom = pd.concat([telecom, SM], axis=1)
telecom.head()
# ###### Drop Repeated vars
# - We have created dummy variables, so we can drop the original (now repeated) categorical columns.
# DROPPING REPEATED VARS
telecom = telecom.drop(
[
"gender",
"InternetService",
"Contract",
"PaymentMethod",
"MultipleLines",
"OnlineSecurity",
"OnlineBackup",
"DeviceProtection",
"TechSupport",
"StreamingTV",
"StreamingMovies",
],
axis=1,
)
telecom.head()
# Customer id is not useful column so let's drop it
telecom = telecom.drop(["customerID"], axis=1)
# There are blank spaces in the TotalCharges column, because of which it shows up as an object dtype; replace the blanks with 0 and cast to float
telecom["TotalCharges"] = telecom["TotalCharges"].str.replace(" ", "0")
telecom["TotalCharges"] = telecom["TotalCharges"].astype(float)
telecom["TotalCharges"].shape
# CHECKING WHETHER DATA TYPE CHANGED OR NOT
telecom["TotalCharges"].dtype
telecom.info()
telecom.head()
# ##### Checking for outliers
# - SeniorCitizen, tenure, MonthlyCharges and TotalCharges are the numerical columns, so let's check whether outliers are present in them.
numerical_val = telecom[["SeniorCitizen", "tenure", "MonthlyCharges", "TotalCharges"]]
numerical_val.describe(percentiles=[0.25, 0.50, 0.75, 0.90, 0.95, 0.99])
# - We can see there are no outliers - the percentile values increase gradually.
# - There is also no sudden jump after the 99th percentile.
telecom.isnull().sum()
# # Step 4 : Splitting data into train and test sets
# assigning all independent vars except churn and customer id to X axis
X = telecom.drop(["Churn"], axis=1)
X.head()
# assigning churn(target var) column to y axis
y = telecom["Churn"]
y.head()
from sklearn.model_selection import train_test_split
# splitting data into train and test
X_train, X_test, y_train, y_test = train_test_split(
X, y, train_size=0.7, test_size=0.30, random_state=100
)
print(X_train.shape)
print(y_train.shape)
print(X_test.shape)
print(y_test.shape)
# X var as 30 columns and y has only 1 var i.e., churn
# ## Step 5 : Feature Scaling
# - Scaling helps us in faster convergence of gradient descent.
# - The standard scaler centres each variable to mean 0 and unit standard deviation.
# - The formula for standardising a value in a dataset is given by:
# - (X − μ)/σ
# - Min-max scaling, by contrast, compresses values between a minimum of 0 and a maximum of 1.
# --------
# - 'fit_transform' on the train set but just 'transform' on the test set. Why do you think this is done ?
# - The 'fit_transform' command first fits the scaler, i.e. computes the mean and standard deviation of each variable on the train set, and then transforms the variables using the standardisation formula (X − μ)/σ given above.
# - Now, when you go ahead to the test set, you don't want the scaler to learn anything new.
# - If we applied fit to the test data, the test set would be scaled with its own mean and standard deviation, which would differ from those of the train set; we want the same mean and standard deviation to be used for both, which is why we don't fit on the test data.
# - You want to use the same centring and scaling that you obtained when you applied fit on the train dataset.
# - And this is why you don't apply 'fit' on the test data, just 'transform'.
#
from sklearn.preprocessing import StandardScaler
# creating an object of standard scaler as in sklearn we create object of a class
scaler = StandardScaler()
# fit and transform large values on same scale that other vars are
X_train[["tenure", "MonthlyCharges", "TotalCharges"]] = scaler.fit_transform(
X_train[["tenure", "MonthlyCharges", "TotalCharges"]]
)
X_train.head()
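# For completeness, the same fitted scaler should be applied to the test set using transform only
# (a sketch - the test set is not evaluated in this part of the notebook)
X_test[["tenure", "MonthlyCharges", "TotalCharges"]] = scaler.transform(
    X_test[["tenure", "MonthlyCharges", "TotalCharges"]]
)
X_test.head()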
#
# - The variables had these ranges before standardisation:
# - Tenure = 1 to 72
# - Monthly charges = 18.25 to 118.80
# - Total charges = 18.8 to 8685
#
# - After standardisation, the ranges of the variables changed to:
# - Tenure = -1.28 to +1.61
# - Monthly charges = -1.55 to +1.79
# - Total charges = -0.99 to 2.83
# - Clearly, none of the variables will have a disproportionate effect on the model’s results now.
# churn data
# --
# let's see the percentage of churned customers, i.e. how many customers changed their network/telecom company
# churn %
churn = (sum(telecom["Churn"]) / len(telecom["Churn"].index)) * 100
churn
# - About 27% of the data is churned, i.e. roughly 27% of customers changed their network company.
# ### step 6 :Correlation
# seeing correlation between the vars
# plotiing heatmap and corr() to see the relation between the vars
plt.figure(figsize=[35, 15])
sns.heatmap(telecom.corr(), annot=True, cmap="Greens")
plt.show()
# - We can see that MultipleLines_Yes/No, OnlineSecurity_Yes/No, OnlineBackup_Yes/No, DeviceProtection_Yes/No, TechSupport_Yes/No, StreamingTV_Yes/No and StreamingMovies_Yes/No are dummy variables that are strongly correlated among themselves.
# - We will not drop all of these variables now; we drop only one of each pair here and eliminate more later during feature elimination, because some variables could still be important.
# - So it is better that we drop one of these variables from each pair as they won't add much value to the model.
# - The choice of which of these pair of variables you desire to drop is completely up to you; we've chosen to drop all the 'Nos' because the 'Yeses' are generally more interpretable and easy-to-work-with variables.
# - Let's drop these inter-correlated variables (such correlation among predictors is called multicollinearity).
# - We drop them from both the X_train and X_test data.
X_train = X_train.drop(
[
"MultipleLines_No",
"OnlineSecurity_No",
"OnlineBackup_No",
"DeviceProtection_No",
"TechSupport_No",
"StreamingTV_No",
"StreamingMovies_No",
],
axis=1,
)
X_test = X_test.drop(
[
"MultipleLines_No",
"OnlineSecurity_No",
"OnlineBackup_No",
"DeviceProtection_No",
"TechSupport_No",
"StreamingTV_No",
"StreamingMovies_No",
],
axis=1,
)
# Now after dropping some of the dummy vars let's see relation between rest of the vars
plt.figure(figsize=[20, 10])
sns.heatmap(X_train.corr(), annot=True)
plt.show()
# # Step 7 : Model Building
# - Now that we have completed all the pre-processing steps, inspected the correlation values and have eliminated a few variables, it’s time to build our first model.
import statsmodels.api as sm
# building a logistic regression model.first add a constant
X_train_sm = sm.add_constant(X_train)
X_train_sm.head()
# Logistic regression models a binary (two-class) target, so we use the Binomial family
family = sm.families.Binomial()
family
# Building the logistic regression model and fitting it to estimate the coefficients (the linear predictor mx + c)
logm1 = sm.GLM(y_train, X_train_sm, family).fit()
logm1
# now our model is built.Let's see summary
logm1.summary()
# - In this table, our key focus area is just the different coefficients and their respective p-values. As you can see, there are many variables whose p-values are high, implying that that variable is statistically insignificant. So we need to eliminate some of the variables in order to build a better model.
#
# - We'll first eliminate a few features using Recursive Feature Elimination (RFE), and once we have reached a small set of variables to work with, we can then use manual feature elimination (i.e. manually eliminating features based on observing the p-values and VIFs).
# -------
# - For a variable to be considered insignificant, its p-value should be greater than 0.05.
# - In hypothesis testing, a p-value greater than the significance level (alpha) means we fail to reject the null hypothesis.
# - In regression, we therefore want a p-value < 0.05 (5%) for a coefficient to be considered significant.
# -----
# - Recall that the null hypothesis for any beta was:
# - βi=0
# - And if the p-value is small, you can say that the coefficient is significant, and hence, you can reject the null hypothesis that
# - βi=0
# ---------
# # Feature selection using RFE
# - Now that we have built our first model based on the summary statistics, we infer that many of the variables might be insignificant and hence we need to do some feature elimination.
# - Since the number of features is large, let's first start with an automated feature selection technique (RFE) and then move to manual feature elimination (using p-values and VIFs): this is exactly the same process we followed in linear regression.
# - First, use RFE to select the significant (important) variables, and then build a model using these selected variables.
# - Then check the selected variables again using statsmodels.
# - RFE doesn't work with statsmodels, so for the RFE step we have to use sklearn's LogisticRegression.
# # Steps :
# - 1. Import rfe and logistic regression models
# - 2. fit the model using X & Y using rfe
# - 3.select top columns which are significant according to rfe
# - 4.adding constant
# - 5.adding binomial family
# - 6.building a model and fitting it to get parameters
# - 7.predicting vals
# - 8.then converting predicted values to binary numbers
# - 9.finding accuracy.
# import logistic regression from sklearn
from sklearn.linear_model import LogisticRegression
lor = LogisticRegression()
lor
# now import RFE
from sklearn.feature_selection import RFE
# now select how many top vars we want - we want the top 15
# creating an object of class RFE
rfe = RFE(lor, n_features_to_select=15)
rfe
# fitting the model
rfe = rfe.fit(X_train, y_train)
rfe
# Let's see how many vars got selected.support will show true or false in binary way
rfe.support_
# let's see which column got selected and what is columns rank
X_train.columns, rfe.support_, rfe.ranking_
# lets zip them together.This will show rank of all columns
list(zip(X_train.columns, rfe.ranking_))
# now get only top 15 columns
# ref.support will give only selected columns
col = X_train.columns[rfe.support_]
col
# - We can see we got only the True columns, i.e. the top 15 columns.
# - Let's see which columns are insignificant.
# - 8 columns were not selected.
X_train.columns[~rfe.support_]
# ##### Creating the model using statsmodel
# now that we have top 15 columns.We will build a model using this vars
# assign top 15 columns to X train
X_train_rfe = X_train[col]
X_train_rfe.head()
# now let's add constant to train data
X_train_sm = sm.add_constant(X_train_rfe)
X_train_sm.head()
# - GLM (Generalised Linear Models) method of the library statsmodels.
# - 'Binomial()' in the 'family' argument tells statsmodels that it needs to fit a logit curve to a binomial data (i.e. in which the target will have just two classes, here 'Churn' and 'Non-Churn').
# Now constant is added,let's build our 2nd model
# as it is logistic regression we should have binomial distribution
family = sm.families.Binomial()
family
# now that binomial is created ,let's build our 2nd model after RFE using GLM
# Estimating COEFFICIENTS using generalised linear method /maximum likelihood function
logm2 = sm.GLM(y_train, X_train_sm, family).fit()
logm2 # logm : logistic model
# let's see summary
logm2.summary()
# - We can see that all p-values are < 0.05 (5%), so all variables are significant.
# - Now, to create a confusion matrix we need two things - the actual values and the predicted values - so that we can compare them and see whether our predictions are correct.
# - The predicted probability comes from the logistic (sigmoid) function applied to the linear combination of the features (mx + c).
# - We have the built-in predict() function to get y_pred.
# - The target variable Churn falls into two classes, the positive class ('Churn') and the negative class ('Non-Churn').
# so we need y pred.We predict on fit model.we predict using X_train data which is fitted
y_train_pred = logm2.predict(X_train_sm)
y_train_pred[:10] # seeing only 10 values .It's like head()/tail()
# - We got the predicted values. Now we will store the actual y values in a 'churn' column and the predicted y values in a 'churn_prob' column (the predicted probability that the customer churned, i.e. switched networks).
# - We will reshape the predicted values into a flat 1-D array; the order of the values stays the same.
# - We do this so that y_train and y_train_pred have the same one-dimensional shape.
# reshape(-1) flattens the values into a 1-D array (the dimension is inferred automatically)
y_train_pred = y_train_pred.values.reshape(-1)
y_train_pred
# flattened into a 1-D array
# now assigning y train pred to churn prob and ytrain as churn
y_train_pred_final = pd.DataFrame({"churn": y_train.values, "churn_prob": y_train_pred})
y_train_pred_final
# #### We can see output is categorical.Churn column has 0's/1's which are categorical values
# Now add a custid column
y_train_pred_final["custid"] = y_train.index
y_train_pred_final.head()
# ##### custid is also added.Now we will add predicted column which says if y pred value is >0.5 then write as 1
# - If churn_prob > 0.5, assign 1; otherwise assign 0.
# - We convert churn_prob into 0's and 1's because churn is already coded as 1/0, and in classification we need binary outcomes to compare against.
# - So churn stays as 0/1, and the binarised churn_prob goes into a separate 'Predicted' column.
# taking threshold as 0.5 by default and let's see 0.5 cutoff is correct or not
y_train_pred_final["Predicted"] = y_train_pred_final["churn_prob"].map(
lambda x: 1 if x > 0.5 else 0
)
y_train_pred_final
# #### We convert churn_prob to 0's/1's because the logistic curve only gives probabilities (the S-shaped curve); to compare against the actual churn labels we need hard 0/1 class predictions.
# - We can see that wherever churn_prob is > 0.5, the Predicted column has 1.
# - Since the logistic curve gives you just the probabilities and not the actual classification of 'Churn' and 'Non-Churn', you need to find a threshold probability to classify customers as 'churn' and 'non-churn'.
# - Here, we choose 0.5 as an arbitrary cutoff wherein if the probability of a particular customer churning is less than 0.5, you'd classify it as 'Non-Churn' and if it's greater than 0.5, you'd classify it as 'Churn'. The choice of 0.5 is completely arbitrary at this stage.
# -----
# - You chose a cutoff of 0.5 in order to classify the customers into 'Churn' and 'Non-Churn'.
# - Now, since we're classifying the customers into two classes, we'll obviously have some errors. The classes of errors that would be there are:
# - 'Churn' customers being (incorrectly) classified as 'Non-Churn': actually churned but predicted as not churned - a false negative (a Type II error when 'churn' is the positive class).
# - 'Non-Churn' customers being (incorrectly) classified as 'Churn': actually not churned but predicted as churned - a false positive (a Type I error).
# ### Confusion matrix
from sklearn import metrics
confusion = metrics.confusion_matrix(
y_train_pred_final["churn"], y_train_pred_final["Predicted"]
)
confusion
#                 predicted:  not churn | churn
# actual
# not churn :                 3255      | 372     ->  TN, FP
# churn     :                 550       | 753     ->  FN, TP
#
# - churn/churn (actual 1, predicted 1) means there are 753 true positives (TP).
# - not churn/not churn (actual 0, predicted 0) means there are 3255 true negatives (TN).
# - 3255 customers didn't churn (didn't switch networks) and were predicted correctly.
# - 753 customers churned (switched networks) and were predicted correctly.
# -------
# - False positive: actually negative but predicted positive, so it's an error. We can see that 372 customers were predicted as 1 but were actually 0.
# - False negative: actually positive but predicted negative. There are 550 such values - customers who are actually positive but were predicted negative.
# ---------
# - We get accuracy either from the formula below or from the built-in function:
# - (TP+TN)/(TP+TN+FP+FN)
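# As a sanity check (a sketch), the same accuracy can be computed directly from the confusion matrix:
TN_, FP_, FN_, TP_ = confusion.ravel()  # sklearn's layout is [[TN, FP], [FN, TP]]
(TP_ + TN_) / (TP_ + TN_ + FP_ + FN_)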
metrics.accuracy_score(y_train_pred_final["churn"], y_train_pred_final["Predicted"])
# - Accuracy is 81% which is a good % value to begin with.
# - So far you have only selected features based on RFE.
# - Further elimination of features using the p-values and VIFs manually is yet to be done.
# -----
# - We saw in the pairwise correlations, there are high values of correlations present between the 15 features, i.e. there is still some multicollinearity among the features.
# - So we definitely need to check the VIFs as well to further eliminate the redundant variables.
# - VIF calculates how well one independent variable is explained by all the other independent variables combined.
# ## Checking VIF'S
# - If VIF > 5, drop the variable; VIF < 5 indicates acceptable multicollinearity.
# # Steps :
# - 1. Find the VIF values.
# - 2. Do manual feature elimination.
# - 3. Build a model and fit it, then:
# -    1. See the summary and, based on p-values, eliminate variables with p > 0.05.
# -    2. Predict values using the fitted model and its X_train data.
# -    3. Create a dataframe of the actual y values and the predicted y values.
# -    4. Using the predicted values, create binary predictions of y: values > 0.5 become 1, otherwise 0
# -       (we take 0.5 here because it is the default threshold).
# -    5. Build the confusion matrix from the actual and predicted values.
# -    6. Find the accuracy score from the actual y values and the binary predictions.
# - Check the VIF values again and repeat this process until only significant variables remain.
from statsmodels.stats.outliers_influence import variance_inflation_factor
vif = pd.DataFrame()
vif["Features"] = X_train[col].columns
vif["VIF"] = [
variance_inflation_factor(X_train[col].values, i)
for i in range(X_train[col].shape[1])
]
vif["VIF"] = round(vif["VIF"], 2)
vif
vif["VIF"] = round(vif["VIF"], 2)
vif = vif.sort_values(by="VIF", ascending=False)
vif
# - PhoneService has a very high VIF (around 8-9), so let's drop it. A high VIF means it is highly correlated with the other independent variables, i.e. multicollinearity.
# ## MANUAL FEATURE ELIMINATION
# #### dropping PhoneService
# let's see all 15 columns that were selected
col
# Now from this columns let's drop phone service
col = col.drop("PhoneService")
col
# Now once again we need to build model
X_train_rfe = X_train[col]
X_train_rfe.head()
# now we will add constant to X train data set
X_train_sm = sm.add_constant(X_train_rfe)
X_train_sm.head()
# #### Building and fitting model after PhoneService is dropped
# now that constant is added let's build(mx+c) our model and fit it.After fitting only we will get parameters.
# this is our 3rd logistic model
logm3 = sm.GLM(y_train, X_train_sm, family).fit() # family is binomial
logm3
# now let's see summary
logm3.summary()
# - The p-values of all variables are significant. Let's look at the VIF values.
# - We also need to check the accuracy, to see whether it changed after dropping the variable.
# #### Creating the predicted values
y_train_pred = logm3.predict(X_train_sm)
y_train_pred
# we will reshape y_train_pred
y_train_pred = y_train_pred.values.reshape(-1)
y_train_pred
# Now y_train_pred got reshaped. we will assign actual y value as churn and pred y value as churn Probabilty
y_train_pred_final = pd.DataFrame({"churn": y_train, "churn_prob": y_train_pred})
y_train_pred_final
# now let's add predicted column
y_train_pred_final["Predicted"] = y_train_pred_final["churn_prob"].map(
lambda x: 1 if x > 0.5 else 0
)
y_train_pred_final
# now create conusion matrix for calculating accuracy
confusion = metrics.confusion_matrix(
y_train_pred_final.churn, y_train_pred_final.Predicted
)
confusion
metrics.accuracy_score(y_train_pred_final.churn, y_train_pred_final.Predicted)
# - We can see there is no big change in accuracy. Dropping the PhoneService column didn't affect our accuracy, which means PhoneService was a redundant or insignificant column.
# - Let's see vif value
vif = pd.DataFrame()
vif["Features"] = X_train[col].columns # this will give column names
vif["VIF"] = [
variance_inflation_factor(X_train[col].values, i)
for i in range(X_train[col].shape[1])
]
vif
# VIF has 6 digits after the decimal, so we round it to 2 and sort in descending order (high to low)
vif["VIF"] = round(vif["VIF"], 2)
vif = vif.sort_values(by="VIF", ascending=False)
vif
# #### dropping TotalCharges
# TotalCharges has a VIF of 7.53, which is high. It means TotalCharges is strongly related to the other
# independent variables, i.e. multicollinearity, and it doesn't add much on its own, so we drop this column
col = col.drop("TotalCharges")
col
# NOw again we will build a model, to see what changes happened in model after dropping total charges column
# we will assign this col to some other var
X_train_sm = X_train[col]
# build a model and then fit that model
logm4 = sm.GLM(y_train, X_train_sm, family).fit() # family is binomial distribution
logm4.summary()
# we can see MultipleLines_Yes has a high p-value of about 48%
# but we will still check the accuracy after dropping TotalCharges
y_train_pred = logm4.predict(X_train_sm).values.reshape(-1)
y_train_pred
# now assign actual y train value as churn and predicted y train value as churn probability
y_train_pred_final = pd.DataFrame({"churn": y_train, "churn_prob": y_train_pred})
y_train_pred_final.head()
# now we will add predicted column bcz to make churn prob column in binary values
y_train_pred_final["Predicted"] = y_train_pred_final["churn_prob"].map(
lambda x: 1 if x > 0.5 else 0
)
y_train_pred_final
# now create a confusion matrix or directly calculate accuracy score
confusion = metrics.confusion_matrix(
y_train_pred_final.churn, y_train_pred_final.Predicted
)
print(confusion)
print(metrics.accuracy_score(y_train_pred_final.churn, y_train_pred_final.Predicted))
# - Accuracy went from 81% to 79%, which is not a big change.
# now let's see vif value
vif = pd.DataFrame()
vif["Features"] = X_train[col].columns # this will give column names
vif["VIF"] = [
variance_inflation_factor(X_train[col].values, i)
for i in range(X_train[col].shape[1])
]
# shape[1] is the number of columns in X_train[col]; we compute a VIF for each column
vif["VIF"] = round(vif["VIF"], 2)
vif
# - All values are within range - the VIF of every variable is below 5, so multicollinearity is under control.
# - But we saw from the summary that the MultipleLines_Yes column has a p-value of about 48%, which is very high.
# #### dropping Mutiple lines yes
col = col.drop("MultipleLines_Yes")
# assigning this col to X_train_sm
X_train_sm = X_train[col]
X_train_sm.head()
# Build the model and fit it to get the coefficients (parameters). This is our 5th model. GLM: Generalised Linear Model
# we are not adding a constant here; once the final set of variables is fixed we will add the constant back
logm5 = sm.GLM(y_train, X_train_sm, family).fit()
logm5.summary()
# - The PaperlessBilling column has a high p-value of about 20%. We will drop it, but first let's check the accuracy, to see whether dropping MultipleLines_Yes changed anything.
# #### In reshape, 'the new shape should be compatible with the original shape'; -1 simply means the dimension is unknown and we want numpy to figure it out.
# Creating a predictive value of train data
y_train_pred = logm5.predict(X_train_sm).values.reshape(-1)
y_train_pred
y_train_pred_final = pd.DataFrame({"churn": y_train, "churn_prob": y_train_pred})
y_train_pred_final
# now we will add a predicted column which will be binary values of churn prob column
y_train_pred_final["Predicted"] = y_train_pred_final["churn_prob"].map(
lambda x: 1 if x > 0.5 else 0
)
y_train_pred_final
# now ye have y_train value and y_train_pred value we will get accuracy
metrics.accuracy_score(y_train_pred_final.churn, y_train_pred_final.Predicted)
# - So it was a good idea to drop MultipleLines_Yes: it was redundant, as our accuracy score has not changed.
# - We know from the model summary that the p-value of PaperlessBilling is about 20%, so we will drop that column.
# #### dropping paperless billing
# dropping paperless billings
col = col.drop("PaperlessBilling")
X_train_sm1 = X_train[col]
X_train_sm = sm.add_constant(X_train_sm1)
# build logistic model and fit it
logm6 = sm.GLM(y_train, X_train_sm, family).fit() # family = sm.families(Binomial())
logm6.summary()
# now lets see the accuracy. for that we need predict value
y_train_pred = logm6.predict(X_train_sm).values.reshape(-1)
y_train_pred
# now making a dataframe of y and y predict value
y_train_pred_final = pd.DataFrame({"churn": y_train, "churn_prob": y_train_pred})
y_train_pred_final
# Creating a prdicted column which is nothing but binary values of churn probability
y_train_pred_final["Predicted"] = y_train_pred_final["churn_prob"].map(
lambda x: 1 if x > 0.5 else 0
)
y_train_pred_final
# now that we have actual y and predicted value let's create confusion matrix and find accuracy using accuracy score
confusion = metrics.confusion_matrix(
y_train_pred_final.churn, y_train_pred_final.Predicted
)
print("confusion matrix")
print(confusion)
# let's see accuracy
print("\n")
print("accuracy value:")
print(metrics.accuracy_score(y_train_pred_final.churn, y_train_pred_final.Predicted))
# - So dropping PaperlessBilling didn't affect our accuracy; it is still about 79%, which is a good value.
# - The p-value of PaymentMethod_Electronic check is about 11%, which is not significant, so we need to drop it.
# #### dropping PaymentMethod_Electronic check var
# dropping PaymentMethod_Electronic check column
col = col.drop("PaymentMethod_Electronic check")
col
X_train_sm = X_train[col]
# build a model and fit model
X_train_sm = sm.add_constant(X_train_sm)
logm7 = sm.GLM(y_train, X_train_sm, family).fit()
logm7.summary()
# - The p-values of all variables are significant.
# - Now let's find the accuracy after dropping the PaymentMethod_Electronic check column. For that we need the predicted values on the train data.
# find the predicted values
y_train_pred = logm7.predict(X_train_sm).values.reshape(-1)  # flatten to a 1-D array
# Now creating a dataframe
y_train_pred_final = pd.DataFrame({"churn": y_train, "churn_prob": y_train_pred})
y_train_pred_final
y_train_pred_final["CUSTID"] = y_train.index
# now create a binary value of churn probability and assign it to predicted column
# by making values greater than 0.5 as 1 and lessthan 0.5 as 0
y_train_pred_final["Predicted"] = y_train_pred_final["churn_prob"].map(
lambda x: 1 if x > 0.5 else 0
)
y_train_pred_final
# now see the confusion matrix and accuracy score
confusion = metrics.confusion_matrix(
y_train_pred_final.churn, y_train_pred_final.Predicted
)
print({"CONFUSION MATRIX": confusion})
print("\n")
print(
{
"ACCURACY SCORE": metrics.accuracy_score(
y_train_pred_final.churn, y_train_pred_final.Predicted
)
}
)
# - With the help of the curly braces we label the printed values: the matrix is the confusion matrix and ~79% is the accuracy.
# - Dropping PaymentMethod_Electronic check was a good idea because our accuracy didn't change; in any case, it was an insignificant variable.
# Now let's see vif value
vif = pd.DataFrame()
vif["features"] = X_train[col].columns
vif["VIF"] = [
variance_inflation_factor(X_train[col].values, i)
for i in range(X_train[col].shape[1])
]
vif
# - The p-values and VIF values of all variables are now within acceptable limits,
# - which means we can go with this model and make predictions using it.
# LET'S VIEW OUR CONFUSION MATRIX
confusion = metrics.confusion_matrix(
y_train_pred_final.churn, y_train_pred_final.Predicted
)
confusion
# PREDICTED : not_churn | churn
# ACTUAL :
# not_churn : 3243 , 384 : TN , FP
# churn : 595 , 708 : FN , TP
# ##### accuracy is often not the best metric
# - Because, as we can see above, 1303 customers actually churned, yet we predict only 708 of them as churned and the remaining 595 as not churned. Accuracy alone is not the best metric - relying on it can be dangerously misleading.
# - Suppose the company wants to give an offer to customers who are about to churn (switch). Our model flags only 708 of the actual churners, so the company would give the offer to just those 708 customers. But another 595 customers are also churning and were predicted as not churning; these customers will move to a competitor's network, which is a loss for the company.
# - In reality 595 + 708 = 1303 customers are churning, but our prediction captures only 708 of them. This shows how risky relying on accuracy alone can be for companies.
# - This is very risky - the company won't be able to give offers to the remaining ~46% of 'churn' customers, and they could switch to a competitor!
#
# - 708 out of 1303 means we capture only about 54% of the churning customers, which is misleading given the headline accuracy.
# - Although accuracy is about 80%, the model predicts only 54% of churn cases correctly.
# ----
# - In essence, what’s happening here is that you care more about one class (class='churn') than the other.
# - This is a very common situation in classification problems - you almost always care more about one class than the other.
# - On the other hand, the accuracy tells you the model's performance on both classes combined - which is fine, but not the most important metric.
# ----
# - This brings us to two of the most commonly used metrics to evaluate a classification model:
# - Sensitivity : how well we capture the actual positives (yeses)
# - Specificity : how well we capture the actual negatives (nos)
# ------
# ###### SENSITIVITY / TRUE POSITIVE RATE/Recall = Total number of actual "YESES correctly predicted" /Total number of "actual yeses"
# - We detected only 54% of the positives (churned customers, the 1's) correctly; the remaining 46% we failed to detect.
# - Thus, we can see that although we had high accuracy (~80%), our sensitivity turned out to be quite low (~54%)
#
# ###### Specificity = Total number of actual NO's correctly predicted/Total number of actual NO'S
# - 3243+384 = 3627
# - We detected 89% of the negatives (the 0's, not churned) correctly, which is good.
# - False positive rate = number of actual no's predicted as yes / total number of actual no's.
# - Specificity (89%) tells us what share of the not-churned customers we predicted correctly; the false positive rate tells us what share of them we predicted incorrectly (1 - specificity).
# ###### positive predicted / PRECISION = the number of +ves correctly predicted / the total number of +ves predicted
# - This tells us what share of our positive predictions is actually correct.
# ###### Negative predictive value = the number of negatives correctly predicted / the total number of negatives predicted.
# - This tells us what share of our negative predictions is actually correct.
#
#
# # Metrics beyond simply accuracy
# - SENSITIVITY (RECALL) : ACTUAL POSITIVE
# - SPECIFICITY : ACTUAL NEGATIVE
# - POSITIVE PREDICTIVE VALUE / PRECISION : PREDICTED POSITIVE
# - NEGATIVE PREDICTIVE VALUE : PREDICTED NEGATIVE
# -----
# - We want a cutoff for which the TPR is high while the FPR stays low - such a cutoff will give you a better model.
# indexing the confusion matrix [row, column]; sklearn's layout is [[TN, FP], [FN, TP]]
TP = confusion[1, 1]
TN = confusion[0, 0]
FP = confusion[0, 1]
FN = confusion[1, 0]
TP  # TRUE POSITIVE : actually churned and predicted churned
TN  # TRUE NEGATIVE : actually not churned and predicted not churned
FP  # FALSE POSITIVE : actually not churned but predicted churned
FN  # FALSE NEGATIVE : actually churned but predicted not churned
# SENSITIVITY : ACTUAL YESS CORRECTLY PREDICTED/TOTAL ACTUAL YES
# Let's see sensitivity of our logistic regression
sensitivity = TP / float(TP + FN)
({"SENSITIVITY": sensitivity})
# - Sensitivity is about 54%: we correctly identify only 54% of the customers who actually churned (switched) to another network.
# LET'S SEE THE SPECIFICITY OF THE LOGISTIC REGRESSION MODEL
specificity = TN / float(TN + FP)
print({"SPECIFICITY": specificity})
# about 89% of the customers who didn't churn were identified correctly
# - Whatever we are measuring goes in the numerator: for specificity we measure the negatives, so the correctly predicted negatives go on top.
# #### Ideally, sensitivity and specificity should both be high and reasonably close to each other. Here we get 54% and 89% - a large gap - largely because we picked the default threshold of 0.5.
# this is optional
# Let's see the false positives: customers who actually didn't churn but were predicted as churned
print({"FALSE POSITIVE": FP / float(FP + TN)})
# about 10% of the customers who didn't churn were predicted as churned
# ###### FALSE POSITIVE RATE : 1-SPECIFICITY
# - False Postive Rate is nothing but (1 - True Negative Rate) and the True Negative Rate is simply the specificity.
# Let's see the customers who actually churned but were predicted as not churned
# false negatives
print({"FALSE NEGATIVE": FN / float(FN + TP)})
# about 45% of the customers who churned were predicted as not churned
# positive predictive value / precision: of everything we predicted positive, how much is correct
# let's see how much of what we predicted positively is actually correct
print({"POSTIVE PREDICTED": TP / float(TP + FP)})
# Negative predicted value.Means what we predicted -vely is correct or not
print({"NEGATIVE PREDICTED": TN / float(TN + FN)})
# - Positive predictive value: about 64% of what we predicted positively is correct.
# - Negative predictive value: about 84% of what we predicted negatively is correct.
# - So our model seems to have high accuracy (~80%) and high specificity (~89%), but low sensitivity (~54%).
# -----
# - The THRESHOLD / cut-off of 0.5 was chosen arbitrarily; there was no particular logic behind it.
# - So it might not be the ideal cut-off point for classification which is why we might be getting such a low sensitivity and high specificity. So how do you find the ideal threshold/cutoff point?
# - For low values of threshold, you'd have a higher number of customers predicted as a 1 (Churn). This is because if the threshold is low, it basically means that everything above that threshold would be one and everything below that threshold would be zero.So naturally, a lower cutoff would mean a higher number of customers being identified as 'Churn'.
# - Similarly, for high values of threshold, you'd have a higher number of customers predicted as a 0 (Not-Churn) and a lower number of customers predicted as a 1 (Churn).
# ----
# - ROC Curves which show the tradeoff between the True Positive Rate (TPR) and the False Positive Rate (FPR).
# - We should have high TPR and low FPR.
# - TPR and FPR are nothing but sensitivity and (1 - specificity), so it can also be looked at as a tradeoff between sensitivity and specificity.
# ### A good ROC curve is the one that touches the upper-left corner of the graph; so the higher the area under the curve of a ROC curve, the better is your model.
# - we can clearly see from the ROC curve that when the value of TPR (on the Y-axis) is increasing, the value of FPR (on the X-axis) also increases.
# ###### - The highest AUC is the most accurate model. Also, note that the highest value of AUC can be 1
# ## Step 9 : Plotting ROC CURVE
# - The closer the curve follows the left-hand border and then the top border of the ROC space, the more accurate the test.
# - The closer the curve comes to the 45-degree diagonal of the ROC space, the less accurate the test.
# - Sensitivity (TPR) goes on the y-axis and the false positive rate (1 - specificity) on the x-axis.
# ### Uses of the ROC curve
# #### The ROC curve is also used to see how efficient our model is.
# - To plot the ROC curve, we need to calculate the TPR and FPR for many different thresholds (This step is included in all relevant libraries as scikit-learn).
# - For each threshold, we plot the FPR value in the x-axis and the TPR value in the y-axis. We then join the dots with a line.
# finding FPR and TPR for all thresholds from 0.0 to 0.9
def draw_roc(actual, probs):
fpr, tpr, thresholds = metrics.roc_curve(
actual, probs, drop_intermediate=False
) # at actual we will give actual y value
# at probs we will write predicted values
auc_score = metrics.roc_auc_score(actual, probs)
plt.plot(fpr, tpr, label="ROC curve (area = %0.2f)" % auc_score)
    # reference diagonal from (0, 0) to (1, 1), i.e. the random-classifier line
    plt.plot([0, 1], [0, 1])
plt.xlim([0.0, 1.0]) # x limits till 1 means numbers on x axis
plt.ylim(
[0.0, 1.05]
) # y limits till 1.05=0.05 extra bcz orelse our line will go beyond plot
plt.title("Receiver operating characteristic example:ROC")
plt.legend(loc="lower right")
plt.xlabel("FPR")
plt.ylabel("TPR")
plt.show()
return None
# getting values for fpr and tpr from actual y and predicted y values
fpr, tpr, thresholds = metrics.roc_curve(
y_train_pred_final.churn, y_train_pred_final.churn_prob, drop_intermediate=False
)
fpr, tpr, thresholds
# - all this fpr and tpr values are for different thresholds ,by plotting ROC Curve we will see which threshold is best.
# - roc_auc_score is the area under the ROC curve.
# - It is computed from the actual y values and the predicted probabilities.
# - The curve itself comes from roc_curve; roc_auc_score only gives the area shown in the legend.
# plotting roc curve
draw_roc(y_train_pred_final.churn, y_train_pred_final.churn_prob)
# # Step 10 : Finding the optimal cutoff point
numbers = [float(x / 10) for x in range(10)]
numbers
# - x have values from 0 to 9 then, each values is divided by 10,0/10=0.0 ,1/10= 0.1 ,2/10 = 0.2 etc.
for i in numbers:
print(i)
y_train_pred_final[i] = y_train_pred_final.churn_prob.map(
lambda k: 1 if k > i else 0
)
y_train_pred_final
# Calculating accuracy ,probability , sensitivity and specificity for various cutoffs (threshold)
cutoff_df = pd.DataFrame(columns=["prob", "accuracy", "sensitivity", "specificity"])
num = [0.0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9]
for i in num:
confusion1 = metrics.confusion_matrix(
y_train_pred_final.churn, y_train_pred_final[i]
)
total = sum(
sum(confusion1)
) # sum of all rows and it will give for 0.0 all rows sum,for 0.1,0.2 etc
# print({"TOTAL":total})
# Now let's find accuracy ,sensit & specificity
accuracy = (confusion1[0, 0] + confusion1[1, 1]) / total # 0&1 are index values,
# TN+TP/(TN+TP+FP+FN) denominator is nothing but total
sensitivity = (confusion1[1, 1]) / (confusion1[1, 0] + confusion1[1, 1]) # TP/TP+FN
specificity = (confusion1[0, 0]) / (confusion1[0, 0] + confusion1[0, 1]) # TN/TN+FP
cutoff_df.loc[i] = [i, accuracy, sensitivity, specificity]
print(cutoff_df)
# - We now have accuracy, specificity & sensitivity for all the candidate thresholds.
# - Using these values we will plot a line graph. Earlier we computed specificity, accuracy and sensitivity only for the default threshold of 0.5.
# prob is thresholds
cutoff_df.plot.line(x="prob", y=["sensitivity", "accuracy", "specificity"])
plt.show()
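# - As a cross-check (a sketch, not part of the original flow), the optimal cutoff can also be read
#   straight from the roc_curve output computed above, by maximising Youden's J = TPR - FPR
#   (i.e. sensitivity + specificity - 1).
j_scores = tpr - fpr  # Youden's J for every candidate threshold returned by roc_curve
best_idx = j_scores.argmax()  # threshold with the best sensitivity/specificity balance
print({"best threshold": thresholds[best_idx], "TPR": tpr[best_idx], "FPR": fpr[best_idx]})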
# seeing predicted column using 0.3 as threshold point
y_train_pred_final["final_Predicted"] = y_train_pred_final.churn_prob.map(
lambda k: 1 if k > 0.3 else 0
)
y_train_pred_final
# - Accuracy score and confusion matrix
# Let's check the accuracy
metrics.accuracy_score(y_train_pred_final.churn, y_train_pred_final.final_Predicted)
# let's calculate the confusion matrix for the new threshold value of 0.3
confusion_2 = metrics.confusion_matrix(
y_train_pred_final.churn, y_train_pred_final.final_Predicted
)
confusion_2
# Compared with the 0.5 cutoff, FN has dropped significantly and TP has increased;
# choosing the lower threshold helped us capture the churners better.
#                 PREDICTED
#                 : NOT CHURN(-VE) , CHURN(+VE)
# ACTUAL NOT CHURN(-VE) : 2754 , 873  : TN , FP
#        CHURN(+VE)     :  289 , 1014 : FN , TP
TP = confusion_2[1, 1]
TN = confusion_2[0, 0]
FP = confusion_2[0, 1]
FN = confusion_2[1, 0]
# ###### sensitivity
# let's calculate sensitivity: +ve rate or actual +ves
TP / (TP + FN)
# - SENSITIVITY: ~77% of the customers who actually churned are now correctly identified
# ###### Specificity
# LET'S SEE SPECIFICITY:ACTUAL -VES
TN / (TN + FP)
# - SPECIFICITY: ~76% of the customers who did not churn are correctly identified
# ###### False +ve rate
# LET'S CALCULATE FALSE POSITIVE RATE :
# WE have to calculate FP so numerator should be FP & this is from actual negative row on denominator
FP / (TN + FP)
# ###### +ve predictive value /Precision
# POSITIVE PREDICTIVE VALUE / PRECISION
TP / (TP + FP)
# NEGATIVE PREDICTIVE VALUE
TN / (TN + FN)
# - Precision tells us, out of everything we predicted as 'yes' (churn), how many are actually 'yes'.
# - Sensitivity/recall tells us, out of all the actual 'yeses' (churners), how many we predicted correctly (~77% here).
# # Precision and recall
# - In industry, some businesses follow the 'Sensitivity-Specificity' view and some other businesses follow the 'Precision-Recall' view.
# - We can use any one view of this 2 views
# -----
# - Using the sensitivity-specificity tradeoff, we found the optimal cutoff point to be about 0.3. When we plot the precision-recall tradeoff instead, we will get a different threshold.
# PRECISION: using confusion 1st matrix
confusion[1, 1] / (confusion[1, 1] + confusion[0, 1])
# - Precision here (at the default 0.5 cutoff) is ~64%, while recall (sensitivity) is only ~54% - there is a big gap between the predicted "yeses" and the actual "yeses" we capture.
# RECALL
confusion[1, 1] / (confusion[1, 1] + confusion[1, 0])
# - With the precision-recall view there is a big gap (64% vs 54%), which is not a balanced picture.
# - The sensitivity-specificity view (at the 0.3 cutoff) is much more balanced (77% & 76%).
# - whatever view we select might give us different interpretations for the same model. It is completely up to us which view we choose to take while building a logistic regression model.
# - Similar to sensitivity and specificity there is a trade off between Precision and recall.
# import precision and recall from sklearn metrics
from sklearn.metrics import precision_score, recall_score
# - We computed precision & recall from the confusion matrix; sklearn also has built-in functions for these scores.
# we use the 'Predicted' column (default 0.5 cutoff) created at the start. Like the sensitivity-specificity view,
# the precision-recall view starts from these predictions - they are two different approaches to choosing a threshold.
precision_score(y_train_pred_final.churn, y_train_pred_final.Predicted)
# after getting churn prob's binary values i.e, predicted then we can continue directly with precision & recall to find threshold value
recall_score(y_train_pred_final.churn, y_train_pred_final.Predicted)
# - We got same values of precision and recall using confusion matrix and built in functions
# ### Trade off between Precision & Recall
# trade off btwn sensitivity-specificity we did,same we will do for precision and recall
# we have a built-in function for plotting a curve
from sklearn.metrics import precision_recall_curve
y_train_pred_final.churn, y_train_pred_final.Predicted
# assigning curve to p,r,threshold.
p, r, thresholds = precision_recall_curve(
y_train_pred_final.churn, y_train_pred_final.churn_prob
)
print(p)
print(r)
print(thresholds) # threshold will be between 0 & 1
# using p,r values let's plot a graph.Precision - recall trade off curve.
# we also have ROC- curve
# precision_recall_curve returns one more precision/recall value than thresholds,
# so [:-1] drops the last element to align the array lengths
plt.plot(thresholds, p[:-1], "g")  # precision vs threshold (green)
plt.plot(thresholds, r[:-1], "r")  # recall vs threshold (red)
plt.show()
# - Precision & recall intersect at about 0.42, so the threshold from this view is 0.42.
# - We can compute the confusion matrix and accuracy using 0.42 as the cutoff.
# - F1 score:
# - F1 = 2 × (precision × recall) / (precision + recall)
# - The F1-score is useful when you want to look at the performance of precision and recall together.
#
F1 = 2 * ((0.64 * 0.54) / (0.64 + 0.54))
F1
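# - A minimal cross-check (sketch): sklearn's built-in f1_score on the default 0.5-cutoff
#   predictions should match the manually computed value above (up to the rounding of 0.64/0.54).
from sklearn.metrics import f1_score
f1_score(y_train_pred_final.churn, y_train_pred_final.Predicted)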
# # 11.Making predictions on test data sets
X_test
# - We can see that tenure, MonthlyCharges & TotalCharges have very large values. Let's bring them onto the same scale.
# we will transform on test data. We won't do fit on test data.
X_test[["tenure", "MonthlyCharges", "TotalCharges"]] = scaler.transform(
X_test[["tenure", "MonthlyCharges", "TotalCharges"]]
)
# using rfe we selected top 15 columns and then doing manual feature elimination we got 10 columns
col
# We did all this in train data ,so we will access them directly in test data
X_test = X_test[col]
X_test.head()
# add constant to X test data set
X_test_sm = sm.add_constant(X_test)
X_test_sm.head()
# we have already built model of train data.
logm7
# predicting y test
y_test_pred = logm7.predict(X_test_sm)
# create a dataframe of actual y test & predicted ytest
y_test_pred_final = pd.DataFrame({"churn": y_test, "churn_prob": y_test_pred})
y_test_pred_final.head()
# let's add custid to our dataframe
y_test_pred_final["Custid"] = y_test.index
y_test_pred_final.head()
# now creating a 'Predicted' column with 1/0 values from the churn_prob column, assigning 1 if prob > 0.3
# we are using the sensitivity-specificity cutoff here;
# we could equally use the precision-recall cutoff (0.42) - it's up to us.
y_test_pred_final["Predicted"] = y_test_pred_final.churn_prob.map(
lambda x: 1 if x > 0.3 else 0
)
y_test_pred_final.head()
# now let's see the accuracy score
metrics.accuracy_score(y_test_pred_final.churn, y_test_pred_final.Predicted)
# let's see the confusion matrix on the test data and store it,
# so that TP/TN/FP/FN below come from the TEST matrix (not the stale train values)
confusion_test = metrics.confusion_matrix(
    y_test_pred_final.churn, y_test_pred_final.Predicted
)
confusion_test
# sensitivity and specificity of the test data
TP = confusion_test[1, 1]
TN = confusion_test[0, 0]
FP = confusion_test[0, 1]
FN = confusion_test[1, 0]
# SENSITIVITY
sensi = TP / (TP + FN)
sensi
# Specificity
spe = TN / (TN + FP)
spe
# - We can also take the cutoff we got from the precision-recall tradeoff curve and we can make predictions based on that also.
# ##### OR
# ## For threshold = 0.42
# - let's see confusion matrix and accuracy
y_test_pred_final["PREDICTED"] = y_test_pred_final.churn_prob.map(
lambda x: 1 if x > 0.42 else 0
)
y_test_pred_final.head()
# let's see the confusion matrix when the cutoff is 0.42
confusion_4 = metrics.confusion_matrix(
y_test_pred_final.churn, y_test_pred_final.PREDICTED
)
TP = confusion_4[1, 1]
TN = confusion_4[0, 0]
FP = confusion_4[0, 1]
FN = confusion_4[1, 0]
# confusion matrices at the 0.42 cutoff vs the 0.3 cutoff
#  [1257,  290],         [1113,  434],
#  [ 189,  377]          [ 140,  426]
# at the 0.42 cutoff we correctly capture 377 churners, vs 426 at the 0.3 cutoff;
# a higher threshold flags fewer customers as churn, so fewer actual churners are captured
# ACCURACY at the 0.42 threshold
metrics.accuracy_score(y_test_pred_final.churn, y_test_pred_final.PREDICTED)
# - ACCURACY ALONE IS NOT SUFFICIENT, SO WE WILL ALSO LOOK AT SENSITIVITY AND SPECIFICITY.
# SENSITIVITY AT THE 0.42 CUTOFF
TP / (TP + FN)
# SPECIFICITY
TN / (TN + FP)
# ------
# # The steps that were performed throughout the model building and model evaluation are:
# - 1.Data cleaning and preparation
# - Combining three dataframes
# - Handling categorical variables
# - Mapping categorical variables to integers
# - Dummy variable creation
# - Handling missing values
# - 2.Test-train split and scaling
# - 3.Model Building
# - Feature elimination based on correlations
# - Feature selection using RFE (Coarse Tuning)
# - Manual feature elimination (using p-values and VIFs)
# - 4.Model Evaluation
# - Accuracy
# - Sensitivity and Specificity
# - Optimal cut-off using ROC curve
# - Precision and Recall
# - 5.Predictions on the test set
# #
# #
# #
# # Using Decision Trees
# - In decision trees we don't have to bother about feature scaling and multicollinearity.
X.head()
y.head()
# Let's split data into train and test splits
X_train, X_test, y_train, y_test = train_test_split(
X, y, train_size=0.7, random_state=19
)
# let's look at dimensions of the train and test data
X_train.shape, X_test.shape
# ## Model building
from sklearn.tree import DecisionTreeClassifier
# creating an object of class
dt = DecisionTreeClassifier(random_state=67, max_depth=5)
# fit the model
dt.fit(X_train, y_train)
from sklearn import tree
import graphviz
dot_data = tree.export_graphviz(
dt,
out_file=None,
feature_names=X_train.columns,
rounded=True,
filled=True,
class_names=["Not-churn", "Churn"],
)
graphviz.Source(dot_data)
# # OR
from IPython.display import Image
from six import StringIO
from sklearn.tree import export_graphviz
import pydotplus, graphviz
dot_data = StringIO()
export_graphviz(
dt,
out_file=dot_data,
filled=True,
rounded=True,
feature_names=X_train.columns,
class_names=["Non-Churn", "Churn"],
)
graph = pydotplus.graph_from_dot_data(dot_data.getvalue())
Image(graph.create_png())
# Let's predict train and test data
y_train_pred = dt.predict(X_train)
y_test_pred = dt.predict(X_test)
from sklearn.metrics import classification_report
print(classification_report(y_test, y_test_pred))
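# - A quick sketch to complement the report above: the raw confusion matrix of the test
#   predictions for this (untuned) tree, using the metrics module imported earlier in the notebook.
print(metrics.confusion_matrix(y_test, y_test_pred))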
# ## Plotting ROC curve
from sklearn.metrics import plot_roc_curve
# note: plot_roc_curve was deprecated and later removed in newer scikit-learn releases;
# RocCurveDisplay.from_estimator is its replacement there
plot_roc_curve(dt, X_train, y_train, drop_intermediate=False)
# ### Decision tree : Grid Search CV for Hyperparameter tuning
from sklearn.model_selection import GridSearchCV
dt = DecisionTreeClassifier(random_state=10)
params = {"max_depth": [3, 5, 10, 15, 20], "min_samples_leaf": [50, 100, 150, 200, 400]}
grid = GridSearchCV(
estimator=dt, param_grid=params, cv=4, n_jobs=-1, verbose=1, scoring="accuracy"
)
grid.fit(X_train, y_train)
grid.best_score_
grid.best_estimator_
# instantiate
# building model using optimal hyper-parameters
dt_best = DecisionTreeClassifier(max_depth=5, min_samples_leaf=10, random_state=10)
# fitting
dt_best.fit(X_train, y_train)
# plot roc curve
plot_roc_curve(dt_best, X_train, y_train)
dot_data = tree.export_graphviz(
dt_best,
out_file=None,
feature_names=X_train.columns,
rounded=True,
filled=True,
class_names=["Non-churn", "Churn"],
)
graphviz.Source(dot_data)
# #
# -----------
# -------
# # Using Random Forest
from sklearn.ensemble import RandomForestClassifier
rf = RandomForestClassifier(
n_estimators=10, max_depth=5, random_state=19, max_features=10, oob_score=True
)
rf.fit(X_train, y_train)
# OOB score
rf.oob_score_
# plot roc curve
plot_roc_curve(rf, X_train, y_train)
# # Random Forest : Grid Search CV Hyperparameter tuning
rf = RandomForestClassifier(random_state=100, n_jobs=-1)
params = {
"max_depth": [5, 8, 10, 15, 20],
"min_samples_leaf": [10, 15, 30, 70, 100, 200],
"n_estimators": [10, 100, 150, 200],
}
grid = GridSearchCV(
estimator=rf, param_grid=params, cv=5, scoring="accuracy", verbose=1, n_jobs=-1
)
grid.fit(X_train, y_train)
grid.best_score_
grid.best_estimator_
# building model using optimal hyper-parameters
rf_best = RandomForestClassifier(
max_depth=8, min_samples_leaf=10, n_estimators=150, n_jobs=-1, random_state=100
)
rf_best.fit(X_train, y_train)
plot_roc_curve(rf_best, X_train, y_train)
# - We can see that before tuning the AUC score was 0.86, and after tuning it has increased to 0.88.
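# - The ROC above is plotted on the training data; as a quick sketch (not in the original flow),
#   the tuned forest can also be checked on the held-out test set, reusing X_test / y_test from the
#   split above and the classification_report import from the decision-tree section.
y_test_pred_rf = rf_best.predict(X_test)  # 'y_test_pred_rf' is a name introduced just for this sketch
print(classification_report(y_test, y_test_pred_rf))
plot_roc_curve(rf_best, X_test, y_test)  # test-set ROC for comparison with the train-set curve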
rf_best.feature_importances_
# creating a dataframe
imp_df = pd.DataFrame(
{"Variable_name": X_train.columns, "Imp_features": rf_best.feature_importances_}
)
imp_df.nlargest(30, "Imp_features")
# or
# imp_df.sort_values(by ='Imp_features' , ascending = False)
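# - A small sketch (optional) to visualise the same importances as a horizontal bar chart of the
#   top 10 features, using only the imp_df frame built above and matplotlib.
top_imp = imp_df.nlargest(10, "Imp_features").sort_values("Imp_features")  # 'top_imp' introduced for this sketch
top_imp.plot.barh(x="Variable_name", y="Imp_features", figsize=(8, 5), legend=False)
plt.title("Top 10 feature importances (tuned Random Forest)")
plt.show()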
# ## Telecom churn data analysis using Logistic Regression, Decision Tree, Random Forest
# - With 21 predictor variables we need to predict whether a customer will switch to another telecom company or not. In the telecom industry, switching to another company is called churning, and staying is called not churning.
# # Steps :
# - 1.Missing value imputation
# - 2.Outlier treatment
# - 3.Dummy variable creation for categorical variables
# - 4.Test-train split of the data
# - 5.Standardisation of the scales of continuous variables
#
# - a logistic regression model was built in Python using the GLM() function from the statsmodels library.
# - This model contained all the variables, some of which had insignificant coefficients.
# - Hence, some of these variables were removed first based on an automated approach,
# - i.e. RFE and then a manual approach based on VIF and p-value.
# - we also learnt about confusion matrix and accuracy and saw how accuracy was calculated for a logistic regression model.
# ------
# - Assuming we arbitrarily choose a cut-off (threshold) of 0.5: if the predicted probability is greater than 0.5 we conclude that the customer has churned, and if it is less than or equal to 0.5 we conclude that the customer has not churned (not switched). We then ask how many customers get classified as churned.
# ## Step - 1: Importing and Merging Data
# First import libraries
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
import warnings
warnings.filterwarnings("ignore")
# import customer data set
cust_data = pd.read_csv("/Users/sakshimunde/Downloads/customer_data.csv")
cust_data.head()
# from customer data we can see partner and dependents are binary
# seeing size of the customer data
cust_data.shape
# importing internet data
internet_data = pd.read_csv("/Users/sakshimunde/Downloads/internet_data.csv")
internet_data.head()
# online security, online backup, device protection, tech support and streaming movies are categorical (Yes / No / No internet service) in the internet data
# seeing the size of internt_data
internet_data.shape
# importing churn_data
churn_data = pd.read_csv("/Users/sakshimunde/Downloads/churn_data.csv")
churn_data.head()
# see size of churn_data
churn_data.shape
# # Merging or combining all data files
# merging cust and internet data on customer id
df = pd.merge(cust_data, internet_data, how="inner", on="customerID")
df.head()
# merging df data (which has combination of customer and internet data) with churn data
telecom = pd.merge(df, churn_data, how="inner", on="customerID")
pd.set_option("display.max_columns", None)
# # Step 2 :Inspecting the dataframe
telecom.head()
telecom.info()
# - the TotalCharges column should be float.
# - We can see no null values are present
telecom.describe()
print("\n1.Partner")
print(telecom.Partner.unique())
print("\n2.Dependents")
print(telecom.Dependents.unique())
print("\n3.OnlineSecurity")
print(telecom.OnlineSecurity.unique())
print("\n4.OnlineBackup")
print(telecom.OnlineBackup.value_counts())
print("\n1.DeviceProtection")
print(telecom.DeviceProtection.value_counts())
print("\n2.TechSupport")
print(telecom.TechSupport.value_counts())
print("\n3.StreamingTV")
print(telecom.StreamingTV.unique())
print("\n4.PhoneService")
print(telecom.PhoneService.unique())
print("\n5.PaperlessBilling")
print(telecom.PaperlessBilling.value_counts())
print("\n6.Churn")
print(telecom.Churn.value_counts())
# - The count of the level ‘No internet service’ is the same for all, i.e. 1526. Can you explain briefly why this has happened?
# This happens because the level ‘No internet service’ just tells you whether a user has internet service or not. Now because the number of users not having an internet service is the same, the count of this level in all of these variables will be the same. You can also check the value counts of the variable ‘InternetService’ and you’ll see that the output you’ll get is:
# Fiber Optic 3096
# DSL 2421
# No 1526
# Coincidence? No!
# This information is already contained in the variable ‘InternetService’ and hence, the count will be the same in all the variables with the level ‘No internet service’. This is actually also the reason we chose to drop this particular level.
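# - A one-line sketch to verify the point above directly on the merged dataframe:
telecom["InternetService"].value_counts()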
# - --------
# - we can see Partner, dependents , PhoneService,PaperlessBilling & Churn are binary data(yes/no) lets convert them to 0 and 1.
# Step 3: Data Preparation
# --
# - Converting binary variables yes/no to 0/1
#
binary_var = ["Partner", "Dependents", "PhoneService", "PaperlessBilling", "Churn"]
telecom[binary_var] = telecom[binary_var].apply(lambda x: x.map({"Yes": 1, "No": 0}))
# let's see yes or no got converted to 0 and 1
telecom.head()
# - We can see all binary vars got converted to 1's and 0's
# - Now let's convert categorical vars with >2 levels to dummy vars
# first we will convert gender,InternetService,Contract,PaymentMethod categorical vars to dummy
dummy1 = pd.get_dummies(
telecom[["gender", "InternetService", "Contract", "PaymentMethod"]], drop_first=True
)
dummy1.head()
# let's now concat dummy vars dataframe with telecom dataframe
telecom = pd.concat([telecom, dummy1], axis=1)
telecom.head()
# Now creating dummy vars of rest all categoriacl vars
print(telecom.MultipleLines.unique())
pd.get_dummies(telecom["MultipleLines"]).head()
# - WE SHOULD ADD THE COLUMN NAME AS A PREFIX SO WE KNOW WHICH COLUMN EACH DUMMY BELONGS TO.
# - For n levels we only need n-1 dummy variables, but pandas creates n dummies, so we have to drop one level ourselves - either the first one (drop_first) or whichever level is not useful.
# let's convert to dummies and give column names at beginning(prefix)
ML = pd.get_dummies(telecom["MultipleLines"], prefix="MultipleLines")
ML.head()
# let's drop MultipleLines_No phone service as it is not useful aswell
ML1 = ML.drop(["MultipleLines_No phone service"], axis=1)
ML1.head()
# - OnlineSecurity,OnlineBackup,DeviceProtection,techsupport,StreamingTV,Streaming movies are categorical vars that are left which needed to be converted to dummy vars.
# Converting OnlineSecurity to dummy var
OS = pd.get_dummies(telecom["OnlineSecurity"], prefix="OnlineSecurity")
OS.head()
# dropping OnlineSecurity No internet service column bcz there are n levels we should hv n-1 levels
OS1 = OS.drop(["OnlineSecurity_No internet service"], axis=1)
OS1.head()
# OnlineBackup
OB = pd.get_dummies(telecom["OnlineBackup"], prefix="OnlineBackup")
OB.head()
# drop OnlineBackup_No internet service
OB = OB.drop(["OnlineBackup_No internet service"], axis=1)
OB.head()
# DeviceProtection
DP = pd.get_dummies(telecom["DeviceProtection"], prefix="DeviceProtection")
DP.head()
# drop DeviceProtection_No internet service
DP = DP.drop(["DeviceProtection_No internet service"], axis=1)
DP.head()
# TechSupport
TS = pd.get_dummies(telecom["TechSupport"], prefix="TechSupport")
TS.head()
# drop TechSupport_No internet service
TS = TS.drop(["TechSupport_No internet service"], axis=1)
TS.head()
# StreamingTV
ST = pd.get_dummies(telecom["StreamingTV"], prefix="StreamingTV")
ST.head()
# drop StreamingTV_No internet service
ST = ST.drop(["StreamingTV_No internet service"], axis=1)
ST.head()
# StreamingMovies
SM = pd.get_dummies(telecom["StreamingMovies"], prefix="StreamingMovies")
SM.head()
# DROPPING STREAMING MOVIES NO INTERNET SERVICE BCZ FOR n LEVELS there should be n-1 LEVELS
SM = SM.drop(["StreamingMovies_No internet service"], axis=1)
SM.head()
# CONCATINATING ALL DUMMIES WITH TELECOM DATAFRAME
telecom = pd.concat([telecom, ML1], axis=1)
telecom = pd.concat([telecom, OS1], axis=1)
telecom = pd.concat([telecom, OB], axis=1)
telecom = pd.concat([telecom, DP], axis=1)
telecom = pd.concat([telecom, TS], axis=1)
telecom = pd.concat([telecom, ST], axis=1)
telecom = pd.concat([telecom, SM], axis=1)
telecom.head()
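# - Just for reference (a sketch, not used further in the notebook): the per-column dummy creation
#   above can be done in a single pd.get_dummies call. The sketch works on a throwaway frame named
#   'compact' so the 'telecom' dataframe used in the rest of the notebook is untouched.
service_cols = ["MultipleLines", "OnlineSecurity", "OnlineBackup",
                "DeviceProtection", "TechSupport", "StreamingTV", "StreamingMovies"]
compact = pd.get_dummies(telecom[service_cols])
# keep n-1 levels per variable by dropping the redundant 'No internet service' / 'No phone service' dummies
compact = compact.loc[:, ~compact.columns.str.contains("No internet service|No phone service")]
compact.head()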
# ###### Drop Repeated vars
# - We have created the dummy variables, so we can drop the original (repeated) variables
# DROPPING REPEATED VARS
telecom = telecom.drop(
[
"gender",
"InternetService",
"Contract",
"PaymentMethod",
"MultipleLines",
"OnlineSecurity",
"OnlineBackup",
"DeviceProtection",
"TechSupport",
"StreamingTV",
"StreamingMovies",
],
axis=1,
)
telecom.head()
# Customer id is not useful column so let's drop it
telecom = telecom.drop(["customerID"], axis=1)
# THERE ARE BLANK-SPACE ENTRIES IN THE TOTALCHARGES COLUMN, BECAUSE OF WHICH IT IS STORED AS AN OBJECT (STRING) TYPE
telecom["TotalCharges"] = telecom["TotalCharges"].str.replace(" ", "0")
telecom["TotalCharges"] = telecom["TotalCharges"].astype(float)
telecom["TotalCharges"].shape
# CHECKING WHETHER DATA TYPE CHANGED OR NOT
telecom["TotalCharges"].dtype
telecom.info()
telecom.head()
# ##### Checking for outliers
# - SeniorCitizen,tenure ,MonthlyCharges,TotalCharges are numerical data with high values.So we will see whether outliers are present in them or not
numerical_val = telecom[["SeniorCitizen", "tenure", "MonthlyCharges", "TotalCharges"]]
numerical_val.describe(percentiles=[0.25, 0.50, 0.75, 0.90, 0.95, 0.99])
# - We can see there are no outliers. All values are increasing gradually.
# - Also after 99% there is no sudden increase
telecom.isnull().sum()
# # Step 4 : Splitting data into train and test sets
# assigning all independent vars except churn and customer id to X axis
X = telecom.drop(["Churn"], axis=1)
X.head()
# assigning churn(target var) column to y axis
y = telecom["Churn"]
y.head()
from sklearn.model_selection import train_test_split
# splitting data into train and test
X_train, X_test, y_train, y_test = train_test_split(
X, y, train_size=0.7, test_size=0.30, random_state=100
)
print(X_train.shape)
print(y_train.shape)
print(X_test.shape)
print(y_test.shape)
# X has 30 columns and y has only 1 variable, i.e. Churn
# ## Step 5 : Feature Scaling
# - Scaling helps us in faster convergence of gradient descent.
# - Standard scaler centers mean to 0
# - The formula for standardising a value in a dataset is given by:
# - (X − μ)/σ
# - Min max scaling compress values between min 0 and max 1
# --------
# - 'fit_transform' on the train set but just 'transform' on the test set. Why do you think this is done ?
# - 'fit_transform' first learns the mean and standard deviation of each variable from the train set, i.e. it scales the variables using (X − μ)/σ.
# - Once this is done, all the train variables are transformed with these learned statistics.
# - When you move to the test set, you don't want the scaler to learn anything new.
# - If we fit on the test data as well, the test set would be scaled with its own mean and standard deviation, which would differ from the train set's; train and test must be scaled with the same statistics, so we do not fit on the test data.
# - You want to use the old centralisation that you had when you used fit on the train dataset.
# - And this is why you don't apply 'fit' on the test data, just the 'transform'.
#
from sklearn.preprocessing import StandardScaler
# creating an object of standard scaler as in sklearn we create object of a class
scaler = StandardScaler()
# fit and transform large values on same scale that other vars are
X_train[["tenure", "MonthlyCharges", "TotalCharges"]] = scaler.fit_transform(
X_train[["tenure", "MonthlyCharges", "TotalCharges"]]
)
X_train.head()
#
# - The variables had these ranges before standardisation:
# - Tenure = 1 to 72
# - Monthly charges = 18.25 to 118.80
# - Total charges = 18.8 to 8685
#
# - After standardisation, the ranges of the variables changed to:
# - Tenure = -1.28 to +1.61
# - Monthly charges = -1.55 to +1.79
# - Total charges = -0.99 to 2.83
# - Clearly, none of the variables will have a disproportionate effect on the model’s results now.
# churn data
# --
# let's see the churn percentage, i.e. how many customers changed their network / telecom company
# churn %
churn = (sum(telecom["Churn"]) / len(telecom["Churn"].index)) * 100
churn
# - About 27% of the customers churned, i.e. 27% of customers changed their network company.
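# - The same churn percentage can be read off in one line (a small sketch) with value_counts:
telecom["Churn"].value_counts(normalize=True) * 100  # share of 0 (not churned) vs 1 (churned), in %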
# ### step 6 :Correlation
# seeing correlation between the vars
# plotting a heatmap of corr() to see the relation between the vars
plt.figure(figsize=[35, 15])
sns.heatmap(telecom.corr(), annot=True, cmap="Greens")
plt.show()
# - We can see that the MultipleLines_Yes/No, OnlineSecurity_Yes/No, OnlineBackup_Yes/No, DeviceProtection_Yes/No, TechSupport, StreamingTV and StreamingMovies dummy variables are strongly correlated among themselves.
# - We will not drop all of these variables now; we drop them through feature elimination, because some of them could be important.
# - So it is better that we drop one of these variables from each pair as they won’t add much value to the model.
# - The choice of which of these pair of variables you desire to drop is completely up to you; we’ve chosen to drop all the 'Nos' because the 'Yeses' are generally more interpretable and easy-to-work-with variables.
# - Let's drop this inter correlated vars: also called multicollinearity.
# - dropping from both X_train data and xtest data.
X_train = X_train.drop(
[
"MultipleLines_No",
"OnlineSecurity_No",
"OnlineBackup_No",
"DeviceProtection_No",
"TechSupport_No",
"StreamingTV_No",
"StreamingMovies_No",
],
axis=1,
)
X_test = X_test.drop(
[
"MultipleLines_No",
"OnlineSecurity_No",
"OnlineBackup_No",
"DeviceProtection_No",
"TechSupport_No",
"StreamingTV_No",
"StreamingMovies_No",
],
axis=1,
)
# Now after dropping some of the dummy vars let's see relation between rest of the vars
plt.figure(figsize=[20, 10])
sns.heatmap(X_train.corr(), annot=True)
plt.show()
# # Step 7 : Model Building
# - Now that we have completed all the pre-processing steps, inspected the correlation values and have eliminated a few variables, it’s time to build our first model.
import statsmodels.api as sm
# building a logistic regression model.first add a constant
X_train_sm = sm.add_constant(X_train)
X_train_sm.head()
# Logistic regression models a binomial (two-class) outcome, so we use the Binomial family
family = sm.families.Binomial()
family
# BUilding logistic regression model and fitting it (mx+c)
logm1 = sm.GLM(y_train, X_train_sm, family).fit()
logm1
# now our model is built.Let's see summary
logm1.summary()
# - In this table, our key focus area is just the different coefficients and their respective p-values. As you can see, there are many variables whose p-values are high, implying that that variable is statistically insignificant. So we need to eliminate some of the variables in order to build a better model.
#
# - We'll first eliminate a few features using Recursive Feature Elimination (RFE), and once we have reached a small set of variables to work with, we can then use manual feature elimination (i.e. manually eliminating features based on observing the p-values and VIFs).
# -------
# - For a variable to be insignificant, the p-value should be greater than 0.05.
# - In hypothesis testing, if the p-value is greater than the significance level (alpha), we fail to reject the null hypothesis.
# - But in regression we should have p value <0.05(5%) to make it significant.
# -----
# - Recall that the null hypothesis for any beta was:
# - βi=0
# - And if the p-value is small, you can say that the coefficient is significant, and hence, you can reject the null hypothesis that
# - βi=0
# ---------
# # Feature selection using RFE
# - Now that We built our first model based on the summary statistics, we inferred that many of the variables might be insignificant and hence, we need to do some feature elimination.
# - Since the number of features is huge, let's first start with an automated feature selection technique (RFE) and then move to manual feature elimination (using p-values and VIFs) : this is exactly the same process that we did in linear regression.
# - first using rfe select significant(important) vars and then build model using this selected vars.
# - again check this selected vars using statsmodel or sklearn.
# - RFE won't work with statsmodel.So we have to use logistic regression using sklearn.
# # Steps :
# - 1. Import rfe and logistic regression models
# - 2. fit the model using X & Y using rfe
# - 3.select top columns which are significant according to rfe
# - 4.adding constant
# - 5.adding binomial family
# - 6.building a model and fitting it to get parameters
# - 7.predicting vals
# - 8.then converting predicted values to binary numbers
# - 9.finding accuracy.
# import logistic regression from sklearn
from sklearn.linear_model import LogisticRegression
lor = LogisticRegression()
lor
# now import RFE
from sklearn.feature_selection import RFE
# now select how top vars we want.we want top 15 vars
# creating an object of class RFE
rfe = RFE(lor, n_features_to_select=15)
rfe
# fitting the model
rfe = rfe.fit(X_train, y_train)
rfe
# Let's see how many vars got selected.support will show true or false in binary way
rfe.support_
# let's see which column got selected and what is columns rank
X_train.columns, rfe.support_, rfe.ranking_
# lets zip them together.This will show rank of all columns
list(zip(X_train.columns, rfe.ranking_))
# now get only top 15 columns
# rfe.support_ will give only the selected columns
col = X_train.columns[rfe.support_]
col
# - we can see onlly True columns we got i.e., top 15 columns
# - let's see which columns are insignificant.
# - 8 columns are insignificant
X_train.columns[~rfe.support_]
# ##### Creating the model using statsmodel
# now that we have top 15 columns.We will build a model using this vars
# assign top 15 columns to X train
X_train_rfe = X_train[col]
X_train_rfe.head()
# now let's add constant to train data
X_train_sm = sm.add_constant(X_train_rfe)
X_train_sm.head()
# - GLM (Generalised Linear Models) method of the library statsmodels.
# - 'Binomial()' in the 'family' argument tells statsmodels that it needs to fit a logit curve to a binomial data (i.e. in which the target will have just two classes, here 'Churn' and 'Non-Churn').
# Now constant is added,let's build our 2nd model
# as it is logistic regression we should have binomial distribution
family = sm.families.Binomial()
family
# now that binomial is created ,let's build our 2nd model after RFE using GLM
# Estimating COEFFICIENTS using generalised linear method /maximum likelihood function
logm2 = sm.GLM(y_train, X_train_sm, family).fit()
logm2 # logm : logistic model
# let's see summary
logm2.summary()
# - We can see all p values are <0.05 or 5% so all vars are significant.
# - Now for creating confusion matrix we need two vars.one is actual and other is predicted ,so that we con do comparision and can understand what we predicted is actually true or not.
# - y pred formulae : mx+c
# - we have inbuilt function predict to get y_pred
# - 'Binomial()' in the 'family' argument tells statsmodels that it needs to fit a logit curve to a binomial data (i.e. in which the target will have just two classes, here 'Churn' and 'Non-Churn').
# - Target var churn will be under two class +ve or -ve class i.e., in this 'churn' and 'not churn'
# so we need y pred.We predict on fit model.we predict using X_train data which is fitted
y_train_pred = logm2.predict(X_train_sm)
y_train_pred[:10] # seeing only 10 values .It's like head()/tail()
# - We got the predicted values. Now we will store the actual y values in a 'churn' column and the predicted probabilities in a 'churn_prob' column (the predicted probability that a customer churns / switches network).
# - We reshape the predicted values into a flat 1-D array; the order of the values stays the same.
# - reshape(-1) flattens the array - the -1 tells NumPy to infer the length of that single dimension.
y_train_pred = y_train_pred.values.reshape(-1)
y_train_pred
# the values are now a flat 1-D array
# now assigning y train pred to churn prob and ytrain as churn
y_train_pred_final = pd.DataFrame({"churn": y_train.values, "churn_prob": y_train_pred})
y_train_pred_final
# #### We can see output is categorical.Churn column has 0's/1's which are categorical values
# Now add a custid column
y_train_pred_final["custid"] = y_train.index
y_train_pred_final.head()
# ##### custid is also added.Now we will add predicted column which says if y pred value is >0.5 then write as 1
# - if churn prob >0.5 assign 1 else 0.
# - we are converting into 0's and 1's to churn prob column bcz churn is alreday in 1/0 and in classification we should have binary outcomes i.e., 1/0.
# - churn is already in 0/1 ,now we will convert churn prob in 1/0 and write them in separate column.
# taking threshold as 0.5 by default and let's see 0.5 cutoff is correct or not
y_train_pred_final["Predicted"] = y_train_pred_final["churn_prob"].map(
lambda x: 1 if x > 0.5 else 0
)
y_train_pred_final
# #### The logistic (sigmoid) curve only gives churn probabilities; we convert churn_prob into 0's/1's because the final classification needs a hard class label, not a probability.
# - we can see where churn prob is >5% there predicted column has 1.
# - Since the logistic curve gives you just the probabilities and not the actual classification of 'Churn' and 'Non-Churn', you need to find a threshold probability to classify customers as 'churn' and 'non-churn'.
# - Here, we choose 0.5 as an arbitrary cutoff wherein if the probability of a particular customer churning is less than 0.5, you'd classify it as 'Non-Churn' and if it's greater than 0.5, you'd classify it as 'Churn'. The choice of 0.5 is completely arbitrary at this stage.
# -----
# - You chose a cutoff of 0.5 in order to classify the customers into 'Churn' and 'Non-Churn'.
# - Now, since we're classifying the customers into two classes, we'll obviously have some errors. The classes of errors that would be there are:
# - 'Churn' customers being (incorrectly) classified as 'Non-Churn' :- actually churned but predicted as not churned - these are the false negatives
# - 'Non-Churn' customers being (incorrectly) classified as 'Churn' :- customers who did not churn (did not switch) but are predicted as churned - these are the false positives
# ### Confusion matrix
from sklearn import metrics
confusion = metrics.confusion_matrix(
y_train_pred_final["churn"], y_train_pred_final["Predicted"]
)
confusion
# - ------- predicted : ----------- not churn, churn
# - --- --actual
# - not churn : -------------- 3255 ,372 ---------------- TN,FP
# - churn : ------------------- 550,753 -------------- FN ,TP
#
#
#
# - The churn/churn cell (both actual and predicted = 1) has 753 customers - these are the true positives (TP).
# - The not-churn/not-churn cell has 3255 customers - these are the true negatives (TN).
# - 3255 customers didn't churn (didn't switch) their network company and were predicted correctly.
# - 753 customers churned (switched) their network and were predicted correctly.
# -------
# - False positive: actually negative but predicted positive - an error. 372 customers were predicted as 1 but were actually 0.
# - False negative: actually positive but predicted negative - 550 customers are actually positive (churned) but were predicted as negative.
# ---------
# - We get accuracy by using formulae or built in funtion :
# - (TP+TN)/(TP+TN+FP+FN)
metrics.accuracy_score(y_train_pred_final["churn"], y_train_pred_final["Predicted"])
# - Accuracy is 81% which is a good % value to begin with.
# - So far you have only selected features based on RFE.
# - Further elimination of features using the p-values and VIFs manually is yet to be done.
# -----
# - We saw in the pairwise correlations, there are high values of correlations present between the 15 features, i.e. there is still some multicollinearity among the features.
# - So we definitely need to check the VIFs as well to further eliminate the redundant variables.
# - VIF calculates how well one independent variable is explained by all the other independent variables combined.
# ## Checking VIF'S
# - VIF_i = 1 / (1 - R_i²), where R_i² comes from regressing feature i on all the other features. As a rule of thumb, VIF > 5 indicates high multicollinearity and such variables are candidates for dropping; VIF below 5 is generally acceptable.
# # Steps :
# - 1.finding vif value
# - 2.manual feature elimination
# - 3.BUild a model and fit
# - 1.see summary and based on p values eliminate vars if>0.05
# - 2.prdict value using fit model and X train value of fitted model
# - 3.create a dataframe of actual y value and predicted y value
# - 4.now, using the predicted probabilities, create binary 1/0 labels: values > 0.5 become 1 and values <= 0.5 become 0
#      (0.5 is used here simply because it is the default threshold)
# - 5.see confusion matrix using y actual and predicted values.
# - 6.Find accuracy score using y actual value and predicted binary values
# - See Vif values.Repeat this process until we get significant variables.
from statsmodels.stats.outliers_influence import variance_inflation_factor
vif = pd.DataFrame()
vif["Features"] = X_train[col].columns
vif["VIF"] = [
variance_inflation_factor(X_train[col].values, i)
for i in range(X_train[col].shape[1])
]
vif["VIF"] = round(vif["VIF"], 2)
vif
vif["VIF"] = round(vif["VIF"], 2)
vif = vif.sort_values(by="VIF", ascending=False)
vif
# - PhoneService has a very high VIF (well above the threshold of 5); let's drop it. A high VIF means it is highly correlated with the other independent variables, i.e. multicollinearity.
# ## MANUAL FEATURE ELIMINATION
# #### dropping PhoneService
# let's see all 15 columns that were selected
col
# Now from this columns let's drop phone service
col = col.drop("PhoneService")
col
# Now once again we need to build model
X_train_rfe = X_train[col]
X_train_rfe.head()
# now we will add constant to X train data set
X_train_sm = sm.add_constant(X_train_rfe)
X_train_sm.head()
# #### Building and fitting model after PhoneService is dropped
# now that constant is added let's build(mx+c) our model and fit it.After fitting only we will get parameters.
# this is our 3rd logistic model
logm3 = sm.GLM(y_train, X_train_sm, family).fit() # family is binomial
logm3
# now let's see summary
logm3.summary()
# - p-values of all variables are now significant (< 0.05). Let's also check the VIF values.
# - We also have to check the accuracy, to see whether dropping the column changed it.
# #### Creating a prdicted var
y_train_pred = logm3.predict(X_train_sm)
y_train_pred
# we will reshape y_train_pred
y_train_pred = y_train_pred.values.reshape(-1)
y_train_pred
# Now y_train_pred got reshaped. we will assign actual y value as churn and pred y value as churn Probabilty
y_train_pred_final = pd.DataFrame({"churn": y_train, "churn_prob": y_train_pred})
y_train_pred_final
# now let's add predicted column
y_train_pred_final["Predicted"] = y_train_pred_final["churn_prob"].map(
lambda x: 1 if x > 0.5 else 0
)
y_train_pred_final
# now create conusion matrix for calculating accuracy
confusion = metrics.confusion_matrix(
y_train_pred_final.churn, y_train_pred_final.Predicted
)
confusion
metrics.accuracy_score(y_train_pred_final.churn, y_train_pred_final.Predicted)
# - We can see there is no big change in accuracy, so dropping the PhoneService column didn't affect our accuracy - it means PhoneService was a redundant / insignificant column
# - Let's see vif value
vif = pd.DataFrame()
vif["Features"] = X_train[col].columns # this will give column names
vif["VIF"] = [
variance_inflation_factor(X_train[col].values, i)
for i in range(X_train[col].shape[1])
]
vif
# VIF has about 6 decimal places, so we will round it to 2 and sort in descending order (high to low)
vif["VIF"] = round(vif["VIF"], 2)
vif = vif.sort_values(by="VIF", ascending=False)
vif
# #### dropping TotalCharges
# TotalCharges has a VIF of 7.53, which is high; it means TotalCharges is strongly related to the other
# independent variables, i.e. multicollinearity. So we drop this column.
col = col.drop("TotalCharges")
col
# NOw again we will build a model, to see what changes happened in model after dropping total charges column
# we will assign this col to some other var
X_train_sm = X_train[col]
# build a model and then fit that model
logm4 = sm.GLM(y_train, X_train_sm, family).fit() # family is binomial distribution
logm4.summary()
# we can see MultipleLines_Yes has a high p-value (~48%)
# but first let's check the accuracy after dropping TotalCharges
y_train_pred = logm4.predict(X_train_sm).values.reshape(-1)
y_train_pred
# now assign actual y train value as churn and predicted y train value as churn probability
y_train_pred_final = pd.DataFrame({"churn": y_train, "churn_prob": y_train_pred})
y_train_pred_final.head()
# now we will add predicted column bcz to make churn prob column in binary values
y_train_pred_final["Predicted"] = y_train_pred_final["churn_prob"].map(
lambda x: 1 if x > 0.5 else 0
)
y_train_pred_final
# now create a confusion matrix or directly calculate accuracy score
confusion = metrics.confusion_matrix(
y_train_pred_final.churn, y_train_pred_final.Predicted
)
print(confusion)
print(metrics.accuracy_score(y_train_pred_final.churn, y_train_pred_final.Predicted))
# - from 81% to 79% our accuracy score is not a big change
# now let's see vif value
vif = pd.DataFrame()
vif["Features"] = X_train[col].columns # this will give column names
vif["VIF"] = [
variance_inflation_factor(X_train[col].values, i)
for i in range(X_train[col].shape[1])
]
# shape[1] is the number of columns (features) in X_train[col], so the loop computes a VIF for every feature
vif["VIF"] = round(vif["VIF"], 2)
vif
# - All VIF values are now below 5, so multicollinearity is no longer a concern for these variables.
# - But we saw from the summary that the MultipleLines_Yes column has a p-value of ~48%, which is very high.
# #### dropping Mutiple lines yes
col = col.drop("MultipleLines_Yes")
# assigning this col to X_train_sm
X_train_sm = X_train[col]
X_train_sm.head()
# Build the model and fit it to get the coefficients (parameters). This is our 5th model. GLM: Generalised Linear Model
# we are not adding the constant here; once the final set of variables is decided we will add the constant back
logm5 = sm.GLM(y_train, X_train_sm, family).fit()
logm5.summary()
# - The PaperlessBilling column has a high p-value (~20%). We will drop it, but first let's check the accuracy, to see whether dropping MultipleLines_Yes changed it.
# #### 'The new shape should be compatible with the original shape.' Passing -1 to reshape means that dimension is unknown and we want NumPy to figure it out.
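# - A tiny illustration (sketch only) of the -1 argument: NumPy infers the flattened length itself.
np.array([[1, 2], [3, 4], [5, 6]]).reshape(-1)  # -> array([1, 2, 3, 4, 5, 6])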
# Creating a predictive value of train data
y_train_pred = logm5.predict(X_train_sm).values.reshape(-1)
y_train_pred
y_train_pred_final = pd.DataFrame({"churn": y_train, "churn_prob": y_train_pred})
y_train_pred_final
# now we will add a predicted column which will be binary values of churn prob column
y_train_pred_final["Predicted"] = y_train_pred_final["churn_prob"].map(
lambda x: 1 if x > 0.5 else 0
)
y_train_pred_final
# now that we have the actual and predicted y values, let's compute the accuracy
metrics.accuracy_score(y_train_pred_final.churn, y_train_pred_final.Predicted)
# - So it was a good idea to drop MultipleLines_Yes: it was redundant, as our accuracy score has not changed.
# - We know from the model summary that the p-value of PaperlessBilling is ~20%, so we will drop that column next.
# #### dropping paperless billing
# dropping paperless billings
col = col.drop("PaperlessBilling")
X_train_sm1 = X_train[col]
X_train_sm = sm.add_constant(X_train_sm1)
# build logistic model and fit it
logm6 = sm.GLM(y_train, X_train_sm, family).fit() # family = sm.families(Binomial())
logm6.summary()
# now lets see the accuracy. for that we need predict value
y_train_pred = logm6.predict(X_train_sm).values.reshape(-1)
y_train_pred
# now making a dataframe of y and y predict value
y_train_pred_final = pd.DataFrame({"churn": y_train, "churn_prob": y_train_pred})
y_train_pred_final
# Creating a prdicted column which is nothing but binary values of churn probability
y_train_pred_final["Predicted"] = y_train_pred_final["churn_prob"].map(
lambda x: 1 if x > 0.5 else 0
)
y_train_pred_final
# now that we have actual y and predicted value let's create confusion matrix and find accuracy using accuracy score
confusion = metrics.confusion_matrix(
y_train_pred_final.churn, y_train_pred_final.Predicted
)
print("confusion matrix")
print(confusion)
# let's see accuracy
print("\n")
print("accuracy value:")
print(metrics.accuracy_score(y_train_pred_final.churn, y_train_pred_final.Predicted))
# - So dropping PaperlessBilling didn't affect our accuracy; it is still ~79%, which is a good value.
# - The p-value of PaymentMethod_Electronic check is ~11%, which is not significant, so we need to drop it.
# #### dropping PaymentMethod_Electronic check var
# dropping PaymentMethod_Electronic check column
col = col.drop("PaymentMethod_Electronic check")
col
X_train_sm = X_train[col]
# build a model and fit model
X_train_sm = sm.add_constant(X_train_sm)
logm7 = sm.GLM(y_train, X_train_sm, family).fit()
logm7.summary()
# - p values of all vars is sigificant.
# - Now let's find accuracy after dropping PaymentMethod_Electronic check column.For that we need y Train predicted value.
# find predictive
y_train_pred = logm7.predict(X_train_sm).values.reshape(
-1
) # reshaping means we r giving dimension value
# Now creating a dataframe
y_train_pred_final = pd.DataFrame({"churn": y_train, "churn_prob": y_train_pred})
y_train_pred_final
y_train_pred_final["CUSTID"] = y_train.index
# now create a binary value of churn probability and assign it to predicted column
# by making values greater than 0.5 as 1 and lessthan 0.5 as 0
y_train_pred_final["Predicted"] = y_train_pred_final["churn_prob"].map(
lambda x: 1 if x > 0.5 else 0
)
y_train_pred_final
# now see the confusion matrix and accuracy score
confusion = metrics.confusion_matrix(
y_train_pred_final.churn, y_train_pred_final.Predicted
)
print({"CONFUSION MATRIX": confusion})
print("\n")
print(
{
"ACCURACY SCORE": metrics.accuracy_score(
y_train_pred_final.churn, y_train_pred_final.Predicted
)
}
)
# - With the help of curly braces (a dict) we can label each printed value, as above: the confusion matrix and the ~79% accuracy value.
# - Dropping PaymentMethod_Electronic check was a good idea because our accuracy didn't change; it was an insignificant variable anyway.
# Now let's see vif value
vif = pd.DataFrame()
vif["features"] = X_train[col].columns
vif["VIF"] = [
variance_inflation_factor(X_train[col].values, i)
for i in range(X_train[col].shape[1])
]
vif
# - P values and vif values of all vars are significant.
# - which mean we can go with this model and make predictions using this model.
# LET'S VIEW OUR CONFUSION MATRIX
confusion = metrics.confusion_matrix(
y_train_pred_final.churn, y_train_pred_final.Predicted
)
confusion
# PREDICTED : not_churn | churn
# ACTUAL :
# not_churn : 3243 , 384 : TN , FP
# churn : 595 , 708 : FN , TP
# ##### accuracy is often not the best metric
# - From the matrix above, 1303 customers actually churned (595 + 708), but the model correctly identifies only 708 of them and labels the other 595 as not churned. So accuracy alone is not enough - it can even be dangerously misleading.
# - Suppose the company wants to give an offer to customers who are about to churn (switch): based on our predictions it would target only the 708 correctly identified churners, while the 595 churners we missed would get no offer and could move to a competitor - a real loss for the company.
# - In other words, 1303 customers are actually churning but we catch only 708 of them. This shows how risky relying on accuracy alone can be.
# - The company won't be able to give offers to the remaining ~46% of 'churn' customers, and they could switch to a competitor!
#
# - 708 out of 1303 is only ~54% of the churners.
# - So although accuracy is about 80%, the model only predicts ~54% of churn cases correctly.
# ----
# - In essence, what’s happening here is that you care more about one class (class='churn') than the other.
# - This is a very common situation in classification problems - you almost always care more about one class than the other.
# - On the other hand, the accuracy tells you the model's performance on both classes combined - which is fine, but not the most important metric.
# ----
# - This brings us to two of the most commonly used metrics to evaluate a classification model:
# - Sensitivity : Actual +ve's or yess
# - Specificity : Actual no's or -ve's
# ------
# ###### SENSITIVITY / TRUE POSITIVE RATE/Recall = Total number of actual "YESES correctly predicted" /Total number of "actual yeses"
# - We detected only ~54% of the positives (churned customers, the 1's) correctly; the remaining ~46% we failed to detect.
# - Thus, we can see that although we had high accuracy (~80%), our sensitivity turned out to be quite low (~54%)
#
# ###### Specificity = Total number of actual NO's correctly predicted/Total number of actual NO'S
# - total actual NO's = 3243 + 384 = 3627
# - We detected ~89% of the negatives (the 0's, i.e. not-churned customers) correctly, which is good.
# - False positive rate = number of actual NO's predicted as YES / total number of actual NO's
# - Specificity (~89%) tells us what fraction of the not-churned customers we predicted correctly; the false positive rate is the remaining fraction of not-churned customers that we predicted incorrectly.
# ###### positive predicted / PRECISION = the number of +ves correctly predicted / the total number of +ves predicted
# - means our prediction is correct by this much %.What we predicted +vely is correct.
# ###### Negative predicted = the number of negatives correctly predicted / the total number of negatives predicted.
# - this gives -ve predicted values .
#
#
# # Metrics beyond simply accuracy
# - SENSITIVITY : ACTUAL POSITIVE
# - SPECIFICITY : ACTUAL NEGATIVE
# - POSITIVE PREDICTIVE VALUE / PRECISION : computed over the PREDICTED POSITIVES (recall, by contrast, is the same as sensitivity)
# - NEGATIVE PREDICTIVE VALUE : PREDICTED NEGATIVE
# -----
# - We want a cutoff for which the TPR is high while the FPR stays low; such a cutoff gives a better model.
# confusion is laid out as [[TN, FP], [FN, TP]]: row index = actual class, column index = predicted class
TP = confusion[1, 1]  # actual 1, predicted 1
TN = confusion[0, 0]  # actual 0, predicted 0
FP = confusion[0, 1]  # actual 0, predicted 1
FN = confusion[1, 0]  # actual 1, predicted 0
TP  # TRUE POSITIVE : actually churned and predicted churned
TN  # TRUE NEGATIVE : actually not churned and predicted not churned
FP  # FALSE POSITIVE : actually not churned but predicted churned
FN  # FALSE NEGATIVE : actually churned but predicted not churned
# SENSITIVITY : ACTUAL YESS CORRECTLY PREDICTED/TOTAL ACTUAL YES
# Let's see sensitivity of our logistic regression
sensitivity = TP / float(TP + FN)
({"SENSITIVITY": sensitivity})
# - Sensitivity is ~54%: we correctly identify about 54% of the customers who actually churned (switched) to other network companies.
# LET'S SEE THE SPECIFICITY OF OUR LOGISTIC REGRESSION.
specificity = TN / float(TN + FP)
print({"SPECIFICITY": specificity})
# ~89% of the customers who did not churn are correctly identified
# - Whatever we are measuring goes in the numerator: for specificity, the correctly predicted negatives are on top.
# #### Ideally sensitivity and specificity should both be high and reasonably close to each other. Here we get 54% & 89% - a very large gap, which is not a good picture. This large gap arises because we simply used the default 0.5 threshold.
# this is optional
# Let's see the false positive rate: customers who did not actually churn but were predicted as churned
print({"FALSE POSITIVE": FP / float(FP + TN)})
# ~10% of the customers who did not churn were predicted as churned
# ###### FALSE POSITIVE RATE : 1-SPECIFICITY
# - False Postive Rate is nothing but (1 - True Negative Rate) and the True Negative Rate is simply the specificity.
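# - A compact sketch of the same bookkeeping: ravel() unpacks the 2x2 confusion matrix as
#   (tn, fp, fn, tp) in one line, which avoids indexing mistakes; lowercase names are used here
#   so the TP/TN/FP/FN variables above are left untouched.
tn, fp, fn, tp = metrics.confusion_matrix(
    y_train_pred_final.churn, y_train_pred_final.Predicted
).ravel()
print({"sensitivity": tp / (tp + fn), "specificity": tn / (tn + fp)})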
# Let's see customers have churned in actual and we predicted has not churned
# false negatives
print({"FALSE NEGATIVE": FN / float(FN + TP)})
# ~46% of the customers who actually churned were predicted as not churned (this is 1 - sensitivity)
# Positive predictive value / Precision: of all the customers we predicted as churned (+ve), how many actually churned
# (note: this is precision, not recall - recall is the same as sensitivity)
print({"POSTIVE PREDICTED": TP / float(TP + FP)})
# Negative predicted value.Means what we predicted -vely is correct or not
print({"NEGATIVE PREDICTED": TN / float(TN + FN)})
# - Positive predictive value (precision): ~64% of what we predicted as churn is correct.
# - Negative predictive value: ~84% of what we predicted as not-churn is correct.
# - So our model seems to have high accuracy (~80%) and high specificity (~89%), but low sensitivity (~54%).
# -----
# - THRESHOLD / cut-off: we chose 0.5 arbitrarily, and there was no particular logic behind it.
# - So it might not be the ideal cut-off point for classification which is why we might be getting such a low sensitivity and high specificity. So how do you find the ideal threshold/cutoff point?
# - For low values of threshold, you'd have a higher number of customers predicted as a 1 (Churn). This is because if the threshold is low, it basically means that everything above that threshold would be one and everything below that threshold would be zero.So naturally, a lower cutoff would mean a higher number of customers being identified as 'Churn'.
# - Similarly, for high values of threshold, you'd have a higher number of customers predicted as a 0 (Not-Churn) and a lower number of customers predicted as a 1 (Churn).
# ----
# - ROC Curves which show the tradeoff between the True Positive Rate (TPR) and the False Positive Rate (FPR).
# - We should have high TPR and low FPR.
# - TPR and FPR are nothing but sensitivity and (1 - specificity), so it can also be looked at as a tradeoff between sensitivity and specificity.
# ### A good ROC curve is the one that touches the upper-left corner of the graph; so the higher the area under the curve of a ROC curve, the better is your model.
# - we can clearly see from the ROC curve that when the value of TPR (on the Y-axis) is increasing, the value of FPR (on the X-axis) also increases.
# ###### - The highest AUC is the most accurate model. Also, note that the highest value of AUC can be 1
# ## Step 9 : Plotting ROC CURVE
# - The closer the curve follows the left-hand border and then the top border of the ROC space, the more accurate the test.
# - The closer the curve comes to the 45-degree diagonal of the ROC space, the less accurate the test.
# - sensitivity on Y axis and false postive rate(1-specificity) on x axis.
# ### USES of ROC curve
# #### The ROC curve is also used to see how efficient our model is.
# - To plot the ROC curve, we need to calculate the TPR and FPR for many different thresholds (This step is included in all relevant libraries as scikit-learn).
# - For each threshold, we plot the FPR value in the x-axis and the TPR value in the y-axis. We then join the dots with a line.
# finding FPR and TPR for all thresholds from 0.0 to 0.9
def draw_roc(actual, probs):
fpr, tpr, thresholds = metrics.roc_curve(
actual, probs, drop_intermediate=False
) # at actual we will give actual y value
# at probs we will write predicted values
auc_score = metrics.roc_auc_score(actual, probs)
plt.plot(fpr, tpr, label="ROC curve (area = %0.2f)" % auc_score)
# for middle line and x & y axis numbers
plt.plot(
[0, 1], [0, 1]
    )  # diagonal reference line: a straight line from (0, 0) to (1, 1) for comparison
plt.xlim([0.0, 1.0]) # x limits till 1 means numbers on x axis
plt.ylim(
[0.0, 1.05]
    )  # y limit extends to 1.05 so the curve does not sit exactly on the top edge of the plot
plt.title("Receiver operating characteristic example:ROC")
plt.legend(loc="lower right")
plt.xlabel("FPR")
plt.ylabel("TPR")
plt.show()
return None
# getting values for fpr and tpr from actual y and predicted y values
fpr, tpr, thresholds = metrics.roc_curve(
y_train_pred_final.churn, y_train_pred_final.churn_prob, drop_intermediate=False
)
fpr, tpr, thresholds
# - all these fpr and tpr values correspond to different thresholds; by plotting the ROC curve we will see which threshold is best.
# - roc_auc_score is the area under the ROC curve.
# - it is computed from the actual y values and the predicted probabilities.
# - the curve itself comes from metrics.roc_curve; roc_auc_score only reports the area under it
# plotting roc curve
draw_roc(y_train_pred_final.churn, y_train_pred_final.churn_prob)
# # Step 10 : Finding the optimal cutoff point
numbers = [float(x / 10) for x in range(10)]
numbers
# - x takes values from 0 to 9 and each value is divided by 10: 0/10 = 0.0, 1/10 = 0.1, 2/10 = 0.2, and so on.
for i in numbers:
print(i)
y_train_pred_final[i] = y_train_pred_final.churn_prob.map(
lambda k: 1 if k > i else 0
)
y_train_pred_final
# Calculating accuracy ,probability , sensitivity and specificity for various cutoffs (threshold)
cutoff_df = pd.DataFrame(columns=["prob", "accuracy", "sensitivity", "specificity"])
num = [0.0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9]
for i in num:
confusion1 = metrics.confusion_matrix(
y_train_pred_final.churn, y_train_pred_final[i]
)
total = sum(
sum(confusion1)
) # sum of all rows and it will give for 0.0 all rows sum,for 0.1,0.2 etc
# print({"TOTAL":total})
# Now let's find accuracy ,sensit & specificity
accuracy = (confusion1[0, 0] + confusion1[1, 1]) / total # 0&1 are index values,
# TN+TP/(TN+TP+FP+FN) denominator is nothing but total
sensitivity = (confusion1[1, 1]) / (confusion1[1, 0] + confusion1[1, 1]) # TP/TP+FN
specificity = (confusion1[0, 0]) / (confusion1[0, 0] + confusion1[0, 1]) # TN/TN+FP
cutoff_df.loc[i] = [i, accuracy, sensitivity, specificity]
print(cutoff_df)
# - We now have accuracy, specificity & sensitivity for every threshold.
# - Using these values we will plot a line graph. Earlier we computed specificity, accuracy and sensitivity only for the default threshold of 0.5.
# prob is thresholds
cutoff_df.plot.line(x="prob", y=["sensitivity", "accuracy", "specificity"])
plt.show()
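# Optional sketch: instead of reading the crossover off the plot, we can pick the threshold
# in cutoff_df where sensitivity and specificity are closest to each other.
best_cutoff = (cutoff_df.sensitivity - cutoff_df.specificity).abs().idxmin()
print({"MOST BALANCED CUTOFF": best_cutoff})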
# seeing predicted column using 0.3 as threshold point
y_train_pred_final["final_Predicted"] = y_train_pred_final.churn_prob.map(
lambda k: 1 if k > 0.3 else 0
)
y_train_pred_final
# - Accuracy score and confusion matrix
# Let's check the accuracy
metrics.accuracy_score(y_train_pred_final.churn, y_train_pred_final.final_Predicted)
# let's calculate the confusion matrix for the new threshold value 0.3
confusion_2 = metrics.confusion_matrix(
y_train_pred_final.churn, y_train_pred_final.final_Predicted
)
confusion_2
# WE CAN SEE FN (337) HAS DROPPED SIGNIFICANTLY AND TP (966) HAS INCREASED; CHOOSING A LOWER THRESHOLD
# HELPED US CAPTURE THE CHURNERS BETTER
#                            PREDICTED
#                            NOT CHURN(-VE)   CHURN(+VE)
# ACTUAL  NOT CHURN(-VE) :       2754             873      -> TN , FP
#         CHURN(+VE)     :        289            1014      -> FN , TP
TP = confusion_2[1, 1]
TN = confusion_2[0, 0]
FP = confusion_2[0, 1]
FN = confusion_2[1, 0]
# ###### sensitivity
# let's calculate sensitivity: +ve rate or actual +ves
TP / (TP + FN)
# - SENSITIVITY: 77% OF THE CUSTOMERS WHO ACTUALLY CHURNED ARE NOW CORRECTLY IDENTIFIED
# ###### Specificity
# LET'S SEE SPECIFICITY:ACTUAL -VES
TN / (TN + FP)
# - SPECIFICITY: 76% OF THE CUSTOMERS WHO DID NOT CHURN ARE CORRECTLY IDENTIFIED
# ###### False +ve rate
# LET'S CALCULATE FALSE POSITIVE RATE :
# the numerator is FP and the denominator is the actual-negative row total (TN + FP)
FP / (TN + FP)
# ###### +ve predictive value /Precision
# POSITIVE PREDICTIVE VALUE / PRECISION
TP / (TP + FP)
# NEGATIVE PREDICTIVE VALUE
TN / (TN + FN)
# - Precision tells us, out of everything we predicted as 'yes' (+ve), how many are actually 'yes'.
# - Sensitivity/recall tells us, out of all the actual 'yes' (+ve) customers, how many we predicted correctly as 'yes' (here ~77%).
# # Precision and recall
# - In industry, some businesses follow the 'Sensitivity-Specificity' view and some other businesses follow the 'Precision-Recall' view.
# - We can use either one of these two views
# -----
# - Using the sensitivity-specificity tradeoff, we found the optimal cutoff to be about 0.3. When we plot the precision-recall tradeoff, we will get a different threshold.
# PRECISION: using confusion 1st matrix
confusion[1, 1] / (confusion[1, 1] + confusion[0, 1])
# - Precision is ~64%: of all the customers we predicted as 'yes', only 64% actually churned, while recall (below) is only ~54%. That is a large gap between the 'yeses' we predict and the actual 'yeses' we capture, which is not great.
# RECALL
confusion[1, 1] / (confusion[1, 1] + confusion[1, 0])
# - In the precision-recall view there is a big difference (64% vs 54%), which is not a good picture.
# - The sensitivity-specificity view, with the re-tuned cutoff, looks much better (77% & 76%).
# - whatever view we select might give us different interpretations for the same model. It is completely up to us which view we choose to take while building a logistic regression model.
# - Similar to sensitivity and specificity there is a trade off between Precision and recall.
# import precision and recall from sklearn metrics
from sklearn.metrics import precision_score, recall_score
# - We computed the precision & recall scores from the confusion matrix, and we also have built-in functions for them.
# we use the 'Predicted' column because that is the column we created at the start with the default cutoff. Like sensitivity & specificity,
# precision & recall can be computed from the beginning; these are two different methods of finding the threshold
precision_score(y_train_pred_final.churn, y_train_pred_final.Predicted)
# after converting the churn probabilities into binary values (the 'Predicted' column) we can continue directly with precision & recall to find the threshold value
recall_score(y_train_pred_final.churn, y_train_pred_final.Predicted)
# - We got the same precision and recall values from the confusion matrix and from the built-in functions
# ### Trade off between Precision & Recall
# just as we traded off sensitivity against specificity, we will now do the same for precision and recall
# we have a built-in function for computing the curve
from sklearn.metrics import precision_recall_curve
y_train_pred_final.churn, y_train_pred_final.Predicted
# assigning curve to p,r,threshold.
p, r, thresholds = precision_recall_curve(
y_train_pred_final.churn, y_train_pred_final.churn_prob
)
print(p)
print(r)
print(thresholds) # threshold will be between 0 & 1
# using p,r values let's plot a graph.Precision - recall trade off curve.
# we also have ROC- curve
plt.plot(
    thresholds, p[:-1], "g"
)  # precision has one more element than thresholds, so drop the last value to match lengths
plt.plot(thresholds, r[:-1], "r")
plt.show()
# - Precision & recall intersect at about 0.42, so the threshold from this view is 0.42.
# - we can compute the confusion matrix and accuracy using 0.42 as the cutoff.
# - F1 score:
# - F1 = 2 * (precision * recall) / (precision + recall)
# - The F1-score is useful when you want to look at the performance of precision and recall together.
#
F1 = 2 * ((0.64 * 0.54) / (0.64 + 0.54))
F1
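# Optional sketch: the built-in scorer gives the same F1 directly; on the default 0.5-cutoff
# 'Predicted' column this should closely match the hand-computed value above.
from sklearn.metrics import f1_score
f1_score(y_train_pred_final.churn, y_train_pred_final.Predicted)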
# # 11.Making predictions on test data sets
X_test
# - We can see tenure, MonthlyCharges & TotalCharges have very high values. Let's bring them all onto the same scale.
# we only transform the test data; we do not fit the scaler on the test data.
X_test[["tenure", "MonthlyCharges", "TotalCharges"]] = scaler.transform(
X_test[["tenure", "MonthlyCharges", "TotalCharges"]]
)
# using rfe we selected top 15 columns and then doing manual feature elimination we got 10 columns
col
# We did all this on the train data, so we reuse the same columns directly on the test data
X_test = X_test[col]
X_test.head()
# add constant to X test data set
X_test_sm = sm.add_constant(X_test)
X_test_sm.head()
# we have already built model of train data.
logm7
# predicting y test
y_test_pred = logm7.predict(X_test_sm)
# create a dataframe of actual y test & predicted ytest
y_test_pred_final = pd.DataFrame({"churn": y_test, "churn_prob": y_test_pred})
y_test_pred_final.head()
# let's add custid to our dataframe
y_test_pred_final["Custid"] = y_test.index
y_test_pred_final.head()
# now create a 'Predicted' column of 1s and 0s from the churn probability column, assigning 1 where the probability > 0.3
# here we are using the sensitivity-specificity cutoff value
# we could also use the precision-recall cutoff; it's up to us.
y_test_pred_final["Predicted"] = y_test_pred_final.churn_prob.map(
lambda x: 1 if x > 0.3 else 0
)
y_test_pred_final.head()
# now let's see the accuracy score
metrics.accuracy_score(y_test_pred_final.churn, y_test_pred_final.Predicted)
# let's see confusion matrix
confusion_3 = metrics.confusion_matrix(y_test_pred_final.churn, y_test_pred_final.Predicted)
confusion_3
# sensitivity and specificity of the test data (recompute TP/TN/FP/FN from the TEST
# confusion matrix instead of reusing the values from the train matrix)
TP = confusion_3[1, 1]
TN = confusion_3[0, 0]
FP = confusion_3[0, 1]
FN = confusion_3[1, 0]
# SENSITIVITY
sensi = TP / (TP + FN)
sensi
# Specificity
spe = TN / (TN + FP)
spe
# - We can also take the cutoff we got from the precision-recall tradeoff curve and we can make predictions based on that also.
# ##### OR
# ## For threshold = 0.42
# - let's see confusion matrix and accuracy
y_test_pred_final["PREDICTED"] = y_test_pred_final.churn_prob.map(
lambda x: 1 if x > 0.42 else 0
)
y_test_pred_final.head()
# let's see the confusion matrix when the cutoff is 0.42
confusion_4 = metrics.confusion_matrix(
y_test_pred_final.churn, y_test_pred_final.PREDICTED
)
TP = confusion_4[1, 1]
TN = confusion_4[0, 0]
FP = confusion_4[0, 1]
FN = confusion_4[1, 0]
# CONFUSION MATRICES at the 0.42 cutoff and at the 0.3 cutoff
# [1257, 290],   [1113, 434],
# [ 189, 377]    [ 140, 426]
# at the 0.42 cutoff 377 churners are correctly predicted, while at 0.3 it is 426;
# the higher cutoff captures fewer churners but makes fewer false churn predictions
# ACCURACY : at the 0.42 threshold
metrics.accuracy_score(y_test_pred_final.churn, y_test_pred_final.PREDICTED)
# - ACCURACY ALONE IS NOT SUFFICIENT.SO WE WILL LOOK AT SENSITIVITY AND SPECIFICITY.
# SENSITIVITY AT THE 0.42 CUTOFF
TP / (TP + FN)
# SPECIFICITY
TN / (TN + FP)
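# Optional sketch: a consolidated view of the test-set performance at the 0.42 cutoff;
# classification_report prints precision, recall and F1 for both classes in one table.
from sklearn.metrics import classification_report
print(classification_report(y_test_pred_final.churn, y_test_pred_final.PREDICTED))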
# ------
# # The steps that were performed throughout the model building and model evaluation are:
# - 1.Data cleaning and preparation
# - Combining three dataframes
# - Handling categorical variables
# - Mapping categorical variables to integers
# - Dummy variable creation
# - Handling missing values
# - 2.Test-train split and scaling
# - 3.Model Building
# - Feature elimination based on correlations
# - Feature selection using RFE (Coarse Tuning)
# - Manual feature elimination (using p-values and VIFs)
# - 4.Model Evaluation
# - Accuracy
# - Sensitivity and Specificity
# - Optimal cut-off using ROC curve
# - Precision and Recall
# - 5.Predictions on the test set
# #
# #
# #
# # Using Decision Trees
# - In decision trees we don't have to bother about feature scaling and multicollinearity.
X.head()
y.head()
# Let's split data into train and test splits
X_train, X_test, y_train, y_test = train_test_split(
X, y, train_size=0.7, random_state=19
)
# let's look at dimensions of the train and test data
X_train.shape, X_test.shape
# ## Model building
from sklearn.tree import DecisionTreeClassifier
# creating an object of class
dt = DecisionTreeClassifier(random_state=67, max_depth=5)
# fit the model
dt.fit(X_train, y_train)
from sklearn import tree
import graphviz
dot_data = tree.export_graphviz(
dt,
out_file=None,
feature_names=X_train.columns,
rounded=True,
filled=True,
class_names=["Not-churn", "Churn"],
)
graphviz.Source(dot_data)
# # OR
from IPython.display import Image
from six import StringIO
from sklearn.tree import export_graphviz
import pydotplus, graphviz
dot_data = StringIO()
export_graphviz(
dt,
out_file=dot_data,
filled=True,
rounded=True,
feature_names=X_train.columns,
class_names=["Non-Churn", "Churn"],
)
graph = pydotplus.graph_from_dot_data(dot_data.getvalue())
Image(graph.create_png())
# Let's predict train and test data
y_train_pred = dt.predict(X_train)
y_test_pred = dt.predict(X_test)
from sklearn.metrics import classification_report
print(classification_report(y_test, y_test_pred))
# ## Plotting ROC curve
from sklearn.metrics import plot_roc_curve
plot_roc_curve(dt, X_train, y_train, drop_intermediate=False)
# ### Decision tree : Grid Search CV for Hyperparameter tuning
from sklearn.model_selection import GridSearchCV
dt = DecisionTreeClassifier(random_state=10)
params = {"max_depth": [3, 5, 10, 15, 20], "min_samples_leaf": [50, 100, 150, 200, 400]}
grid = GridSearchCV(
estimator=dt, param_grid=params, cv=4, n_jobs=-1, verbose=1, scoring="accuracy"
)
grid.fit(X_train, y_train)
grid.best_score_
grid.best_estimator_
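# Optional sketch: the searched optimum can be reused directly via grid.best_params_
# (note the hand-built dt_best below uses min_samples_leaf=10, which was not in the search grid).
dt_from_grid = DecisionTreeClassifier(random_state=10, **grid.best_params_)
dt_from_grid.fit(X_train, y_train)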
# instantiate
# building model using optimal hyper-parameters
dt_best = DecisionTreeClassifier(max_depth=5, min_samples_leaf=10, random_state=10)
# fitting
dt_best.fit(X_train, y_train)
# plot roc curve
plot_roc_curve(dt_best, X_train, y_train)
dot_data = tree.export_graphviz(
dt_best,
out_file=None,
feature_names=X_train.columns,
rounded=True,
filled=True,
class_names=["Non-churn", "Churn"],
)
graphviz.Source(dot_data)
# #
# -----------
# -------
# # Using Random Forest
from sklearn.ensemble import RandomForestClassifier
rf = RandomForestClassifier(
n_estimators=10, max_depth=5, random_state=19, max_features=10, oob_score=True
)
rf.fit(X_train, y_train)
# OOB score
rf.oob_score_
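# Optional sketch: the OOB score can be compared against held-out accuracy;
# RandomForestClassifier.score returns the mean accuracy on the data it is given.
print({"OOB ACCURACY": rf.oob_score_, "TEST ACCURACY": rf.score(X_test, y_test)})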
# plot roc curve
plot_roc_curve(rf, X_train, y_train)
# # Random Forest : Grid Search CV Hyperparameter tuning
rf = RandomForestClassifier(random_state=100, n_jobs=-1)
params = {
"max_depth": [5, 8, 10, 15, 20],
"min_samples_leaf": [10, 15, 30, 70, 100, 200],
"n_estimators": [10, 100, 150, 200],
}
grid = GridSearchCV(
estimator=rf, param_grid=params, cv=5, scoring="accuracy", verbose=1, n_jobs=-1
)
grid.fit(X_train, y_train)
grid.best_score_
grid.best_estimator_
# building model using optimal hyper-parameters
rf_best = RandomForestClassifier(
max_depth=8, min_samples_leaf=10, n_estimators=150, n_jobs=-1, random_state=100
)
rf_best.fit(X_train, y_train)
plot_roc_curve(rf_best, X_train, y_train)
# - We can see before tuning AUC score was 0.86 and after tuning auc score has increased i.e., 0.88
rf_best.feature_importances_
# creating a dataframe
imp_df = pd.DataFrame(
{"Variable_name": X_train.columns, "Imp_features": rf_best.feature_importances_}
)
imp_df.nlargest(30, "Imp_features")
# or
# imp_df.sort_values(by ='Imp_features' , ascending = False)
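# Optional sketch: a quick bar chart of the most important features, using the imp_df
# dataframe built above and the matplotlib import already available in this notebook.
top_features = imp_df.nlargest(10, "Imp_features").sort_values("Imp_features")
plt.barh(top_features.Variable_name, top_features.Imp_features)
plt.xlabel("Feature importance")
plt.title("Top 10 features (tuned random forest)")
plt.show()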
| false | 0 | 18,038 | 1 | 18,038 | 18,038 |
||
129881218
|
import numpy as np # linear algebra
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)
# Input data files are available in the read-only "../input/" directory
# For example, running this (by clicking run or pressing Shift+Enter) will list all files under the input directory
import os
for dirname, _, filenames in os.walk("/kaggle/input"):
for filename in filenames:
print(os.path.join(dirname, filename))
# You can write up to 20GB to the current directory (/kaggle/working/) that gets preserved as output when you create a version using "Save & Run All"
# You can also write temporary files to /kaggle/temp/, but they won't be saved outside of the current session
train_data = pd.read_csv("/kaggle/input/spaceship-titanic/train.csv")
train_data.head()
test_data = pd.read_csv("/kaggle/input/spaceship-titanic/test.csv")
test_data.head()
# average Age over the training rows (note: any missing Age values will make this sum NaN)
b = 0
for i in range(8693):
    b += train_data.iloc[i]["Age"]
b /= 8693
print(b)
a = 0
for i in range(8693):
if train_data.iloc[i]["VIP"] == True and train_data.iloc[i]["Transported"] == True:
a += 1
print(a / 8693)
c = 0
for i in range(8693):
if (
train_data.iloc[i]["CryoSleep"] == True
and train_data.iloc[i]["Transported"] == True
):
c += 1
print(c / 8693)
d = 0
for i in range(8693):
if (
train_data.iloc[i]["HomePlanet"] == "Earth"
and train_data.iloc[i]["Transported"] == True
):
d += 1
print(d / 8693)
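# Optional sketch: the same quantities without explicit loops, using boolean masks on the
# dataframe columns referenced above (note .mean() skips missing Age values, unlike the loop).
n = len(train_data)
print(train_data["Age"].mean())
print(((train_data["VIP"] == True) & (train_data["Transported"] == True)).sum() / n)
print(((train_data["CryoSleep"] == True) & (train_data["Transported"] == True)).sum() / n)
print(((train_data["HomePlanet"] == "Earth") & (train_data["Transported"] == True)).sum() / n)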
from sklearn.ensemble import RandomForestClassifier
features = ["VIP", "CryoSleep", "HomePlanet"]
y = train_data["Transported"]
X = pd.get_dummies(train_data[features])
X_test = pd.get_dummies(test_data[features])
model = RandomForestClassifier(n_estimators=100, max_depth=5, random_state=1)
model.fit(X, y)
predictions = model.predict(X_test)
output = pd.DataFrame(
{"PassengerId": test_data.PassengerId, "Transported": predictions}
)
output.to_csv("submission.csv", index=False)
print("mission complete")
|
/fsx/loubna/kaggle_data/kaggle-code-data/data/0129/881/129881218.ipynb
| null | null |
[{"Id": 129881218, "ScriptId": 38587986, "ParentScriptVersionId": NaN, "ScriptLanguageId": 9, "AuthorUserId": 15079656, "CreationDate": "05/17/2023 06:54:15", "VersionNumber": 2.0, "Title": "notebooka81970f541", "EvaluationDate": "05/17/2023", "IsChange": true, "TotalLines": 59.0, "LinesInsertedFromPrevious": 2.0, "LinesChangedFromPrevious": 0.0, "LinesUnchangedFromPrevious": 57.0, "LinesInsertedFromFork": NaN, "LinesDeletedFromFork": NaN, "LinesChangedFromFork": NaN, "LinesUnchangedFromFork": NaN, "TotalVotes": 0}]
| null | null | null | null |
import numpy as np # linear algebra
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)
# Input data files are available in the read-only "../input/" directory
# For example, running this (by clicking run or pressing Shift+Enter) will list all files under the input directory
import os
for dirname, _, filenames in os.walk("/kaggle/input"):
for filename in filenames:
print(os.path.join(dirname, filename))
# You can write up to 20GB to the current directory (/kaggle/working/) that gets preserved as output when you create a version using "Save & Run All"
# You can also write temporary files to /kaggle/temp/, but they won't be saved outside of the current session
train_data = pd.read_csv("/kaggle/input/spaceship-titanic/train.csv")
train_data.head()
test_data = pd.read_csv("/kaggle/input/spaceship-titanic/test.csv")
test_data.head()
b = 0
for i in range(8693):
b += train_data.iloc[i]["Age"]
b /= 8693
print(b)
a = 0
for i in range(8693):
if train_data.iloc[i]["VIP"] == True and train_data.iloc[i]["Transported"] == True:
a += 1
print(a / 8693)
c = 0
for i in range(8693):
if (
train_data.iloc[i]["CryoSleep"] == True
and train_data.iloc[i]["Transported"] == True
):
c += 1
print(c / 8693)
d = 0
for i in range(8693):
if (
train_data.iloc[i]["HomePlanet"] == "Earth"
and train_data.iloc[i]["Transported"] == True
):
d += 1
print(d / 8693)
from sklearn.ensemble import RandomForestClassifier
features = ["VIP", "CryoSleep", "HomePlanet"]
y = train_data["Transported"]
X = pd.get_dummies(train_data[features])
X_test = pd.get_dummies(test_data[features])
model = RandomForestClassifier(n_estimators=100, max_depth=5, random_state=1)
model.fit(X, y)
predictions = model.predict(X_test)
output = pd.DataFrame(
{"PassengerId": test_data.PassengerId, "Transported": predictions}
)
output.to_csv("submission.csv", index=False)
print("mission complete")
| false | 0 | 656 | 0 | 656 | 656 |
||
129881635
|
<jupyter_start><jupyter_text>Fake news prediction
Kaggle dataset identifier: fake-news-prediction
<jupyter_script># Logistic Regression - binary classification
# # About the Dataset
# 1. id: unique id for a news article
# 2. title: the title of a news article
# 3. author: author of the news article
# 4. text: the text of the article; could be incomplete
# 5. label: a label that marks whether the news is real or fake
# 1== Fake news
# 0== real news
#
# Importing the dependencies
import numpy as np
import pandas as pd
import re
from nltk.corpus import stopwords
from nltk.stem import PorterStemmer
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
import nltk
# printing the stopwords in english
nltk.download("stopwords")
print(stopwords.words("english"))
# Data pre-Processing
# Loading the dataset into a pandas DataFrame
news_data = pd.read_csv("/kaggle/input/fake-news-prediction/train.csv/train.csv")
news_data.shape
news_data.head(5)
# counting the number of missing values in the data set
news_data.isnull().sum()
# replacing the null values with empty string
news_data = news_data.fillna("")
# merging the author name and news title
news_data["content"] = news_data["author"] + " " + news_data["title"]
print(news_data["content"])
# separating the data and label
X = news_data.drop(columns="label", axis=1)
Y = news_data["label"]
print(X)
print(Y)
# stemming
# stemming is the process of reducing a word to its root word
port_stem = PorterStemmer()
def stemming(content):
stemmed_content = re.sub("[^a-zA-Z]", " ", content)
stemmed_content = stemmed_content.lower()
stemmed_content = stemmed_content.split()
stemmed_content = [
port_stem.stem(word)
for word in stemmed_content
if not word in stopwords.words("english")
]
stemmed_content = " ".join(stemmed_content)
return stemmed_content
news_data["content"] = news_data["content"].apply(stemming)
print(news_data["content"])
# separating the data and label
X = news_data["content"].values
Y = news_data["label"].values
print(X)
print(Y)
Y.shape
# converting the textual data to numerical data
vectorizer = TfidfVectorizer()
vectorizer.fit(X)
X = vectorizer.transform(X)
print(X)
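# Optional sketch: inspecting the fitted TF-IDF representation; get_feature_names_out is
# available in recent scikit-learn releases (older versions use get_feature_names instead).
print(X.shape)  # (number of articles, number of TF-IDF features)
print(len(vectorizer.get_feature_names_out()))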
# Splitting the dataset to training & test data
X_train, X_test, Y_train, Y_test = train_test_split(
X, Y, test_size=0.2, stratify=Y, random_state=2
)
# training the model : Logistic Regression
model = LogisticRegression()
model.fit(X_train, Y_train)
# # Evaluation
# accuracy score on the training data
X_train_prediction = model.predict(X_train)
training_data_accuracy = accuracy_score(X_train_prediction, Y_train)
print("Accuracy score of training data:", training_data_accuracy)
# accuracy on test data
X_test_prediction = model.predict(X_test)
test_data_accuracy = accuracy_score(X_test_prediction, Y_test)
print("Accuracy score of test data:", test_data_accuracy)
# # Making a Predictive System
X_news = X_test[2]
prediction = model.predict(X_news)
print(prediction)
if prediction[0] == 0:
print("The news is Real")
else:
print("The news is Fake")
print(Y_test[2])
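# Optional sketch: a small end-to-end helper that scores a raw author+title string by reusing
# the stemming() function, the fitted vectorizer and the trained model; the sample text below
# is only an illustration, not taken from the dataset.
def predict_news(raw_text):
    cleaned = stemming(raw_text)
    features = vectorizer.transform([cleaned])
    return "Fake" if model.predict(features)[0] == 1 else "Real"
print(predict_news("John Doe Shocking claim about the election"))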
|
/fsx/loubna/kaggle_data/kaggle-code-data/data/0129/881/129881635.ipynb
|
fake-news-prediction
|
udaykumarms
|
[{"Id": 129881635, "ScriptId": 38631243, "ParentScriptVersionId": NaN, "ScriptLanguageId": 9, "AuthorUserId": 10298739, "CreationDate": "05/17/2023 06:57:48", "VersionNumber": 2.0, "Title": "Fake news_prediction_ML", "EvaluationDate": "05/17/2023", "IsChange": false, "TotalLines": 146.0, "LinesInsertedFromPrevious": 0.0, "LinesChangedFromPrevious": 0.0, "LinesUnchangedFromPrevious": 146.0, "LinesInsertedFromFork": NaN, "LinesDeletedFromFork": NaN, "LinesChangedFromFork": NaN, "LinesUnchangedFromFork": NaN, "TotalVotes": 1}]
|
[{"Id": 186287211, "KernelVersionId": 129881635, "SourceDatasetVersionId": 5705135}]
|
[{"Id": 5705135, "DatasetId": 3279899, "DatasourceVersionId": 5780920, "CreatorUserId": 10298739, "LicenseName": "CC0: Public Domain", "CreationDate": "05/17/2023 06:44:17", "VersionNumber": 1.0, "Title": "Fake news prediction", "Slug": "fake-news-prediction", "Subtitle": NaN, "Description": NaN, "VersionNotes": "Initial release", "TotalCompressedBytes": 0.0, "TotalUncompressedBytes": 0.0}]
|
[{"Id": 3279899, "CreatorUserId": 10298739, "OwnerUserId": 10298739.0, "OwnerOrganizationId": NaN, "CurrentDatasetVersionId": 5705135.0, "CurrentDatasourceVersionId": 5780920.0, "ForumId": 3345617, "Type": 2, "CreationDate": "05/17/2023 06:44:17", "LastActivityDate": "05/17/2023", "TotalViews": 76, "TotalDownloads": 21, "TotalVotes": 1, "TotalKernels": 1}]
|
[{"Id": 10298739, "UserName": "udaykumarms", "DisplayName": "Uday kumar M S", "RegisterDate": "04/20/2022", "PerformanceTier": 0}]
|
# Logistic Regression - binary classification
# # About the Dataset
# 1. id: unique id for a news article
# 2. title: the title of a news article
# 3. author: author of the news artcle
# 4. text: the text of the article; could be incomplete
# 5. label: a label that marks whether the new is real or fake
# 1== Fake news
# 0== real news
#
# Importing that dependancies
import numpy as np
import pandas as pd
import re
from nltk.corpus import stopwords
from nltk.stem import PorterStemmer
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
import nltk
# printing the stopwords in english
nltk.download("stopwords")
print(stopwords.words("english"))
# Data pre-Processing
# Loading that data set to a pandas dataFrame
news_data = pd.read_csv("/kaggle/input/fake-news-prediction/train.csv/train.csv")
news_data.shape
news_data.head(5)
# counting the number of missing values in the data set
news_data.isnull().sum()
# replacing the null values with empty string
news_data = news_data.fillna("")
# merging the author name and news title
news_data["content"] = news_data["author"] + " " + news_data["title"]
print(news_data["content"])
# separating the data and label
X = news_data.drop(columns="label", axis=1)
Y = news_data["label"]
print(X)
print(Y)
# stemming
# stemming is the process of reducing a word to its Root words
port_stem = PorterStemmer()
def stemming(content):
stemmed_content = re.sub("[^a-zA-Z]", " ", content)
stemmed_content = stemmed_content.lower()
stemmed_content = stemmed_content.split()
stemmed_content = [
port_stem.stem(word)
for word in stemmed_content
if not word in stopwords.words("english")
]
stemmed_content = " ".join(stemmed_content)
return stemmed_content
news_data["content"] = news_data["content"].apply(stemming)
print(news_data["content"])
# separating the data and label
X = news_data["content"].values
Y = news_data["label"].values
print(X)
print(Y)
Y.shape
# converting the textual data to numerical data
vectorizer = TfidfVectorizer()
vectorizer.fit(X)
X = vectorizer.transform(X)
print(X)
# Splitting the dataset to training & test data
X_train, X_test, Y_train, Y_test = train_test_split(
X, Y, test_size=0.2, stratify=Y, random_state=2
)
# trainging the model : Logestic regression
model = LogisticRegression()
model.fit(X_train, Y_train)
# # Evaluation
# accuracy score on the training data
X_train_prediction = model.predict(X_train)
training_data_accuracy = accuracy_score(X_train_prediction, Y_train)
print("Accuracy score of training data:", training_data_accuracy)
# accuracy on test data
X_test_prediction = model.predict(X_test)
test_data_accuracy = accuracy_score(X_test_prediction, Y_test)
print("Accuracy score of test data:", test_data_accuracy)
# # Making a Predictive System
X_news = X_test[2]
prediction = model.predict(X_news)
print(prediction)
if prediction[0] == 0:
print("The news is Real")
else:
print("The news is Fake")
print(Y_test[2])
| false | 1 | 953 | 1 | 974 | 953 |
||
129881808
|
import pandas as pd
import numpy as np
from sklearn.model_selection import train_test_split, GridSearchCV
from sklearn.decomposition import PCA
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.preprocessing import (
StandardScaler,
OneHotEncoder,
LabelEncoder,
OrdinalEncoder,
)
from sklearn.impute import SimpleImputer, KNNImputer
from sklearn.base import BaseEstimator, TransformerMixin
from sklearn.pipeline import Pipeline
from sklearn.compose import ColumnTransformer
from sklearn.metrics import accuracy_score, f1_score
from catboost import CatBoostClassifier
import warnings
from matplotlib import pyplot as plt
import seaborn as sns
warnings.filterwarnings("ignore")
train_df = pd.read_csv("/kaggle/input/spaceship-titanic/train.csv")
test_df = pd.read_csv("/kaggle/input/spaceship-titanic/test.csv")
train_df.head()
X_, y = train_df.drop("Transported", axis=1), train_df.Transported.apply(
lambda x: 1 if x is True else 0
)
_df_train = pd.DataFrame(
X_["Cabin"]
.fillna("//")
.apply(lambda x: x.split("/")[::2] if x != "//" else [None, None])
.to_list(),
columns=["deck", "side"],
)
_df_test = pd.DataFrame(
test_df["Cabin"]
.fillna("//")
.apply(lambda x: x.split("/")[::2] if x != "//" else [None, None])
.to_list(),
columns=["deck", "side"],
)
X = pd.concat([X_.drop("Cabin", axis=1), _df_train], axis=1)
test = pd.concat([test_df.drop("Cabin", axis=1), _df_test], axis=1)
X.isna().sum() / X.count() * 100
fig, ax = plt.subplots(figsize=(9, 5))
sns.heatmap(X.isnull(), cbar=False, cmap="YlGnBu_r")
plt.show()
X.describe()
X.info()
print(X.nunique())
num_columns = X.columns[X.dtypes != "object"]
print(len(num_columns))
num_columns
c = 1
plt.rcParams["figure.figsize"] = [10, 8]
for i in num_columns:
plt.subplot(3, 3, c)
sns.distplot(X[i])
c += 1
plt.tight_layout()
plt.show()
tc = X.corr()
sns.heatmap(tc, cmap="coolwarm")
plt.title("titanic.corr()")
obj_columns = X.columns[X.dtypes == "object"]
cat_columns = [c for c in obj_columns if c not in ["PassengerId", "Name"]]
len(cat_columns)
df_train = pd.concat([X, pd.DataFrame(y, columns=["Transported"])], axis=1)
df_train.head()
nr_rows = 2
nr_cols = 3
fig, axs = plt.subplots(nr_rows, nr_cols, figsize=(nr_cols * 3.5, nr_rows * 3))
for r in range(0, nr_rows):
for c in range(0, nr_cols):
i = r * nr_cols + c
ax = axs[r][c]
sns.countplot(data=df_train, x=cat_columns[i], hue="Transported", ax=ax)
ax.set_title(cat_columns[i], fontsize=14, fontweight="bold")
ax.legend(title="Transported", loc="upper center")
plt.tight_layout()
pd.crosstab(df_train.Age, df_train.Transported).apply(
lambda r: r / r.sum(), axis=1
).style.background_gradient(cmap="summer_r")
cat_pipe = Pipeline(
[
("pre_encoder", OrdinalEncoder()),
("imputer", KNNImputer(n_neighbors=3)),
("encoder", OneHotEncoder(handle_unknown="ignore")),
]
)
num_pipe = Pipeline(
[("imputer", SimpleImputer(strategy="median")), ("scaler", StandardScaler())]
)
preprocessor = ColumnTransformer(
[("Cat", cat_pipe, cat_columns), ("Num", num_pipe, num_columns)]
)
pipe = Pipeline(
steps=[("preprocessor", preprocessor), ("model", GradientBoostingClassifier())]
)
X_train, X_test, y_train, y_test = train_test_split(
X, y, test_size=0.2, random_state=42
)
pipe.fit(X_train, y_train)
y_pred = pipe.predict(X_test)
print(
"Accuracy = {}, f1 score = {}".format(
accuracy_score(y_test, y_pred), f1_score(y_test, y_pred)
)
)
grid = GridSearchCV(
estimator=pipe,
param_grid={
"model__n_estimators": [150, 200],
"model__learning_rate": [0.001, 0.05],
"model__max_depth": [3, 4, 5],
},
cv=3,
scoring="f1",
)
grid.fit(X_train, y_train)
y_pred = grid.predict(X_test)
print(
"Accuracy = {}, f1 score = {}".format(
accuracy_score(y_test, y_pred), f1_score(y_test, y_pred)
)
)
grid.best_estimator_
grid_2 = GridSearchCV(
estimator=pipe,
param_grid={
"model__n_estimators": [200],
"model__learning_rate": [0.05],
"model__max_depth": [3],
"preprocessor__Cat__imputer__n_neighbors": [3, 4, 5],
},
cv=3,
scoring="f1",
)
grid_2.fit(X_train, y_train)
y_pred = grid_2.predict(X_test)
print(
"Accuracy = {}, f1 score = {}".format(
accuracy_score(y_test, y_pred), f1_score(y_test, y_pred)
)
)
grid_2.best_estimator_
cat_pipe_2 = Pipeline(
[
("pre_encoder", SimpleImputer(strategy="most_frequent")),
("ohe", OneHotEncoder(handle_unknown="ignore")),
]
)
num_pipe_2 = Pipeline(
[("imputer", SimpleImputer(strategy="median")), ("scaler", StandardScaler())]
)
preprocessor_2 = ColumnTransformer(
[("Cat", cat_pipe_2, cat_columns), ("Num", num_pipe_2, num_columns)]
)
pipe_2 = Pipeline(
steps=[
("preprocessor", preprocessor_2),
("pca", PCA(0.99)),
("model", GradientBoostingClassifier(n_estimators=200, learning_rate=0.05)),
]
)
pipe_2.fit(X_train, y_train)
y_pred = pipe_2.predict(X_test)
print(
"Accuracy = {}, f1 score = {}".format(
accuracy_score(y_test, y_pred), f1_score(y_test, y_pred)
)
)
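# Optional sketch: checking how many principal components PCA(0.99) actually kept;
# n_components_ is set on the fitted PCA step inside the pipeline.
print(pipe_2.named_steps["pca"].n_components_)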
grid_3 = GridSearchCV(
estimator=pipe_2,
param_grid={
"model__n_estimators": [100, 200],
"model__learning_rate": [0.01, 0.05],
"model__max_depth": [2, 3, 4],
},
cv=3,
scoring="f1",
)
grid_3.fit(X, y)
grid_3.best_score_
PassengerId = test.PassengerId
Pred_testcsv = grid_3.predict(test)
output = pd.DataFrame({"PassengerId": PassengerId, "Transported": Pred_testcsv})
output.to_csv("submission.csv", index=False)
|
/fsx/loubna/kaggle_data/kaggle-code-data/data/0129/881/129881808.ipynb
| null | null |
[{"Id": 129881808, "ScriptId": 38625521, "ParentScriptVersionId": NaN, "ScriptLanguageId": 9, "AuthorUserId": 847705, "CreationDate": "05/17/2023 06:59:16", "VersionNumber": 3.0, "Title": "Dimensionality reduction and feature selection", "EvaluationDate": "05/17/2023", "IsChange": false, "TotalLines": 196.0, "LinesInsertedFromPrevious": 0.0, "LinesChangedFromPrevious": 0.0, "LinesUnchangedFromPrevious": 196.0, "LinesInsertedFromFork": NaN, "LinesDeletedFromFork": NaN, "LinesChangedFromFork": NaN, "LinesUnchangedFromFork": NaN, "TotalVotes": 5}]
| null | null | null | null |
import pandas as pd
import numpy as np
from sklearn.model_selection import train_test_split, GridSearchCV
from sklearn.decomposition import PCA
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.preprocessing import (
StandardScaler,
OneHotEncoder,
LabelEncoder,
OrdinalEncoder,
)
from sklearn.impute import SimpleImputer, KNNImputer
from sklearn.base import BaseEstimator, TransformerMixin
from sklearn.pipeline import Pipeline
from sklearn.compose import ColumnTransformer
from sklearn.metrics import accuracy_score, f1_score
from catboost import CatBoostClassifier
import warnings
from matplotlib import pyplot as plt
import seaborn as sns
warnings.filterwarnings("ignore")
train_df = pd.read_csv("/kaggle/input/spaceship-titanic/train.csv")
test_df = pd.read_csv("/kaggle/input/spaceship-titanic/test.csv")
train_df.head()
X_, y = train_df.drop("Transported", axis=1), train_df.Transported.apply(
lambda x: 1 if x is True else 0
)
_df_train = pd.DataFrame(
X_["Cabin"]
.fillna("//")
.apply(lambda x: x.split("/")[::2] if x != "//" else [None, None])
.to_list(),
columns=["deck", "side"],
)
_df_test = pd.DataFrame(
test_df["Cabin"]
.fillna("//")
.apply(lambda x: x.split("/")[::2] if x != "//" else [None, None])
.to_list(),
columns=["deck", "side"],
)
X = pd.concat([X_.drop("Cabin", axis=1), _df_train], axis=1)
test = pd.concat([test_df.drop("Cabin", axis=1), _df_test], axis=1)
X.isna().sum() / X.count() * 100
fig, ax = plt.subplots(figsize=(9, 5))
sns.heatmap(X.isnull(), cbar=False, cmap="YlGnBu_r")
plt.show()
X.describe()
X.info()
print(X.nunique())
num_columns = X.columns[X.dtypes != "object"]
print(len(num_columns))
num_columns
c = 1
plt.rcParams["figure.figsize"] = [10, 8]
for i in num_columns:
plt.subplot(3, 3, c)
sns.distplot(X[i])
c += 1
plt.tight_layout()
plt.show()
tc = X.corr()
sns.heatmap(tc, cmap="coolwarm")
plt.title("titanic.corr()")
obj_columns = X.columns[X.dtypes == "object"]
cat_columns = [c for c in obj_columns if c not in ["PassengerId", "Name"]]
len(cat_columns)
df_train = pd.concat([X, pd.DataFrame(y, columns=["Transported"])], axis=1)
df_train.head()
nr_rows = 2
nr_cols = 3
fig, axs = plt.subplots(nr_rows, nr_cols, figsize=(nr_cols * 3.5, nr_rows * 3))
for r in range(0, nr_rows):
for c in range(0, nr_cols):
i = r * nr_cols + c
ax = axs[r][c]
sns.countplot(data=df_train, x=cat_columns[i], hue="Transported", ax=ax)
ax.set_title(cat_columns[i], fontsize=14, fontweight="bold")
ax.legend(title="Transported", loc="upper center")
plt.tight_layout()
pd.crosstab(df_train.Age, df_train.Transported).apply(
lambda r: r / r.sum(), axis=1
).style.background_gradient(cmap="summer_r")
cat_pipe = Pipeline(
[
("pre_encoder", OrdinalEncoder()),
("imputer", KNNImputer(n_neighbors=3)),
("encoder", OneHotEncoder(handle_unknown="ignore")),
]
)
num_pipe = Pipeline(
[("imputer", SimpleImputer(strategy="median")), ("scaler", StandardScaler())]
)
preprocessor = ColumnTransformer(
[("Cat", cat_pipe, cat_columns), ("Num", num_pipe, num_columns)]
)
pipe = Pipeline(
steps=[("preprocessor", preprocessor), ("model", GradientBoostingClassifier())]
)
X_train, X_test, y_train, y_test = train_test_split(
X, y, test_size=0.2, random_state=42
)
pipe.fit(X_train, y_train)
y_pred = pipe.predict(X_test)
print(
"Accuracy = {}, f1 score = {}".format(
accuracy_score(y_test, y_pred), f1_score(y_test, y_pred)
)
)
grid = GridSearchCV(
estimator=pipe,
param_grid={
"model__n_estimators": [150, 200],
"model__learning_rate": [0.001, 0.05],
"model__max_depth": [3, 4, 5],
},
cv=3,
scoring="f1",
)
grid.fit(X_train, y_train)
y_pred = grid.predict(X_test)
print(
"Accuracy = {}, f1 score = {}".format(
accuracy_score(y_test, y_pred), f1_score(y_test, y_pred)
)
)
grid.best_estimator_
grid_2 = GridSearchCV(
estimator=pipe,
param_grid={
"model__n_estimators": [200],
"model__learning_rate": [0.05],
"model__max_depth": [3],
"preprocessor__Cat__imputer__n_neighbors": [3, 4, 5],
},
cv=3,
scoring="f1",
)
grid_2.fit(X_train, y_train)
y_pred = grid_2.predict(X_test)
print(
"Accuracy = {}, f1 score = {}".format(
accuracy_score(y_test, y_pred), f1_score(y_test, y_pred)
)
)
grid_2.best_estimator_
cat_pipe_2 = Pipeline(
[
("pre_encoder", SimpleImputer(strategy="most_frequent")),
("ohe", OneHotEncoder(handle_unknown="ignore")),
]
)
num_pipe_2 = Pipeline(
[("imputer", SimpleImputer(strategy="median")), ("scaler", StandardScaler())]
)
preprocessor_2 = ColumnTransformer(
[("Cat", cat_pipe_2, cat_columns), ("Num", num_pipe_2, num_columns)]
)
pipe_2 = Pipeline(
steps=[
("preprocessor", preprocessor_2),
("pca", PCA(0.99)),
("model", GradientBoostingClassifier(n_estimators=200, learning_rate=0.05)),
]
)
pipe_2.fit(X_train, y_train)
y_pred = pipe_2.predict(X_test)
print(
"Accuracy = {}, f1 score = {}".format(
accuracy_score(y_test, y_pred), f1_score(y_test, y_pred)
)
)
grid_3 = GridSearchCV(
estimator=pipe_2,
param_grid={
"model__n_estimators": [100, 200],
"model__learning_rate": [0.01, 0.05],
"model__max_depth": [2, 3, 4],
},
cv=3,
scoring="f1",
)
grid_3.fit(X, y)
grid_3.best_score_
PassengerId = test.PassengerId
Pred_testcsv = grid_3.predict(test)
output = pd.DataFrame({"PassengerId": PassengerId, "Transported": Pred_testcsv})
output.to_csv("submission.csv", index=False)
| false | 0 | 1,976 | 5 | 1,976 | 1,976 |
||
129435301
|
import pandas as pd
import seaborn as sns
import numpy as np
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split, GridSearchCV
from sklearn.metrics import mean_absolute_error
from sklearn.preprocessing import MinMaxScaler
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression
from xgboost import XGBRegressor
train = pd.read_csv("/kaggle/input/playground-series-s3e14/train.csv", index_col="id")
test = pd.read_csv("/kaggle/input/playground-series-s3e14/test.csv", index_col="id")
# # Overview
train.shape
train.head()
train.tail()
train.info()
train.describe()
train.dtypes
# # EDA
train.corr()
sns.heatmap(train.corr())
plt.show()
train.corr()["yield"].sort_values(ascending=False)
# # Feature Engineering
# train.drop(["osmia","bumbles","andrena","honeybee",
# "AverageOfUpperTRange","AverageOfLowerTRange",
# "MinOfLowerTRange","MinOfUpperTRange",
# "MaxOfUpperTRange","MaxOfLowerTRange"],axis = 1, inplace = True)
# # Model Selection
def scaling(feature):
    # fit the scaler on the train split only, then transform both train and test
    global X_train, X_test
    scaler = MinMaxScaler()
    scaler.fit(X_train[feature].to_numpy().reshape(-1, 1))
    X_train[feature] = scaler.transform(X_train[feature].to_numpy().reshape(-1, 1))
    X_test[feature] = scaler.transform(X_test[feature].to_numpy().reshape(-1, 1))
RS = 13
# ### Random Forest Regressor
X = train.drop(["yield"], axis=1)
y = train[["yield"]]
list_mae_rfr = []
scale_needed_features = [
"MaxOfUpperTRange",
"MinOfUpperTRange",
"AverageOfUpperTRange",
"MaxOfLowerTRange",
"MinOfLowerTRange",
"AverageOfLowerTRange",
"RainingDays",
"seeds",
]
for i in range(1, 100):
X_train, X_test, y_train, y_test = train_test_split(
X, y, test_size=0.3, random_state=i
)
for feature in scale_needed_features:
scaling(feature)
rfr = RandomForestRegressor(random_state=RS)
rfr.fit(X_train, y_train.values.ravel())
rfr_prediction = rfr.predict(X_test)
mae_rfr = mean_absolute_error(y_test, rfr_prediction)
list_mae_rfr.append(mae_rfr)
print(f"Mean RFR 100-FOLD: {np.mean(list_mae_rfr)}")
print(f"Median RFR 100-FOLD: {np.median(list_mae_rfr)}")
# ## Linear Regression
X = train.drop(["yield"], axis=1)
y = train[["yield"]]
list_mae_lr = []
scale_needed_features = [
"MaxOfUpperTRange",
"MinOfUpperTRange",
"AverageOfUpperTRange",
"MaxOfLowerTRange",
"MinOfLowerTRange",
"AverageOfLowerTRange",
"RainingDays",
"seeds",
]
for i in range(1, 100):
X_train, X_test, y_train, y_test = train_test_split(
X, y, test_size=0.3, random_state=i
)
for feature in scale_needed_features:
scaling(feature)
lr = LinearRegression()
lr.fit(X_train, y_train)
lr_prediction = lr.predict(X_test)
mae_lr = mean_absolute_error(y_test, lr_prediction)
list_mae_lr.append(mae_lr)
print(f"Mean LR 100-FOLD: {np.mean(list_mae_lr)}")
print(f"Median LR 100-FOLD: {np.median(list_mae_lr)}")
# ### XGBRegressor
X = train.drop(["yield"], axis=1)
y = train[["yield"]]
list_mae_xgb = []
scale_needed_features = [
"MaxOfUpperTRange",
"MinOfUpperTRange",
"AverageOfUpperTRange",
"MaxOfLowerTRange",
"MinOfLowerTRange",
"AverageOfLowerTRange",
"RainingDays",
"seeds",
]
for i in range(1, 100):
X_train, X_test, y_train, y_test = train_test_split(
X, y, test_size=0.3, random_state=i
)
for feature in scale_needed_features:
scaling(feature)
xgb = XGBRegressor(random_state=RS, max_depth=3, n_estimators=100)
xgb.fit(X_train, y_train)
xgb_prediction = xgb.predict(X_test)
mae_xgb = mean_absolute_error(y_test, xgb_prediction)
list_mae_xgb.append(mae_xgb)
print(f"Mean XGB 100-FOLD: {np.mean(list_mae_xgb)}")
print(f"Median XGB 100-FOLD: {np.median(list_mae_xgb)}")
# The Mean & median for XGB is the best. Also, XGB **performance** was 13.5 times FASTER than Random Forests.
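# Optional sketch: the 100 manual random splits above can be replaced with k-fold
# cross-validation; neg_mean_absolute_error returns negated MAE, so the sign is flipped.
# Tree-based models are insensitive to the MinMax scaling used above, so it is skipped here.
from sklearn.model_selection import cross_val_score
cv_mae = -cross_val_score(
    XGBRegressor(random_state=RS, max_depth=3, n_estimators=100),
    X,
    y.values.ravel(),
    cv=5,
    scoring="neg_mean_absolute_error",
)
print(f"Mean XGB 5-fold CV MAE: {np.mean(cv_mae)}")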
# # Final Evaluation - XGB
X_train = train.drop(["yield"], axis=1)
y_train = train[["yield"]]
X_test = test.copy()
scale_needed_features = [
"MaxOfUpperTRange",
"MinOfUpperTRange",
"AverageOfUpperTRange",
"MaxOfLowerTRange",
"MinOfLowerTRange",
"AverageOfLowerTRange",
"RainingDays",
"seeds",
]
for feature in scale_needed_features:
scaling(feature)
xgb = XGBRegressor(random_state=RS, max_depth=3, n_estimators=100)
xgb.fit(X_train, y_train)
xgb_prediction = xgb.predict(X_test)
# # Result
result = pd.DataFrame({"id": X_test.index, "yield": xgb_prediction})
result = pd.DataFrame({"yield": xgb_prediction}).set_index(X_test.index)
result
result.to_csv("first_sub.csv")
first_sub = pd.read_csv("first_sub.csv")
first_sub
# Author: amyrmahdy
# Date: 12 May 2023
|
/fsx/loubna/kaggle_data/kaggle-code-data/data/0129/435/129435301.ipynb
| null | null |
[{"Id": 129435301, "ScriptId": 38412702, "ParentScriptVersionId": NaN, "ScriptLanguageId": 9, "AuthorUserId": 7867890, "CreationDate": "05/13/2023 19:23:11", "VersionNumber": 13.0, "Title": "playground-series-s3e14-wild-blueberry", "EvaluationDate": "05/13/2023", "IsChange": true, "TotalLines": 182.0, "LinesInsertedFromPrevious": 122.0, "LinesChangedFromPrevious": 0.0, "LinesUnchangedFromPrevious": 60.0, "LinesInsertedFromFork": NaN, "LinesDeletedFromFork": NaN, "LinesChangedFromFork": NaN, "LinesUnchangedFromFork": NaN, "TotalVotes": 0}]
| null | null | null | null |
import pandas as pd
import seaborn as sns
import numpy as np
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split, GridSearchCV
from sklearn.metrics import mean_absolute_error
from sklearn.preprocessing import MinMaxScaler
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression
from xgboost import XGBRegressor
train = pd.read_csv("/kaggle/input/playground-series-s3e14/train.csv", index_col="id")
test = pd.read_csv("/kaggle/input/playground-series-s3e14/test.csv", index_col="id")
# # Overview
train.shape
train.head()
train.tail()
train.info()
train.describe()
train.dtypes
# # EDA
train.corr()
sns.heatmap(train.corr())
plt.show()
train.corr()["yield"].sort_values(ascending=False)
# # Feature Engineering
# train.drop(["osmia","bumbles","andrena","honeybee",
# "AverageOfUpperTRange","AverageOfLowerTRange",
# "MinOfLowerTRange","MinOfUpperTRange",
# "MaxOfUpperTRange","MaxOfLowerTRange"],axis = 1, inplace = True)
# # Model Selection
def scaling(feature):
global X_train, X_test
scaler = MinMaxScaler()
scaler.fit
scaler.fit(X_train[feature].to_numpy().reshape(-1, 1))
X_train[feature] = scaler.transform(X_train[feature].to_numpy().reshape(-1, 1))
X_test[feature] = scaler.transform(X_test[feature].to_numpy().reshape(-1, 1))
RS = 13
# ### Random Forest Regressor
X = train.drop(["yield"], axis=1)
y = train[["yield"]]
list_mae_rfr = []
scale_needed_features = [
"MaxOfUpperTRange",
"MinOfUpperTRange",
"AverageOfUpperTRange",
"MaxOfLowerTRange",
"MinOfLowerTRange",
"AverageOfLowerTRange",
"RainingDays",
"seeds",
]
for i in range(1, 100):
X_train, X_test, y_train, y_test = train_test_split(
X, y, test_size=0.3, random_state=i
)
for feature in scale_needed_features:
scaling(feature)
rfr = RandomForestRegressor(random_state=RS)
rfr.fit(X_train, y_train.values.ravel())
rfr_prediction = rfr.predict(X_test)
mae_rfr = mean_absolute_error(y_test, rfr_prediction)
list_mae_rfr.append(mae_rfr)
print(f"Mean RFR 100-FOLD: {np.mean(list_mae_rfr)}")
print(f"Median RFR 100-FOLD: {np.median(list_mae_rfr)}")
# ## Linear Regression
X = train.drop(["yield"], axis=1)
y = train[["yield"]]
list_mae_lr = []
scale_needed_features = [
"MaxOfUpperTRange",
"MinOfUpperTRange",
"AverageOfUpperTRange",
"MaxOfLowerTRange",
"MinOfLowerTRange",
"AverageOfLowerTRange",
"RainingDays",
"seeds",
]
for i in range(1, 100):
X_train, X_test, y_train, y_test = train_test_split(
X, y, test_size=0.3, random_state=i
)
for feature in scale_needed_features:
scaling(feature)
lr = LinearRegression()
lr.fit(X_train, y_train)
lr_prediction = lr.predict(X_test)
mae_lr = mean_absolute_error(y_test, lr_prediction)
list_mae_lr.append(mae_lr)
print(f"Mean LR 100-FOLD: {np.mean(list_mae_lr)}")
print(f"Median LR 100-FOLD: {np.median(list_mae_lr)}")
# ### XGBRegressor
X = train.drop(["yield"], axis=1)
y = train[["yield"]]
list_mae_xgb = []
scale_needed_features = [
"MaxOfUpperTRange",
"MinOfUpperTRange",
"AverageOfUpperTRange",
"MaxOfLowerTRange",
"MinOfLowerTRange",
"AverageOfLowerTRange",
"RainingDays",
"seeds",
]
for i in range(1, 100):
X_train, X_test, y_train, y_test = train_test_split(
X, y, test_size=0.3, random_state=i
)
for feature in scale_needed_features:
scaling(feature)
xgb = XGBRegressor(random_state=RS, max_depth=3, n_estimators=100)
xgb.fit(X_train, y_train)
xgb_prediction = xgb.predict(X_test)
mae_xgb = mean_absolute_error(y_test, xgb_prediction)
list_mae_xgb.append(mae_xgb)
print(f"Mean XGB 100-FOLD: {np.mean(list_mae_xgb)}")
print(f"Median XGB 100-FOLD: {np.median(list_mae_xgb)}")
# The Mean & median for XGB is the best. Also, XGB **performance** was 13.5 times FASTER than Random Forests.
# # Final Evaluation - XGB
X_train = train.drop(["yield"], axis=1)
y_train = train[["yield"]]
X_test = test.copy()
scale_needed_features = [
"MaxOfUpperTRange",
"MinOfUpperTRange",
"AverageOfUpperTRange",
"MaxOfLowerTRange",
"MinOfLowerTRange",
"AverageOfLowerTRange",
"RainingDays",
"seeds",
]
for feature in scale_needed_features:
scaling(feature)
xgb = XGBRegressor(random_state=RS, max_depth=3, n_estimators=100)
xgb.fit(X_train, y_train)
xgb_prediction = xgb.predict(X_test)
# # Result
result = pd.DataFrame({"id": X_test.index, "yield": xgb_prediction})
result = pd.DataFrame({"yield": xgb_prediction}).set_index(X_test.index)
result
result.to_csv("first_sub.csv")
first_sub = pd.read_csv("first_sub.csv")
first_sub
# Author: amyrmahdy
# Date: 12 May 2023
| false | 0 | 1,713 | 0 | 1,713 | 1,713 |
||
129435342
|
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
# pd.options.display.max_columns = None
# pd.options.display.max_rows = None
import os
for dirname, _, filenames in os.walk("/kaggle/input"):
for filename in filenames:
print(os.path.join(dirname, filename))
#
#
# IMPORT
#
#
train = pd.read_csv(
"/kaggle/input/house-prices-advanced-regression-techniques/train.csv"
)
test = pd.read_csv("/kaggle/input/house-prices-advanced-regression-techniques/test.csv")
actual = pd.read_csv(
"/kaggle/input/house-prices-advanced-regression-techniques/sample_submission.csv"
)
actual = actual.SalePrice
#
#
# DATA WRANGLING & TIDYING
#
# **1. explore columns with null values**
train = train[train.notna().sum().sort_values().index]
train.info()
train_with_null = train.columns[train.isnull().any()].tolist()
for column in train_with_null:
print(f"{train[column].describe()}\n")
test = test[test.notna().sum().sort_values().index]
test.info()
test_with_null = test.columns[test.isnull().any()].tolist()
for column in test_with_null:
print(f"{test[column].describe()}\n")
combine = [train, test]
# **2. drop columns:**
# * with ~50% or more missing data
# * with low variance (high frequency of top value) — features with low variance do not contribute much information to a model
for dataset in combine:
maxRows = dataset["Id"].sort_values().count()
print("% Missing Data:")
print((1 - dataset.count() / maxRows) * 100)
# drop columns with ~50% or more missing data
for dataset in combine:
dataset.drop(
columns=["PoolQC", "MiscFeature", "Alley", "Fence", "FireplaceQu"], inplace=True
)
# drop columns with low variance
for dataset in combine:
dataset.drop(
columns=[
"GarageQual",
"GarageCond",
"GarageYrBlt",
"GarageType",
"BsmtCond",
"BsmtQual",
"BsmtExposure",
"BsmtFinType2",
"MSZoning",
"Utilities",
"Functional",
"SaleType",
],
inplace=True,
)
# also drop uninformative columns
for dataset in combine:
dataset.drop(columns=["YearRemodAdd", "YrSold", "MoSold"], inplace=True)
# **3. fill columns that contain nan**
from sklearn.base import TransformerMixin
class DataFrameImputer(TransformerMixin):
def __init__(self):
"""Impute missing values.
Columns of dtype object are imputed with the most frequent value
in column.
Columns of other types are imputed with mean of column.
"""
def fit(self, X, y=None):
self.fill = pd.Series(
[
X[c].value_counts().index[0]
if X[c].dtype == np.dtype("O")
else X[c].mean()
for c in X
],
index=X.columns,
)
return self
def transform(self, X, y=None):
return X.fillna(self.fill)
data = train
train = pd.DataFrame(data)
train = DataFrameImputer().fit_transform(train)
data = test
test = pd.DataFrame(data)
test = DataFrameImputer().fit_transform(test)
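# Optional sketch (alternative to the two separate fit_transform calls above): fitting the
# imputer on train only keeps the train/test fill values consistent and avoids using test
# statistics; shown commented out because the frames above are already imputed.
# imputer = DataFrameImputer().fit(train)
# train = imputer.transform(train)
# test = imputer.transform(test)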
# **4. transform float to int**
for dataset in combine:
dtypes_list = dataset.columns.to_series().groupby(dataset.dtypes).groups
dtypes_list
train = train.astype(
{
"LotFrontage": "int",
"MasVnrArea": "int",
"BsmtFullBath": "int",
"BsmtHalfBath": "int",
"BsmtFinSF2": "int",
"BsmtUnfSF": "int",
"TotalBsmtSF": "int",
"BsmtFinSF1": "int",
"GarageCars": "int",
"GarageArea": "int",
}
)
test = test.astype(
{
"LotFrontage": "int",
"MasVnrArea": "int",
"BsmtFullBath": "int",
"BsmtHalfBath": "int",
"BsmtFinSF2": "int",
"BsmtUnfSF": "int",
"TotalBsmtSF": "int",
"BsmtFinSF1": "int",
"GarageCars": "int",
"GarageArea": "int",
}
)
combine = [train, test]
train.YearBuilt.value_counts().sort_index(ascending=False)
train.YearBuilt = pd.qcut(train.YearBuilt, [0, 0.25, 0.5, 0.75, 1], labels=[1, 2, 3, 4])
train.YearBuilt = train.YearBuilt.astype("int64")
print(train.head())
# **5. transform categorical data**
from sklearn.preprocessing import LabelEncoder
object_data = [
"GarageFinish",
"BsmtFinType1",
"MasVnrType",
"KitchenQual",
"Exterior2nd",
"Exterior1st",
"PavedDrive",
"Heating",
"Street",
"LotShape",
"LandContour",
"LotConfig",
"LandSlope",
"Neighborhood",
"Condition1",
"Condition2",
"BldgType",
"HouseStyle",
"RoofStyle",
"RoofMatl",
"ExterQual",
"ExterCond",
"Foundation",
"HeatingQC",
"CentralAir",
"Electrical",
"SaleCondition",
]
encoder = LabelEncoder()
for dataset in combine:
for i in object_data:
dataset[i] = encoder.fit_transform(dataset[i])
# for dataset in combine:
# for i in object_data:
# print(f'{dataset[i].value_counts()}\n')
from sklearn.ensemble import RandomForestClassifier
features = [
"LotFrontage",
"GarageFinish",
"BsmtFinType1",
"MasVnrArea",
"MasVnrType",
"Electrical",
"KitchenQual",
"BedroomAbvGr",
"HalfBath",
"FullBath",
"BsmtHalfBath",
"TotRmsAbvGrd",
"BsmtFullBath",
"KitchenAbvGr",
"Id",
"GrLivArea",
"GarageCars",
"GarageArea",
"PavedDrive",
"WoodDeckSF",
"OpenPorchSF",
"EnclosedPorch",
"3SsnPorch",
"ScreenPorch",
"PoolArea",
"MiscVal",
"Fireplaces",
"LowQualFinSF",
"HeatingQC",
"1stFlrSF",
"HouseStyle",
"BldgType",
"Condition2",
"Condition1",
"Neighborhood",
"LandSlope",
"OverallQual",
"LotConfig",
"LandContour",
"LotShape",
"Street",
"LotArea",
"MSSubClass",
"2ndFlrSF",
"OverallCond",
"CentralAir",
"SaleCondition",
"Heating",
"TotalBsmtSF",
"BsmtUnfSF",
"BsmtFinSF2",
"YearBuilt",
"BsmtFinSF1",
"ExterCond",
"ExterQual",
"Exterior2nd",
"Exterior1st",
"RoofMatl",
"RoofStyle",
"Foundation",
]
X_train = train[features]
y_train = train.SalePrice
X_test = test[features]
y_test = actual
model = RandomForestClassifier()
from sklearn.tree import DecisionTreeClassifier
clf = DecisionTreeClassifier(criterion="gini")
# Fit the decision tree classifier
clf = clf.fit(X_train, y_train)
clf.feature_importances_
from sklearn.model_selection import GridSearchCV
param_grid = {
"n_estimators": [5, 10, 100],
"max_depth": [10, 20, 50],
"min_samples_split": [5, 10, 20],
"min_samples_leaf": [5, 10, 20],
"bootstrap": [False],
"criterion": ["gini"],
}
gs = GridSearchCV(model, param_grid=param_grid, cv=3, verbose=1, n_jobs=-1)
gs.fit(X_train, y_train)
print(gs.best_estimator_)
predictions = gs.best_estimator_.predict(X_test)
print(predictions)
from sklearn.metrics import mean_squared_error
RMSE = np.sqrt(mean_squared_error(y_test, predictions))
print(RMSE)
output = pd.DataFrame({"Id": test.Id, "SalePrice": predictions})
output.to_csv("submission.csv", index=False)
print("Your submission was successfully saved!")
|
/fsx/loubna/kaggle_data/kaggle-code-data/data/0129/435/129435342.ipynb
| null | null |
[{"Id": 129435342, "ScriptId": 38442203, "ParentScriptVersionId": NaN, "ScriptLanguageId": 9, "AuthorUserId": 13731473, "CreationDate": "05/13/2023 19:23:37", "VersionNumber": 7.0, "Title": "House-Prices_solution", "EvaluationDate": "05/13/2023", "IsChange": true, "TotalLines": 208.0, "LinesInsertedFromPrevious": 7.0, "LinesChangedFromPrevious": 0.0, "LinesUnchangedFromPrevious": 201.0, "LinesInsertedFromFork": NaN, "LinesDeletedFromFork": NaN, "LinesChangedFromFork": NaN, "LinesUnchangedFromFork": NaN, "TotalVotes": 0}]
| null | null | null | null |
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
# pd.options.display.max_columns = None
# pd.options.display.max_rows = None
import os
for dirname, _, filenames in os.walk("/kaggle/input"):
for filename in filenames:
print(os.path.join(dirname, filename))
#
#
# IMPORT
#
#
train = pd.read_csv(
"/kaggle/input/house-prices-advanced-regression-techniques/train.csv"
)
test = pd.read_csv("/kaggle/input/house-prices-advanced-regression-techniques/test.csv")
actual = pd.read_csv(
"/kaggle/input/house-prices-advanced-regression-techniques/sample_submission.csv"
)
actual = actual.SalePrice
#
#
# DATA WRANGLING & TIDYING
#
# **1. explore columns with null values**
train = train[train.notna().sum().sort_values().index]
train.info()
train_with_null = train.columns[train.isnull().any()].tolist()
for column in train_with_null:
print(f"{train[column].describe()}\n")
test = test[test.notna().sum().sort_values().index]
test.info()
test_with_null = test.columns[test.isnull().any()].tolist()
for column in test_with_null:
print(f"{test[column].describe()}\n")
combine = [train, test]
# **2. drop columns:**
# * with ~50% or more missing data
# * with low variance (high frequency of top value) — features with low variance do not contribute much information to a model
for dataset in combine:
maxRows = dataset["Id"].sort_values().count()
print("% Missing Data:")
print((1 - dataset.count() / maxRows) * 100)
# drop columns with ~50% or more missing data
for dataset in combine:
dataset.drop(
columns=["PoolQC", "MiscFeature", "Alley", "Fence", "FireplaceQu"], inplace=True
)
# drop columns with low variance
for dataset in combine:
dataset.drop(
columns=[
"GarageQual",
"GarageCond",
"GarageYrBlt",
"GarageType",
"BsmtCond",
"BsmtQual",
"BsmtExposure",
"BsmtFinType2",
"MSZoning",
"Utilities",
"Functional",
"SaleType",
],
inplace=True,
)
# also drop uninformative columns
for dataset in combine:
dataset.drop(columns=["YearRemodAdd", "YrSold", "MoSold"], inplace=True)
# **3. fill columns that contain nan**
from sklearn.base import TransformerMixin
class DataFrameImputer(TransformerMixin):
def __init__(self):
"""Impute missing values.
Columns of dtype object are imputed with the most frequent value
in column.
Columns of other types are imputed with mean of column.
"""
def fit(self, X, y=None):
self.fill = pd.Series(
[
X[c].value_counts().index[0]
if X[c].dtype == np.dtype("O")
else X[c].mean()
for c in X
],
index=X.columns,
)
return self
def transform(self, X, y=None):
return X.fillna(self.fill)
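# A quick illustration of the imputer's behaviour on a made-up two-column frame (object column -> most frequent value, numeric column -> mean); the toy data below exists only for this example.
demo = pd.DataFrame({"color": ["red", "red", None], "size": [1.0, None, 3.0]})
print(DataFrameImputer().fit_transform(demo))  # missing color -> "red", missing size -> 2.0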
data = train
train = pd.DataFrame(data)
train = DataFrameImputer().fit_transform(train)
data = test
test = pd.DataFrame(data)
test = DataFrameImputer().fit_transform(test)
# **4. transform float to int**
for dataset in combine:
dtypes_list = dataset.columns.to_series().groupby(dataset.dtypes).groups
dtypes_list
train = train.astype(
{
"LotFrontage": "int",
"MasVnrArea": "int",
"BsmtFullBath": "int",
"BsmtHalfBath": "int",
"BsmtFinSF2": "int",
"BsmtUnfSF": "int",
"TotalBsmtSF": "int",
"BsmtFinSF1": "int",
"GarageCars": "int",
"GarageArea": "int",
}
)
test = test.astype(
{
"LotFrontage": "int",
"MasVnrArea": "int",
"BsmtFullBath": "int",
"BsmtHalfBath": "int",
"BsmtFinSF2": "int",
"BsmtUnfSF": "int",
"TotalBsmtSF": "int",
"BsmtFinSF1": "int",
"GarageCars": "int",
"GarageArea": "int",
}
)
combine = [train, test]
train.YearBuilt.value_counts().sort_index(ascending=False)
# bin YearBuilt into train quartiles and reuse the same edges on test so both share one encoding
train.YearBuilt, year_bins = pd.qcut(train.YearBuilt, [0, 0.25, 0.5, 0.75, 1], labels=[1, 2, 3, 4], retbins=True)
train.YearBuilt = train.YearBuilt.astype("int64")
test.YearBuilt = pd.cut(test.YearBuilt.clip(year_bins[0], year_bins[-1]), bins=year_bins, labels=[1, 2, 3, 4], include_lowest=True).astype("int64")
print(train.head())
# **5. transform categorical data**
from sklearn.preprocessing import LabelEncoder
object_data = [
"GarageFinish",
"BsmtFinType1",
"MasVnrType",
"KitchenQual",
"Exterior2nd",
"Exterior1st",
"PavedDrive",
"Heating",
"Street",
"LotShape",
"LandContour",
"LotConfig",
"LandSlope",
"Neighborhood",
"Condition1",
"Condition2",
"BldgType",
"HouseStyle",
"RoofStyle",
"RoofMatl",
"ExterQual",
"ExterCond",
"Foundation",
"HeatingQC",
"CentralAir",
"Electrical",
"SaleCondition",
]
encoder = LabelEncoder()
for dataset in combine:
for i in object_data:
dataset[i] = encoder.fit_transform(dataset[i])
# for dataset in combine:
# for i in object_data:
# print(f'{dataset[i].value_counts()}\n')
from sklearn.ensemble import RandomForestClassifier
features = [
"LotFrontage",
"GarageFinish",
"BsmtFinType1",
"MasVnrArea",
"MasVnrType",
"Electrical",
"KitchenQual",
"BedroomAbvGr",
"HalfBath",
"FullBath",
"BsmtHalfBath",
"TotRmsAbvGrd",
"BsmtFullBath",
"KitchenAbvGr",
"Id",
"GrLivArea",
"GarageCars",
"GarageArea",
"PavedDrive",
"WoodDeckSF",
"OpenPorchSF",
"EnclosedPorch",
"3SsnPorch",
"ScreenPorch",
"PoolArea",
"MiscVal",
"Fireplaces",
"LowQualFinSF",
"HeatingQC",
"1stFlrSF",
"HouseStyle",
"BldgType",
"Condition2",
"Condition1",
"Neighborhood",
"LandSlope",
"OverallQual",
"LotConfig",
"LandContour",
"LotShape",
"Street",
"LotArea",
"MSSubClass",
"2ndFlrSF",
"OverallCond",
"CentralAir",
"SaleCondition",
"Heating",
"TotalBsmtSF",
"BsmtUnfSF",
"BsmtFinSF2",
"YearBuilt",
"BsmtFinSF1",
"ExterCond",
"ExterQual",
"Exterior2nd",
"Exterior1st",
"RoofMatl",
"RoofStyle",
"Foundation",
]
X_train = train[features]
y_train = train.SalePrice
X_test = test[features]
y_test = actual
model = RandomForestClassifier()
from sklearn.tree import DecisionTreeClassifier
clf = DecisionTreeClassifier(criterion="gini")
# Fit the decision tree classifier
clf = clf.fit(X_train, y_train)
clf.feature_importances_
from sklearn.model_selection import GridSearchCV
param_grid = {
"n_estimators": [5, 10, 100],
"max_depth": [10, 20, 50],
"min_samples_split": [5, 10, 20],
"min_samples_leaf": [5, 10, 20],
"bootstrap": [False],
"criterion": ["gini"],
}
gs = GridSearchCV(model, param_grid=param_grid, cv=3, verbose=1, n_jobs=-1)
gs.fit(X_train, y_train)
print(gs.best_estimator_)
predictions = gs.best_estimator_.predict(X_test)
print(predictions)
from sklearn.metrics import mean_squared_error
RMSE = mean_squared_error(y_test, predictions, squared=False)
print(RMSE)
output = pd.DataFrame({"Id": test.Id, "SalePrice": predictions})
output.to_csv("submission.csv", index=False)
print("Your submission was successfully saved!")
| false | 0 | 2,239 | 0 | 2,239 | 2,239 |
||
129435979
|
import pandas as pd
from statsmodels.graphics.tsaplots import plot_acf
import matplotlib.pyplot as plt
import numpy as np
from statsmodels.tsa.holtwinters import ExponentialSmoothing
import sys
from statsmodels.tsa.seasonal import seasonal_decompose
from sklearn.metrics import mean_squared_error, mean_absolute_error
from statsmodels.tsa.arima.model import ARIMA
import warnings
warnings.filterwarnings("ignore")
train = pd.read_csv("/kaggle/input/fixed-data-new/train_fixed_new.csv")
test = pd.read_csv("/kaggle/input/fixed-data-new/test_fixed_new.csv")
# # 1. Data preparation
# Here I will find the machines that show seasonality, using plot_acf.
# AI10158
# AI10635
# AL11466
# AL12144
# CI12166
# DL101579
# DS100760
#
for machine in train.machine_name.unique():
machine_data = train[train["machine_name"] == machine]["total"]
decomposition = seasonal_decompose(
machine_data, model="additive", period=int(np.floor(len(machine_data) / 2))
)
seasonal_component = decomposition.seasonal
if abs(seasonal_component).mean() > 10000: # Adjust the threshold as needed
print("Seasonality detected-" + machine)
def getAnomalyLine(centil):
eff_1 = []
anomalije = train.loc[(train["label"] == 1)]
for index, row in anomalije.iterrows():
eff_1.append(row["broken"] / row["total"])
num_to_drop = int(len(eff_1) * centil)
eff_1.sort()
eff_1 = eff_1[num_to_drop:]
return eff_1[0]
def visualizeEff(machine_name, centil):
machine_data = train.loc[
(train["machine_name"] == machine_name) & (train["day"] > 364)
]
plt.figure(figsize=(20, 8))
anomalyLine = getAnomalyLine(centil)
eff = []
dani_anomalija = []
anomalija_eff = []
line = []
dani = []
for index, row in machine_data.iterrows():
eff.append(row["broken"] / row["total"])
line.append(anomalyLine)
dani.append(row["day"])
if row["label"] == 1:
dani_anomalija.append(row["day"])
anomalija_eff.append(row["broken"] / row["total"])
plt.title(machine_name)
plt.plot(dani, eff, "g-", label="Linija efektivnosti stroja po danima")
plt.scatter(
dani_anomalija,
anomalija_eff,
c="r",
edgecolors="black",
s=75,
label="Anomalije",
)
plt.plot(dani, line, "k--", label="Linija efikasnosti za dan centil")
plt.legend(loc="best")
plt.show()
return
def visualizeData(machine_name):
machine_data = train.loc[
(train["machine_name"] == machine_name) & (train["day"] > 364)
]
plt.figure(figsize=(20, 8))
total = []
broken = []
anomalija_day = []
anomalija_total = []
anomalija_broken = []
for index, row in machine_data.iterrows():
total.append(row["total"])
broken.append(row["broken"])
if row["label"] == 1:
anomalija_total.append(row["total"])
anomalija_broken.append(row["broken"])
anomalija_day.append(row["day"])
plt.title(machine_name)
plt.scatter(
range(365, 365 + len(total)),
np.log(total),
c="cyan",
edgecolors="black",
label="Total",
)
plt.scatter(
range(365, 365 + len(broken)),
np.log(broken),
c="yellow",
edgecolors="black",
label="Broken",
)
# plt.scatter(range(365,365+len(total)), total, c='cyan', edgecolors= "black", label='Total')
# plt.scatter(range(365,365+len(broken)), np.log(broken), c='yellow',edgecolors= "black", label='Broken')
plt.scatter(
anomalija_day,
np.log(anomalija_total),
c="b",
s=100,
edgecolors="black",
label="Total kod anomalije",
)
plt.scatter(
anomalija_day,
np.log(anomalija_broken),
c="r",
s=100,
edgecolors="black",
label="Broken kod anomalije",
)
plt.legend(loc="best")
plt.show()
return
visualizeData("CI101712")
visualizeEff("CI101712", 0.5)
# # 2. Exponential smoothing
machine_data = train[train["machine_name"] == "CI101712"]["total"]
train_size = int(len(machine_data) * 0.7)
train_data = machine_data[:train_size]
test_data = machine_data[train_size:]
train_values = train_data.values
test_values = test_data.values
model = ExponentialSmoothing(train_values).fit()
pred = model.predict(
start=len(train_values), end=len(train_values) + len(test_values) - 1
)
rmse = mean_squared_error(test_values, pred, squared=False)
mae = mean_absolute_error(test_values, pred)
print("RMSE: " + str(rmse))
print("MAE: " + str(mae))
plt.figure(figsize=(20, 8))
plt.plot(range(1, len(train_values) + 1), train_values, label="train")
plt.plot(
range(len(train_values) + 1, len(train_values) + len(test_values) + 1),
test_values,
label="test",
)
plt.plot(
range(len(train_values) + 1, len(train_values) + len(test_values) + 1),
pred,
label="pred",
)
plt.title("Exponential Smoothing")
plt.legend()
plt.show()
# seasonal_periods=23 because the series covers 162 days and roughly 7 periods are visible above
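# A quick arithmetic check of that eyeballed period (sketch):
print(len(train_values), "training points ->", len(train_values) // 7, "points per period")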
best_level = 0
best_slope = 0
best_seasonal = 0
best_error = sys.maxsize
for smoothing_level in [0.1, 0.2, 0.4]:
for smoothing_slope in [0.1, 0.2, 0.4]:
for smoothing_seasonal in [0.1, 0.2, 0.4]:
model = ExponentialSmoothing(
train_values, seasonal="add", seasonal_periods=23
).fit(
smoothing_level=smoothing_level,
smoothing_slope=smoothing_slope,
smoothing_seasonal=smoothing_seasonal,
)
pred = model.predict(
start=len(train_values), end=len(train_values) + len(test_values) - 1
)
rmse = mean_squared_error(test_values, pred, squared=False)
mae = mean_absolute_error(test_values, pred)
print("Parametri:")
print("Soothing_level: " + str(smoothing_level), end="")
print(", Soothing_slope: " + str(smoothing_slope), end="")
print(", Soothing_seasonal: " + str(smoothing_seasonal))
print("RMSE: " + str(rmse))
print("MAE: " + str(mae))
print()
if rmse < best_error:
best_level = smoothing_level
best_slope = smoothing_slope
best_seasonal = smoothing_seasonal
best_error = rmse
print()
print("Najbolji parametri:")
print("Soothing_level: " + str(best_level))
print("Soothing_slope: " + str(best_slope))
print("Soothing_seasonal: " + str(best_seasonal))
model = ExponentialSmoothing(train_values, seasonal="add", seasonal_periods=23).fit(
smoothing_level=best_level,
smoothing_slope=best_slope,
smoothing_seasonal=best_seasonal,
)
pred = model.predict(
start=len(train_values), end=len(train_values) + len(test_values) - 1
)
plt.figure(figsize=(20, 8))
plt.plot(range(1, len(train_values) + 1), train_values, label="train")
plt.plot(
range(len(train_values) + 1, len(train_values) + len(test_values) + 1),
test_values,
label="test",
)
plt.plot(
range(len(train_values) + 1, len(train_values) + len(test_values) + 1),
pred,
label="pred",
)
plt.title(
"s_level="
+ str(best_level)
+ "s_slope="
+ str(best_slope)
+ "s_seasonal="
+ str(best_seasonal)
)
plt.legend()
plt.show()
# # 3. ARIMA
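# Before grid-searching ARIMA orders below, a stationarity check is a common first step for choosing d. A minimal sketch with adfuller (added for illustration -- the notebook itself picks d purely by grid search):
from statsmodels.tsa.stattools import adfuller
adf_stat, adf_p = adfuller(train_values)[:2]
print("ADF statistic:", adf_stat, "p-value:", adf_p)  # p > 0.05 suggests differencing may help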
model = ARIMA(train_data, order=(0, 0, 0)).fit()
pred = model.predict(start=len(train_data), end=len(train_data) + len(test_data) - 1)
forecast = model.get_forecast(steps=len(test_data))
conf_int = np.asarray(forecast.conf_int(alpha=0.05))
rmse = mean_squared_error(test_values, pred, squared=False)
mae = mean_absolute_error(test_values, pred)
print("RMSE: " + str(rmse))
print("MAE: " + str(mae))
plt.figure(figsize=(20, 8))
plt.plot(range(1, len(train_values) + 1), train_values, label="train")
plt.plot(
range(len(train_values) + 1, len(train_values) + len(test_values) + 1),
test_values,
label="test",
)
plt.plot(
range(len(train_values) + 1, len(train_values) + len(test_values) + 1),
pred,
label="pred",
)
plt.fill_between(
range(len(train_values) + 1, len(train_values) + len(test_values) + 1),
conf_int[:, 0],
conf_int[:, 1],
color="lightskyblue",
alpha=0.3,
label="conf int",
)
plt.title("ARIMA")
plt.legend()
plt.show()
# p should be around dataset length/10, meaning up to 2
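# The rule of thumb above bounds p by the series length; the partial autocorrelation plot is another common way to eyeball candidate AR orders (a sketch, not used elsewhere in this notebook):
from statsmodels.graphics.tsaplots import plot_pacf
plot_pacf(train_data, lags=20)  # lags with significant spikes are candidate values of p
plt.show()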
best_p = 0
best_error = sys.maxsize
for p in range(10):
model = ARIMA(train_data, order=(p, 0, 0)).fit()
pred = model.predict(
start=len(train_data), end=len(train_data) + len(test_data) - 1
)
rmse = mean_squared_error(test_values, pred, squared=False)
mae = mean_absolute_error(test_values, pred)
print("P: " + str(p))
print("RMSE: " + str(rmse))
print("MAE: " + str(mae))
print()
if rmse < best_error:
best_p = p
best_error = rmse
print()
print("Najbolji parametri:")
print("P: " + str(best_p))
model = ARIMA(train_data, order=(best_p, 0, 0)).fit()
pred = model.predict(start=len(train_data), end=len(train_data) + len(test_data) - 1)
forecast = model.get_forecast(steps=len(test_data))
conf_int = np.asarray(forecast.conf_int(alpha=0.05))
plt.figure(figsize=(20, 8))
plt.plot(range(1, len(train_values) + 1), train_values, label="train")
plt.plot(
range(len(train_values) + 1, len(train_values) + len(test_values) + 1),
test_values,
label="test",
)
plt.plot(
range(len(train_values) + 1, len(train_values) + len(test_values) + 1),
pred,
label="pred",
)
plt.fill_between(
range(len(train_values) + 1, len(train_values) + len(test_values) + 1),
conf_int[:, 0],
conf_int[:, 1],
color="lightskyblue",
alpha=0.3,
label="conf int",
)
plt.title("ARIMA za P=" + str(best_p))
plt.legend()
plt.show()
best_q = 0
best_error = sys.maxsize
for q in range(10):
model = ARIMA(train_data, order=(0, 0, q)).fit()
pred = model.predict(
start=len(train_data), end=len(train_data) + len(test_data) - 1
)
rmse = mean_squared_error(test_values, pred, squared=False)
mae = mean_absolute_error(test_values, pred)
print("Q: " + str(q))
print("RMSE: " + str(rmse))
print("MAE: " + str(mae))
print()
if rmse < best_error:
best_q = q
best_error = rmse
print()
print("Najbolji parametri:")
print("Q: " + str(best_q))
model = ARIMA(train_data, order=(0, 0, best_q)).fit()
pred = model.predict(start=len(train_data), end=len(train_data) + len(test_data) - 1)
forecast = model.get_forecast(steps=len(test_data))
conf_int = np.asarray(forecast.conf_int(alpha=0.05))
plt.figure(figsize=(20, 8))
plt.plot(range(1, len(train_values) + 1), train_values, label="train")
plt.plot(
range(len(train_values) + 1, len(train_values) + len(test_values) + 1),
test_values,
label="test",
)
plt.plot(
range(len(train_values) + 1, len(train_values) + len(test_values) + 1),
pred,
label="pred",
)
plt.fill_between(
range(len(train_values) + 1, len(train_values) + len(test_values) + 1),
conf_int[:, 0],
conf_int[:, 1],
color="lightskyblue",
alpha=0.3,
label="conf int",
)
plt.title("ARIMA za Q=" + str(best_q))
plt.legend()
plt.show()
best_d = 0
best_error = sys.maxsize
for d in range(10):
model = ARIMA(train_data, order=(0, d, 0)).fit()
pred = model.predict(
start=len(train_data), end=len(train_data) + len(test_data) - 1
)
rmse = mean_squared_error(test_values, pred, squared=False)
mae = mean_absolute_error(test_values, pred)
print("D: " + str(d))
print("RMSE: " + str(rmse))
print("MAE: " + str(mae))
print()
if rmse < best_error:
best_d = d
best_error = rmse
print()
print("Najbolji parametri:")
print("D: " + str(best_d))
model = ARIMA(train_data, order=(0, best_d, 0)).fit()
pred = model.predict(start=len(train_data), end=len(train_data) + len(test_data) - 1)
forecast = model.get_forecast(steps=len(test_data))
conf_int = np.asarray(forecast.conf_int(alpha=0.05))
plt.figure(figsize=(20, 8))
plt.plot(range(1, len(train_values) + 1), train_values, label="train")
plt.plot(
range(len(train_values) + 1, len(train_values) + len(test_values) + 1),
test_values,
label="test",
)
plt.plot(
range(len(train_values) + 1, len(train_values) + len(test_values) + 1),
pred,
label="pred",
)
plt.fill_between(
range(len(train_values) + 1, len(train_values) + len(test_values) + 1),
conf_int[:, 0],
conf_int[:, 1],
color="lightskyblue",
alpha=0.3,
label="conf int",
)
plt.title("ARIMA za D=" + str(best_d))
plt.legend()
plt.show()
best_p = 0
best_d = 0
best_q = 0
best_error = sys.maxsize
for p in range(7, 10):
for d in range(3):
for q in range(7, 10):
model = ARIMA(train_data, order=(p, d, q)).fit()
pred = model.predict(
start=len(train_data), end=len(train_data) + len(test_data) - 1
)
rmse = mean_squared_error(test_values, pred, squared=False)
mae = mean_absolute_error(test_values, pred)
print("Parametri:")
print("P: " + str(p), end="")
print(", D: " + str(d), end="")
print(", Q: " + str(q))
print("RMSE: " + str(rmse))
print("MAE: " + str(mae))
print()
if rmse < best_error:
best_p = p
best_d = d
best_q = q
best_error = rmse
print("Najbolji parametri:")
print("P: " + str(best_p))
print("D: " + str(best_d))
print("Q: " + str(best_q))
model = ARIMA(train_data, order=(best_p, best_d, best_q)).fit()
pred = model.predict(start=len(train_data), end=len(train_data) + len(test_data) - 1)
forecast = model.get_forecast(steps=len(test_data))
conf_int = np.asarray(forecast.conf_int(alpha=0.05))
plt.figure(figsize=(20, 8))
plt.plot(range(1, len(train_values) + 1), train_values, label="train")
plt.plot(
range(len(train_values) + 1, len(train_values) + len(test_values) + 1),
test_values,
label="test",
)
plt.plot(
range(len(train_values) + 1, len(train_values) + len(test_values) + 1),
pred,
label="pred",
)
plt.fill_between(
range(len(train_values) + 1, len(train_values) + len(test_values) + 1),
conf_int[:, 0],
conf_int[:, 1],
color="lightskyblue",
alpha=0.3,
label="conf int",
)
plt.title("ARIMA za P=" + str(best_p) + ", D=" + str(best_d) + ", Q=" + str(best_q))
plt.legend()
plt.show()
best_s = 0
best_error = sys.maxsize
for s in range(2, 10):
model = ARIMA(train_data, order=(0, 0, 0), seasonal_order=(0, 0, 0, s)).fit()
pred = model.predict(
start=len(train_data), end=len(train_data) + len(test_data) - 1
)
rmse = mean_squared_error(test_values, pred, squared=False)
mae = mean_absolute_error(test_values, pred)
print("S: " + str(s))
print("RMSE: " + str(rmse))
print("MAE: " + str(mae))
print()
if rmse < best_error:
best_s = s
best_error = rmse
print()
print("Najbolji parametri:")
print("S: " + str(best_s))
model = ARIMA(train_data, order=(0, 0, 0), seasonal_order=(0, 0, 0, best_s)).fit()
pred = model.predict(start=len(train_data), end=len(train_data) + len(test_data) - 1)
forecast = model.get_forecast(steps=len(test_data))
conf_int = np.asarray(forecast.conf_int(alpha=0.05))
plt.figure(figsize=(20, 8))
plt.plot(range(1, len(train_values) + 1), train_values, label="train")
plt.plot(
range(len(train_values) + 1, len(train_values) + len(test_values) + 1),
test_values,
label="test",
)
plt.plot(
range(len(train_values) + 1, len(train_values) + len(test_values) + 1),
pred,
label="pred",
)
plt.fill_between(
range(len(train_values) + 1, len(train_values) + len(test_values) + 1),
conf_int[:, 0],
conf_int[:, 1],
color="lightskyblue",
alpha=0.3,
label="conf int",
)
plt.title("ARIMA za s=" + str(best_s))
plt.legend()
plt.show()
|
/fsx/loubna/kaggle_data/kaggle-code-data/data/0129/435/129435979.ipynb
| null | null |
[{"Id": 129435979, "ScriptId": 38468064, "ParentScriptVersionId": NaN, "ScriptLanguageId": 9, "AuthorUserId": 9230157, "CreationDate": "05/13/2023 19:33:01", "VersionNumber": 2.0, "Title": "[MN <0036524183>] Time-series (TS)", "EvaluationDate": "05/13/2023", "IsChange": true, "TotalLines": 409.0, "LinesInsertedFromPrevious": 237.0, "LinesChangedFromPrevious": 0.0, "LinesUnchangedFromPrevious": 172.0, "LinesInsertedFromFork": NaN, "LinesDeletedFromFork": NaN, "LinesChangedFromFork": NaN, "LinesUnchangedFromFork": NaN, "TotalVotes": 0}]
| null | null | null | null |
import pandas as pd
from statsmodels.graphics.tsaplots import plot_acf
import matplotlib.pyplot as plt
import numpy as np
from statsmodels.tsa.holtwinters import ExponentialSmoothing
import sys
from statsmodels.tsa.seasonal import seasonal_decompose
from sklearn.metrics import mean_squared_error, mean_absolute_error
from statsmodels.tsa.arima.model import ARIMA
import warnings
warnings.filterwarnings("ignore")
train = pd.read_csv("/kaggle/input/fixed-data-new/train_fixed_new.csv")
test = pd.read_csv("/kaggle/input/fixed-data-new/test_fixed_new.csv")
# # 1. Data preparation
# Here I will find the machines that show seasonality, using plot_acf.
# AI10158
# AI10635
# AL11466
# AL12144
# CI12166
# DL101579
# DS100760
#
for machine in train.machine_name.unique():
machine_data = train[train["machine_name"] == machine]["total"]
decomposition = seasonal_decompose(
machine_data, model="additive", period=int(np.floor(len(machine_data) / 2))
)
seasonal_component = decomposition.seasonal
if abs(seasonal_component).mean() > 10000: # Adjust the threshold as needed
print("Seasonality detected-" + machine)
def getAnomalyLine(centil):
eff_1 = []
anomalije = train.loc[(train["label"] == 1)]
for index, row in anomalije.iterrows():
eff_1.append(row["broken"] / row["total"])
num_to_drop = int(len(eff_1) * centil)
eff_1.sort()
eff_1 = eff_1[num_to_drop:]
return eff_1[0]
def visualizeEff(machine_name, centil):
machine_data = train.loc[
(train["machine_name"] == machine_name) & (train["day"] > 364)
]
plt.figure(figsize=(20, 8))
anomalyLine = getAnomalyLine(centil)
eff = []
dani_anomalija = []
anomalija_eff = []
line = []
dani = []
for index, row in machine_data.iterrows():
eff.append(row["broken"] / row["total"])
line.append(anomalyLine)
dani.append(row["day"])
if row["label"] == 1:
dani_anomalija.append(row["day"])
anomalija_eff.append(row["broken"] / row["total"])
plt.title(machine_name)
plt.plot(dani, eff, "g-", label="Linija efektivnosti stroja po danima")
plt.scatter(
dani_anomalija,
anomalija_eff,
c="r",
edgecolors="black",
s=75,
label="Anomalije",
)
plt.plot(dani, line, "k--", label="Linija efikasnosti za dan centil")
plt.legend(loc="best")
plt.show()
return
def visualizeData(machine_name):
machine_data = train.loc[
(train["machine_name"] == machine_name) & (train["day"] > 364)
]
plt.figure(figsize=(20, 8))
total = []
broken = []
anomalija_day = []
anomalija_total = []
anomalija_broken = []
for index, row in machine_data.iterrows():
total.append(row["total"])
broken.append(row["broken"])
if row["label"] == 1:
anomalija_total.append(row["total"])
anomalija_broken.append(row["broken"])
anomalija_day.append(row["day"])
plt.title(machine_name)
plt.scatter(
range(365, 365 + len(total)),
np.log(total),
c="cyan",
edgecolors="black",
label="Total",
)
plt.scatter(
range(365, 365 + len(broken)),
np.log(broken),
c="yellow",
edgecolors="black",
label="Broken",
)
# plt.scatter(range(365,365+len(total)), total, c='cyan', edgecolors= "black", label='Total')
# plt.scatter(range(365,365+len(broken)), np.log(broken), c='yellow',edgecolors= "black", label='Broken')
plt.scatter(
anomalija_day,
np.log(anomalija_total),
c="b",
s=100,
edgecolors="black",
label="Total kod anomalije",
)
plt.scatter(
anomalija_day,
np.log(anomalija_broken),
c="r",
s=100,
edgecolors="black",
label="Broken kod anomalije",
)
plt.legend(loc="best")
plt.show()
return
visualizeData("CI101712")
visualizeEff("CI101712", 0.5)
# # 2. Exponential smoothing
machine_data = train[train["machine_name"] == "CI101712"]["total"]
train_size = int(len(machine_data) * 0.7)
train_data = machine_data[:train_size]
test_data = machine_data[train_size:]
train_values = train_data.values
test_values = test_data.values
model = ExponentialSmoothing(train_values).fit()
pred = model.predict(
start=len(train_values), end=len(train_values) + len(test_values) - 1
)
rmse = mean_squared_error(test_values, pred, squared=False)
mae = mean_absolute_error(test_values, pred)
print("RMSE: " + str(rmse))
print("MAE: " + str(mae))
plt.figure(figsize=(20, 8))
plt.plot(range(1, len(train_values) + 1), train_values, label="train")
plt.plot(
range(len(train_values) + 1, len(train_values) + len(test_values) + 1),
test_values,
label="test",
)
plt.plot(
range(len(train_values) + 1, len(train_values) + len(test_values) + 1),
pred,
label="pred",
)
plt.title("Exponential Smoothing")
plt.legend()
plt.show()
# seasonal_periods=23 because the series covers 162 days and roughly 7 periods are visible above
best_level = 0
best_slope = 0
best_seasonal = 0
best_error = sys.maxsize
for smoothing_level in [0.1, 0.2, 0.4]:
for smoothing_slope in [0.1, 0.2, 0.4]:
for smoothing_seasonal in [0.1, 0.2, 0.4]:
model = ExponentialSmoothing(
train_values, seasonal="add", seasonal_periods=23
).fit(
smoothing_level=smoothing_level,
smoothing_slope=smoothing_slope,
smoothing_seasonal=smoothing_seasonal,
)
pred = model.predict(
start=len(train_values), end=len(train_values) + len(test_values) - 1
)
rmse = mean_squared_error(test_values, pred, squared=False)
mae = mean_absolute_error(test_values, pred)
print("Parametri:")
print("Soothing_level: " + str(smoothing_level), end="")
print(", Soothing_slope: " + str(smoothing_slope), end="")
print(", Soothing_seasonal: " + str(smoothing_seasonal))
print("RMSE: " + str(rmse))
print("MAE: " + str(mae))
print()
if rmse < best_error:
best_level = smoothing_level
best_slope = smoothing_slope
best_seasonal = smoothing_seasonal
best_error = rmse
print()
print("Najbolji parametri:")
print("Soothing_level: " + str(best_level))
print("Soothing_slope: " + str(best_slope))
print("Soothing_seasonal: " + str(best_seasonal))
model = ExponentialSmoothing(train_values, seasonal="add", seasonal_periods=23).fit(
smoothing_level=best_level,
smoothing_slope=best_slope,
smoothing_seasonal=best_seasonal,
)
pred = model.predict(
start=len(train_values), end=len(train_values) + len(test_values) - 1
)
plt.figure(figsize=(20, 8))
plt.plot(range(1, len(train_values) + 1), train_values, label="train")
plt.plot(
range(len(train_values) + 1, len(train_values) + len(test_values) + 1),
test_values,
label="test",
)
plt.plot(
range(len(train_values) + 1, len(train_values) + len(test_values) + 1),
pred,
label="pred",
)
plt.title(
"s_level="
+ str(best_level)
+ "s_slope="
+ str(best_slope)
+ "s_seasonal="
+ str(best_seasonal)
)
plt.legend()
plt.show()
# # 3. ARIMA
model = ARIMA(train_data, order=(0, 0, 0)).fit()
pred = model.predict(start=len(train_data), end=len(train_data) + len(test_data) - 1)
forecast = model.get_forecast(steps=len(test_data))
conf_int = np.asarray(forecast.conf_int(alpha=0.05))
rmse = mean_squared_error(test_values, pred, squared=False)
mae = mean_absolute_error(test_values, pred)
print("RMSE: " + str(rmse))
print("MAE: " + str(mae))
plt.figure(figsize=(20, 8))
plt.plot(range(1, len(train_values) + 1), train_values, label="train")
plt.plot(
range(len(train_values) + 1, len(train_values) + len(test_values) + 1),
test_values,
label="test",
)
plt.plot(
range(len(train_values) + 1, len(train_values) + len(test_values) + 1),
pred,
label="pred",
)
plt.fill_between(
range(len(train_values) + 1, len(train_values) + len(test_values) + 1),
conf_int[:, 0],
conf_int[:, 1],
color="lightskyblue",
alpha=0.3,
label="conf int",
)
plt.title("ARIMA")
plt.legend()
plt.show()
# p should be around dataset length/10, meaning up to 2
best_p = 0
best_error = sys.maxsize
for p in range(10):
model = ARIMA(train_data, order=(p, 0, 0)).fit()
pred = model.predict(
start=len(train_data), end=len(train_data) + len(test_data) - 1
)
rmse = mean_squared_error(test_values, pred, squared=False)
mae = mean_absolute_error(test_values, pred)
print("P: " + str(p))
print("RMSE: " + str(rmse))
print("MAE: " + str(mae))
print()
if rmse < best_error:
best_p = p
best_error = rmse
print()
print("Najbolji parametri:")
print("P: " + str(best_p))
model = ARIMA(train_data, order=(best_p, 0, 0)).fit()
pred = model.predict(start=len(train_data), end=len(train_data) + len(test_data) - 1)
forecast = model.get_forecast(steps=len(test_data))
conf_int = np.asarray(forecast.conf_int(alpha=0.05))
plt.figure(figsize=(20, 8))
plt.plot(range(1, len(train_values) + 1), train_values, label="train")
plt.plot(
range(len(train_values) + 1, len(train_values) + len(test_values) + 1),
test_values,
label="test",
)
plt.plot(
range(len(train_values) + 1, len(train_values) + len(test_values) + 1),
pred,
label="pred",
)
plt.fill_between(
range(len(train_values) + 1, len(train_values) + len(test_values) + 1),
conf_int[:, 0],
conf_int[:, 1],
color="lightskyblue",
alpha=0.3,
label="conf int",
)
plt.title("ARIMA za P=" + str(best_p))
plt.legend()
plt.show()
best_q = 0
best_error = sys.maxsize
for q in range(10):
model = ARIMA(train_data, order=(0, 0, q)).fit()
pred = model.predict(
start=len(train_data), end=len(train_data) + len(test_data) - 1
)
rmse = mean_squared_error(test_values, pred, squared=False)
mae = mean_absolute_error(test_values, pred)
print("Q: " + str(q))
print("RMSE: " + str(rmse))
print("MAE: " + str(mae))
print()
if rmse < best_error:
best_q = q
best_error = rmse
print()
print("Najbolji parametri:")
print("Q: " + str(best_q))
model = ARIMA(train_data, order=(0, 0, best_q)).fit()
pred = model.predict(start=len(train_data), end=len(train_data) + len(test_data) - 1)
forecast = model.get_forecast(steps=len(test_data))
conf_int = np.asarray(forecast.conf_int(alpha=0.05))
plt.figure(figsize=(20, 8))
plt.plot(range(1, len(train_values) + 1), train_values, label="train")
plt.plot(
range(len(train_values) + 1, len(train_values) + len(test_values) + 1),
test_values,
label="test",
)
plt.plot(
range(len(train_values) + 1, len(train_values) + len(test_values) + 1),
pred,
label="pred",
)
plt.fill_between(
range(len(train_values) + 1, len(train_values) + len(test_values) + 1),
conf_int[:, 0],
conf_int[:, 1],
color="lightskyblue",
alpha=0.3,
label="conf int",
)
plt.title("ARIMA za Q=" + str(best_q))
plt.legend()
plt.show()
best_d = 0
best_error = sys.maxsize
for d in range(10):
model = ARIMA(train_data, order=(0, d, 0)).fit()
pred = model.predict(
start=len(train_data), end=len(train_data) + len(test_data) - 1
)
rmse = mean_squared_error(test_values, pred, squared=False)
mae = mean_absolute_error(test_values, pred)
print("D: " + str(d))
print("RMSE: " + str(rmse))
print("MAE: " + str(mae))
print()
if rmse < best_error:
best_d = d
best_error = rmse
print()
print("Najbolji parametri:")
print("D: " + str(best_d))
model = ARIMA(train_data, order=(0, best_d, 0)).fit()
pred = model.predict(start=len(train_data), end=len(train_data) + len(test_data) - 1)
forecast = model.get_forecast(steps=len(test_data))
conf_int = np.asarray(forecast.conf_int(alpha=0.05))
plt.figure(figsize=(20, 8))
plt.plot(range(1, len(train_values) + 1), train_values, label="train")
plt.plot(
range(len(train_values) + 1, len(train_values) + len(test_values) + 1),
test_values,
label="test",
)
plt.plot(
range(len(train_values) + 1, len(train_values) + len(test_values) + 1),
pred,
label="pred",
)
plt.fill_between(
range(len(train_values) + 1, len(train_values) + len(test_values) + 1),
conf_int[:, 0],
conf_int[:, 1],
color="lightskyblue",
alpha=0.3,
label="conf int",
)
plt.title("ARIMA za D=" + str(best_d))
plt.legend()
plt.show()
best_p = 0
best_d = 0
best_q = 0
best_error = sys.maxsize
for p in range(7, 10):
for d in range(3):
for q in range(7, 10):
model = ARIMA(train_data, order=(p, d, q)).fit()
pred = model.predict(
start=len(train_data), end=len(train_data) + len(test_data) - 1
)
rmse = mean_squared_error(test_values, pred, squared=False)
mae = mean_absolute_error(test_values, pred)
print("Parametri:")
print("P: " + str(p), end="")
print(", D: " + str(d), end="")
print(", Q: " + str(q))
print("RMSE: " + str(rmse))
print("MAE: " + str(mae))
print()
if rmse < best_error:
best_p = p
best_d = d
best_q = q
best_error = rmse
print("Najbolji parametri:")
print("P: " + str(best_p))
print("D: " + str(best_d))
print("Q: " + str(best_q))
model = ARIMA(train_data, order=(best_p, best_d, best_q)).fit()
pred = model.predict(start=len(train_data), end=len(train_data) + len(test_data) - 1)
forecast = model.get_forecast(steps=len(test_data))
conf_int = np.asarray(forecast.conf_int(alpha=0.05))
plt.figure(figsize=(20, 8))
plt.plot(range(1, len(train_values) + 1), train_values, label="train")
plt.plot(
range(len(train_values) + 1, len(train_values) + len(test_values) + 1),
test_values,
label="test",
)
plt.plot(
range(len(train_values) + 1, len(train_values) + len(test_values) + 1),
pred,
label="pred",
)
plt.fill_between(
range(len(train_values) + 1, len(train_values) + len(test_values) + 1),
conf_int[:, 0],
conf_int[:, 1],
color="lightskyblue",
alpha=0.3,
label="conf int",
)
plt.title("ARIMA za P=" + str(best_p) + ", D=" + str(best_d) + ", Q=" + str(best_q))
plt.legend()
plt.show()
best_s = 0
best_error = sys.maxsize
for s in range(2, 10):
model = ARIMA(train_data, order=(0, 0, 0), seasonal_order=(0, 0, 0, s)).fit()
pred = model.predict(
start=len(train_data), end=len(train_data) + len(test_data) - 1
)
rmse = mean_squared_error(test_values, pred, squared=False)
mae = mean_absolute_error(test_values, pred)
print("S: " + str(s))
print("RMSE: " + str(rmse))
print("MAE: " + str(mae))
print()
if rmse < best_error:
best_s = s
best_error = rmse
print()
print("Najbolji parametri:")
print("S: " + str(best_s))
model = ARIMA(train_data, order=(0, 0, 0), seasonal_order=(0, 0, 0, best_s)).fit()
pred = model.predict(start=len(train_data), end=len(train_data) + len(test_data) - 1)
forecast = model.get_forecast(steps=len(test_data))
conf_int = np.asarray(forecast.conf_int(alpha=0.05))
plt.figure(figsize=(20, 8))
plt.plot(range(1, len(train_values) + 1), train_values, label="train")
plt.plot(
range(len(train_values) + 1, len(train_values) + len(test_values) + 1),
test_values,
label="test",
)
plt.plot(
range(len(train_values) + 1, len(train_values) + len(test_values) + 1),
pred,
label="pred",
)
plt.fill_between(
range(len(train_values) + 1, len(train_values) + len(test_values) + 1),
conf_int[:, 0],
conf_int[:, 1],
color="lightskyblue",
alpha=0.3,
label="conf int",
)
plt.title("ARIMA za s=" + str(best_s))
plt.legend()
plt.show()
| false | 0 | 5,709 | 0 | 5,709 | 5,709 |
||
129435252
|
import numpy as np # linear algebra
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)
# Input data files are available in the read-only "../input/" directory
# For example, running this (by clicking run or pressing Shift+Enter) will list all files under the input directory
import os
for dirname, _, filenames in os.walk("/kaggle/input"):
for filename in filenames:
print(os.path.join(dirname, filename))
# You can write up to 20GB to the current directory (/kaggle/working/) that gets preserved as output when you create a version using "Save & Run All"
# You can also write temporary files to /kaggle/temp/, but they won't be saved outside of the current session
root_path = "/kaggle/input/nlp-getting-started/"
train_data = pd.read_csv(root_path + "/train.csv")
train_labels = train_data["target"].to_numpy()
test_data = pd.read_csv(root_path + "/test.csv")
train_data.head()
test_data.head()
print(
"There are {} rows and {} columns in train".format(
train_data.shape[0], train_data.shape[1]
)
)
print(
"There are {} rows and {} columns in train".format(
test_data.shape[0], test_data.shape[1]
)
)
import seaborn as sns
import matplotlib.pyplot as plt
plt.figure(figsize=(4, 2))
test = train_data.target.value_counts()
sns.barplot(x=test.index, y=test.values)
plt.show()
#
# ## Data Cleaning
# As we know, Twitter tweets usually need cleaning before we move on to modelling. So we will do some basic cleaning: removing URLs, HTML tags, emojis, punctuation, and stopwords. Let's start.
#
df = train_data.copy()
df.shape
import re
import string
def remove_URL(text):
url = re.compile(r"https?://\S+|www\.\S+")
return url.sub(r"", text)
def remove_html(text):
html = re.compile(r"<.*?>")
return html.sub(r"", text)
# Reference : https://gist.github.com/slowkow/7a7f61f495e3dbb7e3d767f97bd7304b
def remove_emoji(text):
emoji_pattern = re.compile(
"["
"\U0001F600-\U0001F64F" # emoticons
"\U0001F300-\U0001F5FF" # symbols & pictographs
"\U0001F680-\U0001F6FF" # transport & map symbols
"\U0001F1E0-\U0001F1FF" # flags (iOS)
"\U00002702-\U000027B0"
"\U000024C2-\U0001F251"
"]+",
flags=re.UNICODE,
)
return emoji_pattern.sub(r"", text)
def remove_punct(text):
table = str.maketrans("", "", string.punctuation)
return text.translate(table)
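# A quick sanity check of the helpers on a made-up example tweet (the text below is illustrative only, not taken from the dataset):
sample = "Flood warning!!! see <b>update</b> at https://example.com " + "\U0001F6A8"
print(remove_punct(remove_emoji(remove_html(remove_URL(sample)))))  # URL, tag, emoji and punctuation are stripped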
df["text"] = df["text"].apply(lambda x: remove_URL(x))
df["text"] = df["text"].apply(lambda x: remove_html(x))
df["text"] = df["text"].apply(lambda x: remove_emoji(x))
df["text"] = df["text"].apply(lambda x: remove_punct(x))
## Removing Stopwords [a, the, an, in ....]
import nltk
nltk.download("stopwords")
from nltk.corpus import stopwords
stop = set(stopwords.words("english"))
def remove_stopwords(text):
filtered_text = [word.lower() for word in text.split() if word.lower() not in stop]
return " ".join(filtered_text)
df["text"] = df["text"].apply(lambda x: remove_stopwords(x))
df["text"].head()
from collections import Counter
max_len = 15
def count_words(text_arr):
count = Counter()
for text in text_arr:
for word in text.split():
count[word] += 1
return count
counts = count_words(df["text"])
unique_words = len(counts)
print(unique_words)
import tensorflow as tf
train_array = df["text"].to_numpy()
tokenizer = tf.keras.preprocessing.text.Tokenizer(num_words=unique_words)
tokenizer.fit_on_texts(train_array)
tokenized_words = tokenizer.index_word
train_sequences = tokenizer.texts_to_sequences(train_array)
train_sequences = tf.keras.preprocessing.sequence.pad_sequences(
train_sequences, maxlen=max_len, padding="post", truncating="post"
)
print(train_array.shape)
print(train_sequences.shape)
print(len(tokenized_words))
def decode_sequence(sequence):
text = " ".join([tokenized_words.get(idx, " ") for idx in sequence])
return text
print(decode_sequence(train_sequences[0]))
model = tf.keras.models.Sequential()
model.add(tf.keras.layers.Embedding(unique_words, 12, input_length=max_len))
model.add(tf.keras.layers.LSTM(64, return_sequences=True))
model.add(tf.keras.layers.LSTM(32))
# sigmoid keeps the output in [0, 1], which is what BinaryCrossentropy expects
model.add(tf.keras.layers.Dense(1, activation="sigmoid"))
model.summary()
optim = tf.keras.optimizers.Adam(learning_rate=0.01)
loss = tf.keras.losses.BinaryCrossentropy()
metric = ["accuracy"]
model.compile(optimizer=optim, loss=loss, metrics=metric)
epochs = 25
validation_split = 0.2
model.fit(
train_sequences, train_labels, epochs=epochs, validation_split=validation_split
)
test_array = test_data["text"].to_numpy()
test_array_ids = test_data["id"].to_numpy()
test_sequences = tokenizer.texts_to_sequences(test_array)
test_sequences = tf.keras.preprocessing.sequence.pad_sequences(
test_sequences, maxlen=max_len, padding="post", truncating="post"
)
prediction = model.predict(test_sequences)
prediction = [1 if value > 0.75 else 0 for value in prediction]
submission_dict = {
"id": test_array_ids,
"target": prediction,
}
submission_df = pd.DataFrame(submission_dict)
submission_df.to_csv("/kaggle/working/submission.csv", index=False)
|
/fsx/loubna/kaggle_data/kaggle-code-data/data/0129/435/129435252.ipynb
| null | null |
[{"Id": 129435252, "ScriptId": 38483982, "ParentScriptVersionId": NaN, "ScriptLanguageId": 9, "AuthorUserId": 5734991, "CreationDate": "05/13/2023 19:22:18", "VersionNumber": 1.0, "Title": "technical_ass_RAISE", "EvaluationDate": "05/13/2023", "IsChange": true, "TotalLines": 163.0, "LinesInsertedFromPrevious": 163.0, "LinesChangedFromPrevious": 0.0, "LinesUnchangedFromPrevious": 0.0, "LinesInsertedFromFork": NaN, "LinesDeletedFromFork": NaN, "LinesChangedFromFork": NaN, "LinesUnchangedFromFork": NaN, "TotalVotes": 0}]
| null | null | null | null |
import numpy as np # linear algebra
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)
# Input data files are available in the read-only "../input/" directory
# For example, running this (by clicking run or pressing Shift+Enter) will list all files under the input directory
import os
for dirname, _, filenames in os.walk("/kaggle/input"):
for filename in filenames:
print(os.path.join(dirname, filename))
# You can write up to 20GB to the current directory (/kaggle/working/) that gets preserved as output when you create a version using "Save & Run All"
# You can also write temporary files to /kaggle/temp/, but they won't be saved outside of the current session
root_path = "/kaggle/input/nlp-getting-started/"
train_data = pd.read_csv(root_path + "/train.csv")
train_labels = train_data["target"].to_numpy()
test_data = pd.read_csv(root_path + "/test.csv")
train_data.head()
test_data.head()
print(
"There are {} rows and {} columns in train".format(
train_data.shape[0], train_data.shape[1]
)
)
print(
"There are {} rows and {} columns in train".format(
test_data.shape[0], test_data.shape[1]
)
)
import seaborn as sns
import matplotlib.pyplot as plt
plt.figure(figsize=(4, 2))
test = train_data.target.value_counts()
sns.barplot(x=test.index, y=test.values)
plt.show()
#
# ## Data Cleaning
# As we know, Twitter tweets usually need cleaning before we move on to modelling. So we will do some basic cleaning: removing URLs, HTML tags, emojis, punctuation, and stopwords. Let's start.
#
df = train_data.copy()
df.shape
import re
import string
def remove_URL(text):
url = re.compile(r"https?://\S+|www\.\S+")
return url.sub(r"", text)
def remove_html(text):
html = re.compile(r"<.*?>")
return html.sub(r"", text)
# Reference : https://gist.github.com/slowkow/7a7f61f495e3dbb7e3d767f97bd7304b
def remove_emoji(text):
emoji_pattern = re.compile(
"["
"\U0001F600-\U0001F64F" # emoticons
"\U0001F300-\U0001F5FF" # symbols & pictographs
"\U0001F680-\U0001F6FF" # transport & map symbols
"\U0001F1E0-\U0001F1FF" # flags (iOS)
"\U00002702-\U000027B0"
"\U000024C2-\U0001F251"
"]+",
flags=re.UNICODE,
)
return emoji_pattern.sub(r"", text)
def remove_punct(text):
table = str.maketrans("", "", string.punctuation)
return text.translate(table)
df["text"] = df["text"].apply(lambda x: remove_URL(x))
df["text"] = df["text"].apply(lambda x: remove_html(x))
df["text"] = df["text"].apply(lambda x: remove_emoji(x))
df["text"] = df["text"].apply(lambda x: remove_punct(x))
## Removing Stopwords [a, the, an, in ....]
import nltk
nltk.download("stopwords")
from nltk.corpus import stopwords
stop = set(stopwords.words("english"))
def remove_stopwords(text):
filtered_text = [word.lower() for word in text.split() if word.lower() not in stop]
return " ".join(filtered_text)
df["text"] = df["text"].apply(lambda x: remove_stopwords(x))
df["text"].head()
from collections import Counter
max_len = 15
def count_words(text_arr):
count = Counter()
for text in text_arr:
for word in text.split():
count[word] += 1
return count
counts = count_words(df["text"])
unique_words = len(counts)
print(unique_words)
import tensorflow as tf
train_array = df["text"].to_numpy()
tokenizer = tf.keras.preprocessing.text.Tokenizer(num_words=unique_words)
tokenizer.fit_on_texts(train_array)
tokenized_words = tokenizer.index_word
train_sequences = tokenizer.texts_to_sequences(train_array)
train_sequences = tf.keras.preprocessing.sequence.pad_sequences(
train_sequences, maxlen=max_len, padding="post", truncating="post"
)
print(train_array.shape)
print(train_sequences.shape)
print(len(tokenized_words))
def decode_sequence(sequence):
text = " ".join([tokenized_words.get(idx, " ") for idx in sequence])
return text
print(decode_sequence(train_sequences[0]))
model = tf.keras.models.Sequential()
model.add(tf.keras.layers.Embedding(unique_words, 12, input_length=max_len))
model.add(tf.keras.layers.LSTM(64, return_sequences=True))
model.add(tf.keras.layers.LSTM(32))
# sigmoid keeps the output in [0, 1], which is what BinaryCrossentropy expects
model.add(tf.keras.layers.Dense(1, activation="sigmoid"))
model.summary()
optim = tf.keras.optimizers.Adam(learning_rate=0.01)
loss = tf.keras.losses.BinaryCrossentropy()
metric = ["accuracy"]
model.compile(optimizer=optim, loss=loss, metrics=metric)
epochs = 25
validation_split = 0.2
model.fit(
train_sequences, train_labels, epochs=epochs, validation_split=validation_split
)
test_array = test_data["text"].to_numpy()
test_array_ids = test_data["id"].to_numpy()
test_sequences = tokenizer.texts_to_sequences(test_array)
test_sequences = tf.keras.preprocessing.sequence.pad_sequences(
test_sequences, maxlen=max_len, padding="post", truncating="post"
)
prediction = model.predict(test_sequences)
prediction = [1 if value > 0.75 else 0 for value in prediction]
submission_dict = {
"id": test_array_ids,
"target": prediction,
}
submission_df = pd.DataFrame(submission_dict)
submission_df.to_csv("/kaggle/working/submission.csv", index=False)
| false | 0 | 1,729 | 0 | 1,729 | 1,729 |
||
129442837
|
<jupyter_start><jupyter_text>Song Lyrics
### Context
TXT files for Poetry Generation with Python
### Content
TXT files of lyrics and poems
Kaggle dataset identifier: poetry
<jupyter_script># **Open a file**
# **Read File**
# **Read File line by line**
# **Read file using Loop**
# **Read first n character**
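# The headings above have no accompanying code in this notebook; a minimal sketch covering them, assuming the poetry text file listed in this kernel's inputs is mounted at the usual path:
with open("../input/poetry/Kanye_West.txt", encoding="utf-8-sig") as fh:
    print(fh.read(25))  # read the first n characters
    fh.seek(0)
    print(fh.readline())  # read a single line
    fh.seek(0)
    for line in fh:  # read the remaining file line by line in a loop
        pass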
# **Read a Binary file**
fb = open("../input/rsna-miccai-png/test/00001/FLAIR/Image-100.png", "rb")
data = fb.read()
t = open("C:\\Users\\rakshitvig\\kaggle\\working\\test.txt")
|
/fsx/loubna/kaggle_data/kaggle-code-data/data/0129/442/129442837.ipynb
|
poetry
|
paultimothymooney
|
[{"Id": 129442837, "ScriptId": 25726453, "ParentScriptVersionId": NaN, "ScriptLanguageId": 9, "AuthorUserId": 8669781, "CreationDate": "05/13/2023 21:20:55", "VersionNumber": 1.0, "Title": "25. File Operations", "EvaluationDate": "05/13/2023", "IsChange": true, "TotalLines": 27.0, "LinesInsertedFromPrevious": 27.0, "LinesChangedFromPrevious": 0.0, "LinesUnchangedFromPrevious": 0.0, "LinesInsertedFromFork": NaN, "LinesDeletedFromFork": NaN, "LinesChangedFromFork": NaN, "LinesUnchangedFromFork": NaN, "TotalVotes": 0}]
|
[{"Id": 185496235, "KernelVersionId": 129442837, "SourceDatasetVersionId": 81739}, {"Id": 185496236, "KernelVersionId": 129442837, "SourceDatasetVersionId": 2425289}]
|
[{"Id": 81739, "DatasetId": 6776, "DatasourceVersionId": 84205, "CreatorUserId": 1314380, "LicenseName": "CC0: Public Domain", "CreationDate": "08/18/2018 18:47:12", "VersionNumber": 16.0, "Title": "Song Lyrics", "Slug": "poetry", "Subtitle": "Poetry and Lyrics (TXT files)", "Description": "### Context\n\nTXT files for Poetry Generation with Python\n\n### Content\n\nTXT files of lyrics and poems\n\n### Acknowledgements\n\nFree lyric hosting websites\n\n### Inspiration\n\nTXT files for Poetry Generation with Python", "VersionNotes": "20180818", "TotalCompressedBytes": 6784114.0, "TotalUncompressedBytes": 6784114.0}]
|
[{"Id": 6776, "CreatorUserId": 1314380, "OwnerUserId": 1314380.0, "OwnerOrganizationId": NaN, "CurrentDatasetVersionId": 81739.0, "CurrentDatasourceVersionId": 84205.0, "ForumId": 13382, "Type": 2, "CreationDate": "12/11/2017 16:32:57", "LastActivityDate": "02/04/2018", "TotalViews": 109768, "TotalDownloads": 6157, "TotalVotes": 84, "TotalKernels": 103}]
|
[{"Id": 1314380, "UserName": "paultimothymooney", "DisplayName": "Paul Mooney", "RegisterDate": "10/05/2017", "PerformanceTier": 5}]
|
# **Open a file**
# **Read File**
# **Read File line by line**
# **Read file using Loop**
# **Read first n character**
# **Read a Binary file**
fb = open("../input/rsna-miccai-png/test/00001/FLAIR/Image-100.png", "rb")
data = fb.read()
t = open("C:\\Users\\rakshitvig\\kaggle\\working\\test.txt")
| false | 0 | 116 | 0 | 163 | 116 |
||
129442940
|
# we will use bio python to read the sequences
from Bio import SeqIO
# numpy to work with arrays
import numpy as np
# plotly to plot the graphs
import plotly.graph_objects as go
# Counter to count stuff
from collections import Counter
# path to the train and test fasta files
train_fasta = (
"/kaggle/input/cafa-5-protein-function-prediction/Train/train_sequences.fasta"
)
test_fasta = (
"/kaggle/input/cafa-5-protein-function-prediction/Test (Targets)/testsuperset.fasta"
)
# read train and test fasta files
train_sequences = SeqIO.parse(train_fasta, "fasta")
test_sequences = SeqIO.parse(test_fasta, "fasta")
# take a look at the first sequence
print("First sequence in train fasta file:")
print(next(train_sequences))
# put ids and sequences in separate numpy arrays
train_ids = np.array(
[seq.id for seq in SeqIO.parse(train_fasta, "fasta")], dtype=object
)
train_sequences = np.array(
[seq.seq for seq in SeqIO.parse(train_fasta, "fasta")], dtype=object
)
test_ids = np.array([seq.id for seq in SeqIO.parse(test_fasta, "fasta")], dtype=object)
test_sequences = np.array(
[seq.seq for seq in SeqIO.parse(test_fasta, "fasta")], dtype=object
)
# basic info: how many ids and sequences are there in train and test fasta files
# how many unique ids and sequences are there in train and test fasta files
print("Train fasta file:")
print("Number of ids: ", len(train_ids))
print("Number of sequences: ", len(train_sequences))
print("Number of unique ids: ", len(np.unique(train_ids)))
print("Number of unique sequences: ", len(np.unique(train_sequences)))
print("Test fasta file:")
print("Number of ids: ", len(test_ids))
print("Number of sequences: ", len(test_sequences))
print("Number of unique ids: ", len(np.unique(test_ids)))
print("Number of unique sequences: ", len(np.unique(test_sequences)))
# it seems that there are ids which have the same sequence
# put only unique sequences in a numpy array
unique_train_sequences = np.unique(train_sequences)
unique_test_sequences = np.unique(test_sequences)
unique_train_sequences.shape, unique_test_sequences.shape
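# The comments above note that several ids map to the same sequence; a quick count of how many training sequences are shared by more than one id (sketch):
shared = sum(1 for c in Counter(str(s) for s in train_sequences).values() if c > 1)
print("train sequences shared by multiple ids:", shared)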
# plot the distribution of sequence lengths in train and test fasta files
fig = go.Figure()
fig.add_trace(
go.Histogram(x=[len(seq) for seq in train_sequences], name="Train", opacity=0.5)
)
fig.add_trace(
go.Histogram(x=[len(seq) for seq in test_sequences], name="Test", opacity=0.5)
)
fig.update_layout(
title="Distribution of sequence lengths in train and test fasta files",
xaxis_title="Sequence length",
yaxis_title="Count",
bargap=0.2,
bargroupgap=0.1,
)
fig.show()
# we see that the distribution roughly agrees for train and test fasta files
# plot again but cut off at the 95th percentile to see the distribution better
fig = go.Figure()
fig.add_trace(
go.Histogram(x=[len(seq) for seq in train_sequences], name="Train", opacity=0.5)
)
fig.add_trace(
go.Histogram(x=[len(seq) for seq in test_sequences], name="Test", opacity=0.5)
)
fig.update_layout(
title="Distribution of sequence lengths in train and test fasta files",
xaxis_title="Sequence length",
yaxis_title="Count",
bargap=0.2,
bargroupgap=0.1,
xaxis_range=[0, np.percentile([len(seq) for seq in train_sequences], 95)],
)
fig.show()
# the sequence arrays contain sequences
# convert them to strings to count the number of each amino acid in the sequences
train_sequences = np.array([str(seq) for seq in train_sequences], dtype=object)
test_sequences = np.array([str(seq) for seq in test_sequences], dtype=object)
# count the number of each amino acid in the sequences
train_aa_counts = Counter("".join(train_sequences))
test_aa_counts = Counter("".join(test_sequences))
# sort the amino acids by their counts
train_aa_counts = {
k: v
for k, v in sorted(train_aa_counts.items(), key=lambda item: item[1], reverse=True)
}
test_aa_counts = {
k: v
for k, v in sorted(test_aa_counts.items(), key=lambda item: item[1], reverse=True)
}
# plot the amino acid log counts
fig = go.Figure()
fig.add_trace(
go.Bar(
x=list(train_aa_counts.keys()),
y=np.log(list(train_aa_counts.values())),
name="Train",
opacity=0.5,
)
)
fig.add_trace(
go.Bar(
x=list(test_aa_counts.keys()),
y=np.log(list(test_aa_counts.values())),
name="Test",
opacity=0.5,
)
)
fig.update_layout(
title="Log counts of amino acids in train and test fasta files",
xaxis_title="Amino acid",
yaxis_title="Log count",
bargap=0.2,
bargroupgap=0.1,
)
fig.show()
# we see that the distribution roughly agrees for train and test fasta files
# we have the following values for the amino acids:
# use counter to get the letters
print("Amino acids in train fasta file: ", train_aa_counts.keys())
# number of amino acids
print("Number of amino acids in train fasta file: ", len(train_aa_counts.keys()))
# dict from the letters to their name
amino_acid_dict = {
"A": "Alanine",
"R": "Arginine",
"N": "Asparagine",
"D": "Aspartic Acid",
"C": "Cysteine",
"E": "Glutamic Acid",
"Q": "Glutamine",
"G": "Glycine",
"H": "Histidine",
"I": "Isoleucine",
"L": "Leucine",
"K": "Lysine",
"M": "Methionine",
"F": "Phenylalanine",
"P": "Proline",
"S": "Serine",
"T": "Threonine",
"W": "Tryptophan",
"Y": "Tyrosine",
"V": "Valine",
"X": "Any/Unknown",
"O": "Pyrrolysine",
"U": "Selenocysteine",
"B": "Asparagine or Aspartic Acid",
"Z": "Glutamine or Glutamic Acid",
}
# redo the plot but with the amino acid names
fig = go.Figure()
fig.add_trace(
go.Bar(
x=[amino_acid_dict[aa] for aa in train_aa_counts.keys()],
y=np.log(list(train_aa_counts.values())),
name="Train",
opacity=0.5,
)
)
fig.add_trace(
go.Bar(
x=[amino_acid_dict[aa] for aa in test_aa_counts.keys()],
y=np.log(list(test_aa_counts.values())),
name="Test",
opacity=0.5,
)
)
fig.update_layout(
title="Log counts of amino acids in train and test fasta files",
xaxis_title="Amino acid",
yaxis_title="Log count",
bargap=0.2,
bargroupgap=0.1,
)
fig.show()
|
/fsx/loubna/kaggle_data/kaggle-code-data/data/0129/442/129442940.ipynb
| null | null |
[{"Id": 129442940, "ScriptId": 38488230, "ParentScriptVersionId": NaN, "ScriptLanguageId": 9, "AuthorUserId": 11646918, "CreationDate": "05/13/2023 21:22:29", "VersionNumber": 1.0, "Title": "Basic EDA of sequences", "EvaluationDate": "05/13/2023", "IsChange": true, "TotalLines": 138.0, "LinesInsertedFromPrevious": 138.0, "LinesChangedFromPrevious": 0.0, "LinesUnchangedFromPrevious": 0.0, "LinesInsertedFromFork": NaN, "LinesDeletedFromFork": NaN, "LinesChangedFromFork": NaN, "LinesUnchangedFromFork": NaN, "TotalVotes": 14}]
| null | null | null | null |
# we will use bio python to read the sequences
from Bio import SeqIO
# numpy to work with arrays
import numpy as np
# plotly to plot the graphs
import plotly.graph_objects as go
# Counter to count stuff
from collections import Counter
# path to the train and test fasta files
train_fasta = (
"/kaggle/input/cafa-5-protein-function-prediction/Train/train_sequences.fasta"
)
test_fasta = (
"/kaggle/input/cafa-5-protein-function-prediction/Test (Targets)/testsuperset.fasta"
)
# read train and test fasta files
train_sequences = SeqIO.parse(train_fasta, "fasta")
test_sequences = SeqIO.parse(test_fasta, "fasta")
# take a look at the first sequence
print("First sequence in train fasta file:")
print(next(train_sequences))
# put ids and sequences in separate numpy arrays
train_ids = np.array(
[seq.id for seq in SeqIO.parse(train_fasta, "fasta")], dtype=object
)
train_sequences = np.array(
[seq.seq for seq in SeqIO.parse(train_fasta, "fasta")], dtype=object
)
test_ids = np.array([seq.id for seq in SeqIO.parse(test_fasta, "fasta")], dtype=object)
test_sequences = np.array(
[seq.seq for seq in SeqIO.parse(test_fasta, "fasta")], dtype=object
)
# basic info: how many ids and sequences are there in train and test fasta files
# how many unique ids and sequences are there in train and test fasta files
print("Train fasta file:")
print("Number of ids: ", len(train_ids))
print("Number of sequences: ", len(train_sequences))
print("Number of unique ids: ", len(np.unique(train_ids)))
print("Number of unique sequences: ", len(np.unique(train_sequences)))
print("Test fasta file:")
print("Number of ids: ", len(test_ids))
print("Number of sequences: ", len(test_sequences))
print("Number of unique ids: ", len(np.unique(test_ids)))
print("Number of unique sequences: ", len(np.unique(test_sequences)))
# it seems that there are ids which have the same sequence
# put only unique sequences in a numpy array
unique_train_sequences = np.unique(train_sequences)
unique_test_sequences = np.unique(test_sequences)
unique_train_sequences.shape, unique_test_sequences.shape
# plot the distribution of sequence lengths in train and test fasta files
fig = go.Figure()
fig.add_trace(
go.Histogram(x=[len(seq) for seq in train_sequences], name="Train", opacity=0.5)
)
fig.add_trace(
go.Histogram(x=[len(seq) for seq in test_sequences], name="Test", opacity=0.5)
)
fig.update_layout(
title="Distribution of sequence lengths in train and test fasta files",
xaxis_title="Sequence length",
yaxis_title="Count",
bargap=0.2,
bargroupgap=0.1,
)
fig.show()
# we see that the distribution roughly agrees for train and test fasta files
# plot again but cut off at the 95th percentile to see the distribution better
fig = go.Figure()
fig.add_trace(
go.Histogram(x=[len(seq) for seq in train_sequences], name="Train", opacity=0.5)
)
fig.add_trace(
go.Histogram(x=[len(seq) for seq in test_sequences], name="Test", opacity=0.5)
)
fig.update_layout(
title="Distribution of sequence lengths in train and test fasta files",
xaxis_title="Sequence length",
yaxis_title="Count",
bargap=0.2,
bargroupgap=0.1,
xaxis_range=[0, np.percentile([len(seq) for seq in train_sequences], 95)],
)
fig.show()
# the sequence arrays contain sequences
# convert them to strings to count the number of each amino acid in the sequences
train_sequences = np.array([str(seq) for seq in train_sequences], dtype=object)
test_sequences = np.array([str(seq) for seq in test_sequences], dtype=object)
# count the number of each amino acid in the sequences
train_aa_counts = Counter("".join(train_sequences))
test_aa_counts = Counter("".join(test_sequences))
# sort the amino acids by their counts
train_aa_counts = {
k: v
for k, v in sorted(train_aa_counts.items(), key=lambda item: item[1], reverse=True)
}
test_aa_counts = {
k: v
for k, v in sorted(test_aa_counts.items(), key=lambda item: item[1], reverse=True)
}
# plot the amino acid log counts
fig = go.Figure()
fig.add_trace(
go.Bar(
x=list(train_aa_counts.keys()),
y=np.log(list(train_aa_counts.values())),
name="Train",
opacity=0.5,
)
)
fig.add_trace(
go.Bar(
x=list(test_aa_counts.keys()),
y=np.log(list(test_aa_counts.values())),
name="Test",
opacity=0.5,
)
)
fig.update_layout(
title="Log counts of amino acids in train and test fasta files",
xaxis_title="Amino acid",
yaxis_title="Log count",
bargap=0.2,
bargroupgap=0.1,
)
fig.show()
# we see that the distribution rougly agrees for train and test fasta files
# we have the following values for the amino acids:
# use counter to get the letters
print("Amino acids in train fasta file: ", train_aa_counts.keys())
# number of amino acids
print("Number of amino acids in train fasta file: ", len(train_aa_counts.keys()))
# dict from the letters to their name
amino_acid_dict = {
"A": "Alanine",
"R": "Arginine",
"N": "Asparagine",
"D": "Aspartic Acid",
"C": "Cysteine",
"E": "Glutamic Acid",
"Q": "Glutamine",
"G": "Glycine",
"H": "Histidine",
"I": "Isoleucine",
"L": "Leucine",
"K": "Lysine",
"M": "Methionine",
"F": "Phenylalanine",
"P": "Proline",
"S": "Serine",
"T": "Threonine",
"W": "Tryptophan",
"Y": "Tyrosine",
"V": "Valine",
"X": "Any/Unknown",
"O": "Pyrrolysine",
"U": "Selenocysteine",
"B": "Asparagine or Aspartic Acid",
"Z": "Glutamine or Glutamic Acid",
}
# redo the plot but with the amino acid names
fig = go.Figure()
fig.add_trace(
go.Bar(
x=[amino_acid_dict[aa] for aa in train_aa_counts.keys()],
y=np.log(list(train_aa_counts.values())),
name="Train",
opacity=0.5,
)
)
fig.add_trace(
go.Bar(
x=[amino_acid_dict[aa] for aa in test_aa_counts.keys()],
y=np.log(list(test_aa_counts.values())),
name="Test",
opacity=0.5,
)
)
fig.update_layout(
title="Log counts of amino acids in train and test fasta files",
xaxis_title="Amino acid",
yaxis_title="Log count",
bargap=0.2,
bargroupgap=0.1,
)
fig.show()
| false | 0 | 2,005 | 14 | 2,005 | 2,005 |
||
129442936
|
<jupyter_start><jupyter_text>CIFAR-10 PNGs in folders
### Context
This dataset is only here for convenience. The original dataset in binary form can be found at https://www.cs.toronto.edu/~kriz/cifar.html
And the dataset in ImageNet format (each class is a subfolder) can be found at
https://course.fast.ai/datasets
### Content
From the description on the dataset's home page,
"The CIFAR-10 dataset consists of 60000 32x32 colour images in 10 classes, with 6000 images per class. There are 50000 training images and 10000 test images. Here are the classes in the dataset: airplane, automobile, bird, cat, deer, dog, frog, horse, ship, truck "
Cover photo by Ethan McArthur on Unsplash
Kaggle dataset identifier: cifar10-pngs-in-folders
<jupyter_script>f = open("../input/poetry/Kanye_West.txt")
help(open)
# **Reading the file**
f.read()
# **print function**
print("My name is Rakshit", end="\n")
print("We are learning Python")
help(print)
f.close()
# **read file using print function**
print(f.read())
# **read file using loop**
for i in f:
print(i, end="\n")
# **Read line by line**
f.readline()
# **Other Important methods**
f = open("../input/poetry/Kanye_West.txt", encoding="utf-8-sig")
f.close()
help(open)
f.read(8)
f.read(4)
f.seek(0)
f.close()
# **Default Arguments in open()**
help(open)
# **'r' - for reading the file (default)**
# **'w' - for writing to the file. Creates a new file if it does not exist - overwrites the existing content**
# **'a' - append content to the end of the file**
# **'t' - opens in text mode (default)**
# **'b' - opens in binary mode**
f.close()
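# A small added sketch (not in the original notebook): open a throwaway file named
# "mode_demo.txt" (a name made up for this example) in a few modes and check what
# each mode allows via readable()/writable()
with open("mode_demo.txt", "w") as demo:
    demo.write("demo")
for demo_mode in ["r", "a", "rb"]:
    demo_f = open("mode_demo.txt", demo_mode)
    print(demo_mode, "readable:", demo_f.readable(), "writable:", demo_f.writable())
    demo_f.close()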
# **Write to the file**
f = open("test.txt", mode="w+") # 'w+' opens for writing and lets us read back what we wrote
f.write("This is my second line")
f.seek(0) # move back to the start before reading
print(f.read())
f.close()
# **Append to the file**
f = open("test.txt", mode="a+") # 'a+' appends to the end and lets us read the file back
f.write("Going good")
f.seek(0) # move back to the start before reading
print(f.read())
f.close()
# **Binary Files** - Non-text file. Images are stored as binary files.
# The mode in the open function should be 'rb' (read binary)
img = open("../input/cifar10-pngs-in-folders/cifar10/test/airplane/0001.png", "rb")
img.readline()
store = img.read() # store the content of img to variable store
a = open("airplane.png", "wb")
a.write(store)
a.close()
img.close()
# **Exception handling** - It is important to close our file to make sure that the resources get freed up. Also, there might be data loss in case we don't close the file
try:
f = open("../input/poetry/Kanye_West.txt")
1 / 0
f.close() # this will not get executed
except:
print("There is an issue in the try block")
finally:
f.close()
f.readline() # this raises ValueError because the file was already closed in the finally block
with open("python.txt", "w") as f: # f is the file handler/variabkle
f.write(
"Hi i am learning python"
) # once we get out of this block, the file is automatically close
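# Quick check (added for illustration): after leaving the with block the handle is closed
print(f.closed) # True - the context manager closed the file for us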
1 + 2
x = open("./python.txt", "w")
x.write("Hi i am learning pythin")
x.close()
|
/fsx/loubna/kaggle_data/kaggle-code-data/data/0129/442/129442936.ipynb
|
cifar10-pngs-in-folders
|
swaroopkml
|
[{"Id": 129442936, "ScriptId": 26584504, "ParentScriptVersionId": NaN, "ScriptLanguageId": 9, "AuthorUserId": 8669781, "CreationDate": "05/13/2023 21:22:24", "VersionNumber": 1.0, "Title": "26. File Operation", "EvaluationDate": "05/13/2023", "IsChange": true, "TotalLines": 125.0, "LinesInsertedFromPrevious": 125.0, "LinesChangedFromPrevious": 0.0, "LinesUnchangedFromPrevious": 0.0, "LinesInsertedFromFork": NaN, "LinesDeletedFromFork": NaN, "LinesChangedFromFork": NaN, "LinesUnchangedFromFork": NaN, "TotalVotes": 0}]
|
[{"Id": 185496373, "KernelVersionId": 129442936, "SourceDatasetVersionId": 283795}, {"Id": 185496372, "KernelVersionId": 129442936, "SourceDatasetVersionId": 81739}]
|
[{"Id": 283795, "DatasetId": 118250, "DatasourceVersionId": 296256, "CreatorUserId": 1930552, "LicenseName": "Unknown", "CreationDate": "02/10/2019 11:16:19", "VersionNumber": 1.0, "Title": "CIFAR-10 PNGs in folders", "Slug": "cifar10-pngs-in-folders", "Subtitle": "The CIFAR 10 dataset as a bunch of PNGs", "Description": "### Context\nThis dataset is only here for convenience. The original dataset in binary form can be found at https://www.cs.toronto.edu/~kriz/cifar.html \nAnd the dataset in ImageNet format (each class is a subfolder) can be found at \nhttps://course.fast.ai/datasets\n\n### Content\n\nFrom the description on the dataset's home page,\n\"The CIFAR-10 dataset consists of 60000 32x32 colour images in 10 classes, with 6000 images per class. There are 50000 training images and 10000 test images. Here are the classes in the dataset: airplane, automobile, bird, cat, deer, dog, frog, horse, ship, truck \"\n\nCover photo by Ethan McArthur on Unsplash", "VersionNotes": "Initial release", "TotalCompressedBytes": 146683706.0, "TotalUncompressedBytes": 146683706.0}]
|
[{"Id": 118250, "CreatorUserId": 1930552, "OwnerUserId": 1930552.0, "OwnerOrganizationId": NaN, "CurrentDatasetVersionId": 283795.0, "CurrentDatasourceVersionId": 296256.0, "ForumId": 128152, "Type": 2, "CreationDate": "02/10/2019 11:16:19", "LastActivityDate": "02/10/2019", "TotalViews": 31421, "TotalDownloads": 7774, "TotalVotes": 81, "TotalKernels": 107}]
|
[{"Id": 1930552, "UserName": "swaroopkml", "DisplayName": "Swaroop Kumar", "RegisterDate": "05/21/2018", "PerformanceTier": 0}]
|
f = open("../input/poetry/Kanye_West.txt")
help(open)
# **Reading the file**
f.read()
# **print function**
print("My name is Rakshit", end="\n")
print("We are learning Python")
help(print)
f.close()
# **read file using print function**
print(f.read())
# **read file using loop**
for i in f:
print(i, end="\n")
# **Read line by line**
f.readline()
# **Other Important methods**
f = open("../input/poetry/Kanye_West.txt", encoding="utf-8-sig")
f.close()
help(open)
f.read(8)
f.read(4)
f.seek(0)
f.close()
# **Default Arguments in open()**
help(open)
# **'r' - for reading the file (default)**
# **'w' - for writing to the file. Creates a new file if it does not exist - overwrites the existing content**
# **'a' - append content to the end of the file**
# **'t' - opens in text mode (default)**
# **'b' - opens in binary mode**
f.close()
# **Write to the file**
f = open("test.txt", mode="w+") # 'w+' opens for writing and lets us read back what we wrote
f.write("This is my second line")
f.seek(0) # move back to the start before reading
print(f.read())
f.close()
# **Append to the file**
f = open("test.txt", mode="a+") # 'a+' appends to the end and lets us read the file back
f.write("Going good")
f.seek(0) # move back to the start before reading
print(f.read())
f.close()
# **Binary Files** - Non-text file. Images are stored as binary files.
# The mode in the open function should be 'rb' (read binary)
img = open("../input/cifar10-pngs-in-folders/cifar10/test/airplane/0001.png", "rb")
img.readline()
store = img.read() # store the content of img to variable store
a = open("airplane.png", "wb")
a.write(store)
a.close()
img.close()
# **Exception handling** - It is important to close our file to make sure that the resources get freed up. Also, there might be data loss in case we don't close the file
try:
f = open("../input/poetry/Kanye_West.txt")
1 / 0
f.close() # this will not get executed
except:
print("There is an issue in the try block")
finally:
f.close()
f.readline() # this raises ValueError because the file was already closed in the finally block
with open("python.txt", "w") as f: # f is the file handler/variabkle
f.write(
"Hi i am learning python"
) # once we get out of this block, the file is automatically close
1 + 2
x = open("./python.txt", "w")
x.write("Hi i am learning pythin")
x.close()
| false | 0 | 720 | 0 | 954 | 720 |
||
129442537
|
# **Issues/problems in program because of which program stops abruptly or give the wrong result**
# **3 Categories:**
# * Syntax Errors
# * Runtime Errors
# * Logical Errors
# **Syntax**
"""
a = 90
print("The value of a is ", a
"""
"""
if a > 80
print("Yes")
"""
"""
i = 10
whlie i>0:
print(i)
i = i-1
"""
# **Runtime** - also known as exceptions
"""
num = int(input("Enter the value of numerator"))
den = int(input("Enter the value of denominator"))
c = num / den
print(c)
"""
# **Logical Error**
a = 90
b = 30
c = 60
f = a + b + c / 3
print(f)
if a < 80:
print("Yes")
else:
print("No")
# **Different types of run time exception**
# Index Error
marks = [53, 76, 43, 86, 33]
# marks[5]
# Key Error
d = {"India": 7897643, "USA": 365454, "Chian": 7686978967}
# d["Russia"]
# Module not found Error
# import panda
# Type Error
"""
a = 56
b = "6"
"""
"""
c = a+b
print(c)
"""
# Name Error
# print(op)
# Zero Division error
# a = 45/0
# **Exception Handling** - Python provides us a way to handle the exceptions so that our code can be executed without getting halted. We can control the flow of our code using EH
"""
a = int(input("Enter the first number"))
b = int(input("Enter the second number"))
c = (a+b)/2
print(c)
print("this is some code which needs to get executed")
"""
"""
try:
a = int(input("Enter the first number"))
b = int(input("Enter the second number"))
c = a/b
print(c)
except: #this will get executed if we have a exception in try block
print("The value entered is not correct")
print("the next lines of code will get executed")
"""
"""
try:
a = int(input("Enter the first number"))
b = int(input("Enter the second number"))
c = a/b
print(c)
except Exception as e: #this will get executed if we have a exception in try block
print("The value entered is not correct")
print(e)
print("the next lines of code will get executed")
"""
"""
try:
a = int(input("Enter the first number"))
b = int(input("Enter the second number"))
c = a/b
print(c)
except ZeroDivisionError: #this will get executed if we have a exception in try block
print("The denominator cannot be zero")
except NameError:
print("We need to initializa the variable first")
else: #this will get executed if we dont have any error in try block
print("Your input is perfectly fine")
finally: #this block of code will get executed no matter what happens
print("This will always get executed")
print("Now lets move onto the next set of code")
print("This is somethiing imp")
"""
# **Lambda function**
"""
addition = lambda a,b:a+b
addition(2,5)
"""
|
/fsx/loubna/kaggle_data/kaggle-code-data/data/0129/442/129442537.ipynb
| null | null |
[{"Id": 129442537, "ScriptId": 25324690, "ParentScriptVersionId": NaN, "ScriptLanguageId": 9, "AuthorUserId": 8669781, "CreationDate": "05/13/2023 21:16:23", "VersionNumber": 5.0, "Title": "21 Errors & Exception", "EvaluationDate": "05/13/2023", "IsChange": true, "TotalLines": 159.0, "LinesInsertedFromPrevious": 2.0, "LinesChangedFromPrevious": 0.0, "LinesUnchangedFromPrevious": 157.0, "LinesInsertedFromFork": NaN, "LinesDeletedFromFork": NaN, "LinesChangedFromFork": NaN, "LinesUnchangedFromFork": NaN, "TotalVotes": 1}]
| null | null | null | null |
# **Issues/problems in program because of which program stops abruptly or give the wrong result**
# **3 Categories:**
# * Syntax Errors
# * Runtime Errors
# * Logical Errors
# **Syntax**
"""
a = 90
print("The value of a is ", a
"""
"""
if a > 80
print("Yes")
"""
"""
i = 10
whlie i>0:
print(i)
i = i-1
"""
# **Runtime** - also known as exceptions
"""
num = int(input("Enter the value of numerator"))
den = int(input("Enter the value of denominator"))
c = num / den
print(c)
"""
# **Logical Error**
a = 90
b = 30
c = 60
f = a + b + c / 3
print(f)
if a < 80:
print("Yes")
else:
print("No")
# **Different types of run time exception**
# Index Error
marks = [53, 76, 43, 86, 33]
# marks[5]
# Key Error
d = {"India": 7897643, "USA": 365454, "Chian": 7686978967}
# d["Russia"]
# Module not found Error
# import panda
# Type Error
"""
a = 56
b = "6"
"""
"""
c = a+b
print(c)
"""
# Name Error
# print(op)
# Zero Division error
# a = 45/0
# **Exception Handling** - Python provides us a way to handle the exceptions so that our code can be executed without getting halted. We can control the flow of our code using EH
"""
a = int(input("Enter the first number"))
b = int(input("Enter the second number"))
c = (a+b)/2
print(c)
print("this is some code which needs to get executed")
"""
"""
try:
a = int(input("Enter the first number"))
b = int(input("Enter the second number"))
c = a/b
print(c)
except: #this will get executed if we have a exception in try block
print("The value entered is not correct")
print("the next lines of code will get executed")
"""
"""
try:
a = int(input("Enter the first number"))
b = int(input("Enter the second number"))
c = a/b
print(c)
except Exception as e: #this will get executed if we have a exception in try block
print("The value entered is not correct")
print(e)
print("the next lines of code will get executed")
"""
"""
try:
a = int(input("Enter the first number"))
b = int(input("Enter the second number"))
c = a/b
print(c)
except ZeroDivisionError: #this will get executed if we have a exception in try block
print("The denominator cannot be zero")
except NameError:
print("We need to initializa the variable first")
else: #this will get executed if we dont have any error in try block
print("Your input is perfectly fine")
finally: #this block of code will get executed no matter what happens
print("This will always get executed")
print("Now lets move onto the next set of code")
print("This is somethiing imp")
"""
# **Lambda function**
"""
addition = lambda a,b:a+b
addition(2,5)
"""
| false | 0 | 860 | 1 | 860 | 860 |
||
129442949
|
# [https://www.w3schools.com/python/module_math.asp](http://)
import math
# **Constants**
math.e
round(math.pi, 2)
# **Round off decimals**
a = -4.5
math.ceil(a) # this rounds up to the next higher integer (ceil(-4.5) == -4)
a = 3.4
math.floor(a)
math.floor(a) # this will round off to the next lower integer
b = 3.9
math.trunc(b)
# **Exponential & Logarithmic functions**
x = 3
math.exp(x) # e to the power of 3
math.log(1000) # by default the base is e
math.log(1000, 10) # now the base is 10
# **Trigonometric Function**
degree = 90
math.sin(math.radians(degree))
# **Mathematical Function**
math.sqrt(4)
math.factorial(5) # 5x4x3x2x1
l = [1.2, 2.3, 4.6, 5.5]
math.fsum(l)
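# Why fsum instead of the built-in sum? fsum tracks partial sums exactly, so it avoids
# floating-point rounding drift (small added sketch, not in the original notebook)
vals = [0.1] * 10
print(sum(vals))  # 0.9999999999999999 - accumulated rounding error
print(math.fsum(vals))  # 1.0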
|
/fsx/loubna/kaggle_data/kaggle-code-data/data/0129/442/129442949.ipynb
| null | null |
[{"Id": 129442949, "ScriptId": 26853102, "ParentScriptVersionId": NaN, "ScriptLanguageId": 9, "AuthorUserId": 8669781, "CreationDate": "05/13/2023 21:22:38", "VersionNumber": 1.0, "Title": "27. Math module", "EvaluationDate": "05/13/2023", "IsChange": true, "TotalLines": 45.0, "LinesInsertedFromPrevious": 45.0, "LinesChangedFromPrevious": 0.0, "LinesUnchangedFromPrevious": 0.0, "LinesInsertedFromFork": NaN, "LinesDeletedFromFork": NaN, "LinesChangedFromFork": NaN, "LinesUnchangedFromFork": NaN, "TotalVotes": 1}]
| null | null | null | null |
# [https://www.w3schools.com/python/module_math.asp](http://)
import math
# **Constants**
math.e
round(math.pi, 2)
# **Round off decimals**
a = -4.5
math.ceil(a) # this rounds up to the next higher integer (ceil(-4.5) == -4)
a = 3.4
math.floor(a)
math.floor(a) # this will round off to the next lower integer
b = 3.9
math.trunc(b)
# **Exponential & Logarithmic functions**
x = 3
math.exp(x) # e to the power of 3
math.log(1000) # by default the base is e
math.log(1000, 10) # now the base is 10
# **Trigonometric Function**
degree = 90
math.sin(math.radians(degree))
# **Mathematical Function**
math.sqrt(4)
math.factorial(5) # 5x4x3x2x1
l = [1.2, 2.3, 4.6, 5.5]
math.fsum(l)
| false | 0 | 294 | 1 | 294 | 294 |
||
129442966
|
# **Numpy** - is a library that :-
# * provides us an optimized way to store data
# * Easier to handle the data
# * It is fast and takes less memory
# * It is easier to learn numpy
# * Memory management can be done as it is more closer to hardware
# * A lot of mathematical operations can be performed on data stored in numpy array
import numpy as np
# Dimensional Array
# * Data in numpy is stored in numpy array
# * Numpy array is a collection/grid of data
# * Data stored in numpy array should be of same data type
# * Array are contiguous memory location
# 1 D Array is also known as a **vector**
# 2 D Array is known as **matrix**
# n-D Array is known as **tensor**
# **1D Array**
# Using array method
import numpy as np
a = np.array([2, 3, 8, 5])
a
# **Implicit Conversion**
b = np.array([1, 2, 3, 4.5])
b
# Using arange method
c = np.arange(10, 20, 3) # arange(start, stop (exclusive), step)
c
# Memory management
#
d = np.array([1, 2, 300, 4], dtype=np.int16)
d
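# Small added sketch: the dtype controls how much memory each element takes, which is
# the point of choosing int16 above
print(d.dtype)  # int16
print(d.itemsize)  # 2 bytes per element
print(d.nbytes)  # 8 bytes for the whole 4-element array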
help(np.array)
# Using linspace method
e = np.linspace(1, 10, 20) # linspace(first, last(inclusive), count of numbers)
e
# Attributes - shape & dimension
a
a.ndim
a.shape
# **2-D Array**
# Create 2-D Array using array function
a = np.array([[1.4, 2.5, 3.6, 4.8], [5, 6, 7, 8], [55, 45, 37, 89]])
a
# Attributes
a.ndim # it will give information about the number of dimension
a.shape
# 2D Array using Methods
# Using ones method
import numpy as np
a = np.ones([3, 5])
a
# Using zeroes method
b = np.zeros([5, 6])
b
# Using eye method
c = np.eye(4, 2) # ones on the main diagonal; a square eye(n) is the identity matrix
c
# Using diag method
d = np.diag([1, 2, 3])
d
# Extracting diagonals
np.diag(d)
# **3-D Array**
#
import numpy as np
a = np.array(
[
[[311, 312, 313], [321, 322, 323], [331, 332, 333]],
[[211, 212, 213], [221, 222, 223], [231, 232, 233]],
[[111, 112, 113], [121, 122, 123], [131, 132, 133]],
]
)
a
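# Small added sketch: a 3-D array is indexed as a[block, row, column]
print(a.ndim)  # 3
print(a.shape)  # (3, 3, 3)
print(a[0, 1, 2])  # 323 - block 0, row 1, column 2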
|
/fsx/loubna/kaggle_data/kaggle-code-data/data/0129/442/129442966.ipynb
| null | null |
[{"Id": 129442966, "ScriptId": 26890741, "ParentScriptVersionId": NaN, "ScriptLanguageId": 9, "AuthorUserId": 8669781, "CreationDate": "05/13/2023 21:22:55", "VersionNumber": 1.0, "Title": "28. Numpy", "EvaluationDate": "05/13/2023", "IsChange": true, "TotalLines": 144.0, "LinesInsertedFromPrevious": 144.0, "LinesChangedFromPrevious": 0.0, "LinesUnchangedFromPrevious": 0.0, "LinesInsertedFromFork": NaN, "LinesDeletedFromFork": NaN, "LinesChangedFromFork": NaN, "LinesUnchangedFromFork": NaN, "TotalVotes": 1}]
| null | null | null | null |
# **Numpy** - is a library that :-
# * provides us an optimized way to store data
# * Easier to handle the data
# * It is fast and takes less memory
# * It is easier to learn numpy
# * Memory management can be done as it is more closer to hardware
# * A lot of mathematical operations can be performed on data stored in numpy array
import numpy as np
# Dimensional Array
# * Data in numpy is stored in numpy array
# * Numpy array is a collection/grid of data
# * Data stored in numpy array should be of same data type
# * Array are contiguous memory location
# 1 D Array is also known as a **vector**
# 2 D Array is known as **matrix**
# n-D Array is known as **tensor**
# **1D Array**
# Using array method
import numpy as np
a = np.array([2, 3, 8, 5])
a
# **Implicit Conversion**
b = np.array([1, 2, 3, 4.5])
b
# Using arange method
c = np.arange(10, 20, 3) # arange(start, stop (exclusive), step)
c
# Memory management
#
d = np.array([1, 2, 300, 4], dtype=np.int16)
d
help(np.array)
# Using linspace method
e = np.linspace(1, 10, 20) # linspace(first, last(inclusive), count of numbers)
e
# Attributes - shape & dimension
a
a.ndim
a.shape
# **2-D Array**
# Create 2-D Array using array function
a = np.array([[1.4, 2.5, 3.6, 4.8], [5, 6, 7, 8], [55, 45, 37, 89]])
a
# Attributes
a.ndim # it will give information about the number of dimension
a.shape
# 2D Array using Methods
# Using ones method
import numpy as np
a = np.ones([3, 5])
a
# Using zeroes method
b = np.zeros([5, 6])
b
# Using eye method
c = np.eye(4, 2) # ones on the main diagonal; a square eye(n) is the identity matrix
c
# Using diag method
d = np.diag([1, 2, 3])
d
# Extracting diagonals
np.diag(d)
# **3-D Array**
#
import numpy as np
a = np.array(
[
[[311, 312, 313], [321, 322, 323], [331, 332, 333]],
[[211, 212, 213], [221, 222, 223], [231, 232, 233]],
[[111, 112, 113], [121, 122, 123], [131, 132, 133]],
]
)
a
| false | 0 | 780 | 1 | 780 | 780 |
||
129818595
|
<jupyter_start><jupyter_text>VoterPersuasionDataset
Kaggle dataset identifier: voterpersuasiondataset
<jupyter_script># # Political Persuasion
# ## Import Libraries
import pandas as pd
import numpy as np
import matplotlib.pylab as plot
import warnings
import seaborn as sns
import scipy.stats as scistat
import math
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_curve, auc
from dmba import classificationSummary
from lightgbm import LGBMClassifier
from sklearn.tree import export_graphviz
from sklearn import tree
warnings.filterwarnings("ignore", category=DeprecationWarning)
warnings.simplefilter("ignore")
pd.set_option("display.max_columns", None)
pd.set_option("display.max_rows", 500)
# ## Read Data
data = pd.read_csv(r"/kaggle/input/voterpersuasiondataset/Voter-Persuasion.csv")
data.head()
data.info()
# #### The shape of data
data.shape
# #### Data Columns
data.columns
# #### Check Duplicated Values
data.duplicated().any()
# #### Check Missing Values
data.isna().sum()
data.describe()
data[["MESSAGE_A"]].value_counts()
data["MESSAGE_A_REV"].value_counts()
data["MOVED_A"].value_counts()
data["opposite"].value_counts()
# ## Data Preprocessing
data.drop(["VOTER_ID", "MOVED_A", "opposite", "MESSAGE_A_REV"], axis=1, inplace=True)
data["MOVED_AD"] = data["MOVED_AD"].replace(
{"N": 0, "Y": 1}
) # Change N to 0 and Y to 1
data.head()
# MESSAGE_A is the column that shows whether a voter got the flyer or not. 1 represents getting the flyer and 0 represents not getting the flyer
# Overall, how well did the flyer do in moving voters in a Democratic direction? (We compare the target variable among those who got the flyer with those who did not.)
flyer = data[(data["MESSAGE_A"] == 1) & (data["MOVED_AD"] == 1)]
flyer
per = (flyer.shape[0] / data.shape[0]) * 100
print(
"The percentage of voters who got the flyer and moved is: "
+ str(round(per, 2))
+ "%"
)
no_flyer = data[(data["MESSAGE_A"] == 0) & (data["MOVED_AD"] == 1)]
no_flyer
per2 = (no_flyer.shape[0] / data.shape[0]) * 100
print(
"The percentage of voters who did not get the flyer and moved is: "
+ str(round(per2, 2))
+ "%"
)
# The flyer appears to have done a good job: about 20% of all voters both got the flyer and moved, versus about 17% who moved without getting it. (Both percentages are relative to the full dataset; the normalized cross-tab below gives the within-group rates.)
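# Small added sketch: normalize the cross-tab within each MESSAGE_A group to get the
# rate of moving conditional on receiving (or not receiving) the flyer
print(pd.crosstab(data["MESSAGE_A"], data["MOVED_AD"], normalize="index"))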
# ## Exploratory Data Analysis (EDA)
# Side-by-side boxplots are useful in classification tasks for evaluating the potential of numerical predictors. This is done by using the x-axis for the categorical outcome and the y-axis for a numerical predictor. The first set of examples shown below helps us to see the effects of SET_NO, OPP_SEX, AGE, HH_ND, HH_NR, HH_NI, MED_AGE, NH_WHITE, NH_AA, NH_ASIAN on MOVED_AD. These pairs do not clearly separate the outcome variable so we will use the correlation plot to select potentially useful variables
fig, axes = plot.subplots(nrows=1, ncols=10, figsize=(23, 5))
data.boxplot(column="SET_NO", by="MOVED_AD", ax=axes[0])
data.boxplot(column="OPP_SEX", by="MOVED_AD", ax=axes[1])
data.boxplot(column="AGE", by="MOVED_AD", ax=axes[2])
data.boxplot(column="HH_ND", by="MOVED_AD", ax=axes[3])
data.boxplot(column="HH_NR", by="MOVED_AD", ax=axes[4])
data.boxplot(column="HH_NI", by="MOVED_AD", ax=axes[5])
data.boxplot(column="MED_AGE", by="MOVED_AD", ax=axes[6])
data.boxplot(column="NH_WHITE", by="MOVED_AD", ax=axes[7])
data.boxplot(column="NH_AA", by="MOVED_AD", ax=axes[8])
data.boxplot(column="NH_ASIAN", by="MOVED_AD", ax=axes[9])
for ax in axes:
ax.set_xlabel("MOVED_AD")
# ### Correlation Analysis & Feature Selection
numerical = data.drop(
["CAND1S", "CAND2S", "CAND1_UND", "CAND2_UND", "I3", "Partition"], axis=1
)
categorical = data.filter(
["CAND1S", "CAND2S", "CAND1_UND", "CAND2_UND", "I3", "Partition"]
)
cat_numerical = pd.get_dummies(categorical, drop_first=True)
cat_numerical.head()
data = pd.concat([numerical, cat_numerical], axis=1)
data.head()
corr_data = data.corr()
corr_data
plt.figure(figsize=(5, 20))
heatmap = sns.heatmap(
corr_data[["MOVED_AD"]].sort_values(by="MOVED_AD", ascending=False),
vmin=-1,
vmax=1,
annot=True,
cmap="BrBG",
)
heatmap.set_title(
"Features Correlating with MOVED_A", fontdict={"fontsize": 18}, pad=16
)
# Testing for measures of central tendency, shape and spread among selected predictors
# The getdistprops function takes a series and generates measures of central tendency, shape, and spread. The function returns a dictionary with these measures. It also handles situations where the Shapiro test for normality does not return a value. It will not add keys for normstat and normpvalue when that happens.
def getdistprops(seriestotest):
out = {}
normstat, normpvalue = scistat.shapiro(seriestotest)
if not math.isnan(normstat):
out["normstat"] = normstat
if normpvalue >= 0.05:
out["normpvalue"] = str(round(normpvalue, 2)) + ":Accept Normal"
elif normpvalue < 0.05:
out["normpvalue"] = str(round(normpvalue, 2)) + ": Reject Normal"
out["mean"] = seriestotest.mean()
out["median"] = seriestotest.median()
out["std"] = seriestotest.std()
out["kurtosis"] = seriestotest.kurtosis()
out["skew"] = seriestotest.skew()
out["count"] = seriestotest.count()
return out
dist_hhnd = getdistprops(data.HH_ND)
print(dist_hhnd)
sns.distplot(data.HH_ND)
plot.title("Distribution plot for HH_ND")
plot.show()
# For HH_ND, the skew and kurtosis values suggest that its distribution has slightly positive skew and fatter tails than a normally distributed variable. The Shapiro test of normality (normpvalue) confirms this. The HH_ND variable has less variability and is leptokurtic. It is also multimodal, i.e., it has multiple peaks
dist_nhwhite = getdistprops(data.NH_WHITE)
print(dist_nhwhite)
sns.distplot(data.NH_WHITE)
plot.title("Distribution plot for NH_WHITE")
plot.show()
# For NH_WHITE, the skew value suggests that its distribution has slightly negative skew. It is flattened, skewed to the left and dispersed. Therefore, we can say that NH_WHITE is platykurtic and multimodal.
dist_partyr = getdistprops(data.PARTY_R)
print(dist_partyr)
sns.distplot(data.PARTY_R)
plot.title("Distribution plot for PARTY_R")
plot.show()
# For PARTY_R, the skew value suggests that its distribution has slightly positive skew. It is flattened, dispersed and bimodal. The PARTY_R variable is platykurtic.
dist_vpp_08 = getdistprops(data.VPP_08)
print(dist_vpp_08)
sns.distplot(data.VPP_08)
plot.title("Distribution plot for VPP_08")
plot.show()
# For VPP_08, the skew value suggests that its distribution has slightly positive skew. It is flattened, dispersed and bimodal. The VPP_08 variable is platykurtic.
dist_upscale = getdistprops(data.UPSCALEMAL)
print(dist_upscale)
sns.distplot(data.UPSCALEMAL)
plot.title("Distribution plot for UPSCALEMAL")
plot.show()
# The skew and kurtosis values suggest that the distribution of UPSCALEMAL has significantly positive skew and fatter tails than a normally distributed variable. It is leptokurtic. The Shapiro test of normality(normpvalue) confirms this.
dist_mess_a = getdistprops(data.MESSAGE_A)
print(dist_mess_a)
sns.distplot(data.MESSAGE_A)
plot.title("Distribution plot for MESSAGE_A")
plot.show()
# For MESSAGE_A, the skew value suggests that its distribution is neither positive nor negative, which means it is perfectly symmetrical. It is flattened, dispersed and bimodal. The MESSAGE_A variable is platykurtic.
dist_cand1s_s = getdistprops(data.CAND1S_S)
print(dist_cand1s_s)
sns.distplot(data.CAND1S_S)
plot.title("Distribution plot for CAND1S_S")
plot.show()
# For CAND1S_S, the skew value suggests that its distribution has slightly negative skew. It is flattened, skewed to the left and dispersed. Therefore, we can say that CAND1S_S is platykurtic and bimodal.
dist_cand2s_s = getdistprops(data.CAND2S_S)
print(dist_cand2s_s)
sns.distplot(data.CAND2S_S)
plot.title("Distribution plot for CAND2S_S")
plot.show()
# For CAND2S_S, the skew value suggests that its distribution has slightly negative skew. It peaks sharply with fat tails. Therefore, we can say that CAND2S_S is leptokurtic and bimodal, and it has less variability.
dist_cand1_undy = getdistprops(data.CAND1_UND_Y)
print(dist_cand1_undy)
sns.distplot(data.CAND1_UND_Y)
plot.title("Distribution plot for CAND1_UND_Y")
plot.show()
# For CAND1_UND_Y, the skew value suggests that its distribution has slightly positive skew. It is flattened and highly dispersed. Therefore, we can say that CAND1_UND_Y is platykurtic and bimodal.
# ## Modelling and Judging Classifier Performance
variables = [
"HH_ND",
"NH_WHITE",
"HH_NR",
"PARTY_R",
"VPP_08",
"UPSCALEMAL",
"MESSAGE_A",
"CAND1S_S",
"CAND2S_S",
"CAND1_UND_Y",
]
X = data[variables]
y = data["MOVED_AD"]
train_X, valid_X, train_y, valid_y = train_test_split(
X, y, test_size=0.4, random_state=1, stratify=y
)
lgbm_model = LGBMClassifier(num_leaves=3, reg_alpha=10, reg_lambda=5)
lgbm_model.fit(train_X, train_y, eval_set=[(train_X, train_y), (valid_X, valid_y)])
fea_imp = pd.DataFrame({"imp": lgbm_model.feature_importances_, "col": X.columns})
fea_imp = fea_imp.sort_values(["imp", "col"], ascending=[True, False]).iloc[-5:]
_ = fea_imp.plot(kind="barh", x="col", y="imp", figsize=(7, 3))
plot.title("LightGbm_Feature_Importance")
plot.show()
# For the lightgbm classifier, NH_WHITE is the most important feature for predicting MOVED_AD
lgbm_model_pred = lgbm_model.predict(valid_X)
lgbm_model_proba = lgbm_model.predict_proba(valid_X)
lgbm_model_proba_1 = lgbm_model_proba[:, 1]
classificationSummary(valid_y, lgbm_model_pred)
valid_y = pd.DataFrame(valid_y)
lgbm_model_proba_1 = pd.DataFrame(lgbm_model_proba_1)
# compute ROC curve and AUC
fpr, tpr, _ = roc_curve(valid_y, lgbm_model_proba_1)
roc_auc = auc(fpr, tpr)
plot.figure(figsize=[5, 5])
lw = 2
plot.plot(
fpr, tpr, color="darkorange", lw=lw, label="ROC curve (area = %0.4f)" % roc_auc
)
plot.plot([0, 1], [0, 1], color="navy", lw=lw, linestyle="--")
plot.title("ROC Curve for LightGbm Classifier")
plot.xlim([0.0, 1.0])
plot.ylim([0.0, 1.05])
plot.xlabel("False Positive Rate (1 - Specificity)")
plot.ylabel("True Positive Rate (Sensitivity)")
plot.legend(loc="lower right")
plot.show()
# ### train model with a decision tree classifier
smallClassTree = tree.DecisionTreeClassifier(
max_depth=30, min_samples_split=20, min_impurity_decrease=0.01
)
smallClassTree.fit(train_X, train_y)
plot.figure(figsize=(8, 8))
tree.plot_tree(smallClassTree, feature_names=train_X.columns, filled=True)
plot.show()
fea_imp = pd.DataFrame({"imp": smallClassTree.feature_importances_, "col": X.columns})
fea_imp = fea_imp.sort_values(["imp", "col"], ascending=[True, False]).iloc[-3:]
_ = fea_imp.plot(kind="barh", x="col", y="imp", figsize=(7, 3))
plot.title("DecisionTree_Feature_Importance")
plot.show()
# For the decision tree model, CAND1S_S is the most important feature for predicting MOVED_AD
tree_pred = smallClassTree.predict(valid_X)
tree_pred_proba = smallClassTree.predict_proba(valid_X)
tree_pred_proba_1 = tree_pred_proba[:, 1]
classificationSummary(valid_y, tree_pred)
valid_y = pd.DataFrame(valid_y)
tree_pred_proba_1 = pd.DataFrame(tree_pred_proba_1)
# compute ROC curve and AUC
fpr, tpr, _ = roc_curve(valid_y, tree_pred_proba_1)
roc_auc = auc(fpr, tpr)
plot.figure(figsize=[5, 5])
lw = 2
plot.plot(
fpr, tpr, color="darkorange", lw=lw, label="ROC curve (area = %0.4f)" % roc_auc
)
plot.plot([0, 1], [0, 1], color="navy", lw=lw, linestyle="--")
plot.title("ROC Curve for Decision Tree Classifier")
plot.xlim([0.0, 1.0])
plot.ylim([0.0, 1.05])
plot.xlabel("False Positive Rate (1 - Specificity)")
plot.ylabel("True Positive Rate (Sensitivity)")
plot.legend(loc="lower right")
plot.show()
# In terms of predictive power, the best model is the LightGBM classifier model. It was chosen because it has the highest AUC score.
lgbm_model_proba = pd.DataFrame(lgbm_model_proba)
# Using a cut-off of 0.5, we report the propensities for the first three records in the validation set for the lightgbm model.
lgbm_model_proba.head(3)
# The first record has 3.12% propensity of moving in favour of a democratic candidate. The second record has 2.78% propensity of moving in favour of a democratic candidate. The third record has 80.72% propensity of moving in favour of a democratic candidate.
rvariables = [
"HH_ND",
"NH_WHITE",
"HH_NR",
"PARTY_R",
"VPP_08",
"UPSCALEMAL",
"MESSAGE_A",
"CAND1S_S",
"CAND2S_S",
"CAND1_UND_Y",
]
A = data[rvariables]
b = data["MOVED_AD"]
train_A, valid_A, train_b, valid_b = train_test_split(
A, b, test_size=0.4, random_state=1
)
rlgbm_model = LGBMClassifier(num_leaves=3, reg_alpha=10, reg_lambda=5)
rlgbm_model.fit(train_A, train_b, eval_set=[(train_A, train_b), (valid_A, valid_b)])
rlgbm_pred = rlgbm_model.predict(valid_A)
rlgbm_pred_proba = rlgbm_model.predict_proba(valid_A)
rlgbm_pred_proba_1 = rlgbm_pred_proba[:, 1]
classificationSummary(valid_b, rlgbm_pred)
valid_b = pd.DataFrame(valid_b)
rlgbm_pred_proba_1 = pd.DataFrame(rlgbm_pred_proba_1)
# compute ROC curve and AUC
fpr, tpr, _ = roc_curve(valid_b, rlgbm_pred_proba_1)
roc_auc = auc(fpr, tpr)
plot.figure(figsize=[5, 5])
lw = 2
plot.plot(
fpr, tpr, color="darkorange", lw=lw, label="ROC curve (area = %0.4f)" % roc_auc
)
plot.plot([0, 1], [0, 1], color="navy", lw=lw, linestyle="--")
plot.title("ROC score for lightGbm when MESSAGE_A_REV is used")
plot.xlim([0.0, 1.0])
plot.ylim([0.0, 1.05])
plot.xlabel("False Positive Rate (1 - Specificity)")
plot.ylabel("True Positive Rate (Sensitivity)")
plot.legend(loc="lower right")
plot.show()
rlgbm_pred_proba = pd.DataFrame(rlgbm_pred_proba)
rlgbm_pred_proba.head(3)
# The first record has 66% propensity of moving in favour of a democratic candidate. The second record has 1.66% propensity of moving in favour of a democratic candidate. The third record has 80.38% propensity of moving in favour of a democratic candidate.
# We compute the uplift for each of the voters in the validation set, and report the uplift for the first three records
uplift_df = valid_X.copy() # Need to create a copy to allow modifying data
uplift_df.MESSAGE_A = 1
predTreatment = lgbm_model.predict_proba(uplift_df)
uplift_df.MESSAGE_A = 0
predControl = lgbm_model.predict_proba(uplift_df)
upliftResult_df = pd.DataFrame(
{
"probMessage": predTreatment[:, 1],
"probNoMessage": predControl[:, 1],
"uplift": predTreatment[:, 1] - predControl[:, 1],
},
index=uplift_df.index,
)
print(upliftResult_df.head(3))
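# Small follow-up sketch (added for illustration): rank validation-set voters by predicted
# uplift, i.e. the voters the flyer is most likely to move
top_uplift = upliftResult_df.sort_values("uplift", ascending=False)
print(top_uplift.head(10))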
|
/fsx/loubna/kaggle_data/kaggle-code-data/data/0129/818/129818595.ipynb
|
voterpersuasiondataset
|
aakarkale
|
[{"Id": 129818595, "ScriptId": 38608886, "ParentScriptVersionId": NaN, "ScriptLanguageId": 9, "AuthorUserId": 10470986, "CreationDate": "05/16/2023 17:29:13", "VersionNumber": 2.0, "Title": "Political Persuasion", "EvaluationDate": "05/16/2023", "IsChange": false, "TotalLines": 386.0, "LinesInsertedFromPrevious": 0.0, "LinesChangedFromPrevious": 0.0, "LinesUnchangedFromPrevious": 386.0, "LinesInsertedFromFork": NaN, "LinesDeletedFromFork": NaN, "LinesChangedFromFork": NaN, "LinesUnchangedFromFork": NaN, "TotalVotes": 0}]
|
[{"Id": 186194444, "KernelVersionId": 129818595, "SourceDatasetVersionId": 1230015}]
|
[{"Id": 1230015, "DatasetId": 704063, "DatasourceVersionId": 1261598, "CreatorUserId": 2459355, "LicenseName": "Unknown", "CreationDate": "06/09/2020 18:36:47", "VersionNumber": 1.0, "Title": "VoterPersuasionDataset", "Slug": "voterpersuasiondataset", "Subtitle": "Voter Persuasion Dataset", "Description": NaN, "VersionNotes": "Initial release", "TotalCompressedBytes": 0.0, "TotalUncompressedBytes": 0.0}]
|
[{"Id": 704063, "CreatorUserId": 2459355, "OwnerUserId": 2459355.0, "OwnerOrganizationId": NaN, "CurrentDatasetVersionId": 1230015.0, "CurrentDatasourceVersionId": 1261598.0, "ForumId": 718757, "Type": 2, "CreationDate": "06/09/2020 18:36:47", "LastActivityDate": "06/09/2020", "TotalViews": 3482, "TotalDownloads": 85, "TotalVotes": 3, "TotalKernels": 5}]
|
[{"Id": 2459355, "UserName": "aakarkale", "DisplayName": "Aakar Kale", "RegisterDate": "11/07/2018", "PerformanceTier": 1}]
|
# # Political Persuasion
# ## Import Libraries
import pandas as pd
import numpy as np
import matplotlib.pylab as plot
import warnings
import seaborn as sns
import scipy.stats as scistat
import math
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_curve, auc
from dmba import classificationSummary
from lightgbm import LGBMClassifier
from sklearn.tree import export_graphviz
from sklearn import tree
warnings.filterwarnings("ignore", category=DeprecationWarning)
warnings.simplefilter("ignore")
pd.set_option("display.max_columns", None)
pd.set_option("display.max_rows", 500)
# ## Read Data
data = pd.read_csv(r"/kaggle/input/voterpersuasiondataset/Voter-Persuasion.csv")
data.head()
data.info()
# #### The shape of data
data.shape
# #### Data Columns
data.columns
# #### Check Duplicated Values
data.duplicated().any()
# #### Check Missing Values
data.isna().sum()
data.describe()
data[["MESSAGE_A"]].value_counts()
data["MESSAGE_A_REV"].value_counts()
data["MOVED_A"].value_counts()
data["opposite"].value_counts()
# ## Data Preprocessing
data.drop(["VOTER_ID", "MOVED_A", "opposite", "MESSAGE_A_REV"], axis=1, inplace=True)
data["MOVED_AD"] = data["MOVED_AD"].replace(
{"N": 0, "Y": 1}
) # Change N to 0 and Y to 1
data.head()
# MESSAGE_A is the column that shows whether a voter got the flyer or not. 1 represents getting the flyer and 0 represents not getting the flyer
# Overall, how well did the flyer do in moving voters in a Democratic direction? (We compare the target variable among those who got the flyer with those who did not.)
flyer = data[(data["MESSAGE_A"] == 1) & (data["MOVED_AD"] == 1)]
flyer
per = (flyer.shape[0] / data.shape[0]) * 100
print(
"The percentage of voters who got the flyer and moved is: "
+ str(round(per, 2))
+ "%"
)
no_flyer = data[(data["MESSAGE_A"] == 0) & (data["MOVED_AD"] == 1)]
no_flyer
per2 = (no_flyer.shape[0] / data.shape[0]) * 100
print(
"The percentage of voters who did not get the flyer and moved is: "
+ str(round(per2, 2))
+ "%"
)
# The flyer appears to have done a good job: about 20% of all voters both got the flyer and moved, versus about 17% who moved without getting it (both percentages are relative to the full dataset, not within each group).
# ## Exploratory Data Analysis (EDA)
# Side-by-side boxplots are useful in classification tasks for evaluating the potential of numerical predictors. This is done by using the x-axis for the categorical outcome and the y-axis for a numerical predictor. The first set of examples shown below helps us to see the effects of SET_NO, OPP_SEX, AGE, HH_ND, HH_NR, HH_NI, MED_AGE, NH_WHITE, NH_AA, NH_ASIAN on MOVED_AD. These pairs do not clearly separate the outcome variable so we will use the correlation plot to select potentially useful variables
fig, axes = plot.subplots(nrows=1, ncols=10, figsize=(23, 5))
data.boxplot(column="SET_NO", by="MOVED_AD", ax=axes[0])
data.boxplot(column="OPP_SEX", by="MOVED_AD", ax=axes[1])
data.boxplot(column="AGE", by="MOVED_AD", ax=axes[2])
data.boxplot(column="HH_ND", by="MOVED_AD", ax=axes[3])
data.boxplot(column="HH_NR", by="MOVED_AD", ax=axes[4])
data.boxplot(column="HH_NI", by="MOVED_AD", ax=axes[5])
data.boxplot(column="MED_AGE", by="MOVED_AD", ax=axes[6])
data.boxplot(column="NH_WHITE", by="MOVED_AD", ax=axes[7])
data.boxplot(column="NH_AA", by="MOVED_AD", ax=axes[8])
data.boxplot(column="NH_ASIAN", by="MOVED_AD", ax=axes[9])
for ax in axes:
ax.set_xlabel("MOVED_AD")
# ### Correlation Analysis & Feature Selection
numerical = data.drop(
["CAND1S", "CAND2S", "CAND1_UND", "CAND2_UND", "I3", "Partition"], axis=1
)
categorical = data.filter(
["CAND1S", "CAND2S", "CAND1_UND", "CAND2_UND", "I3", "Partition"]
)
cat_numerical = pd.get_dummies(categorical, drop_first=True)
cat_numerical.head()
data = pd.concat([numerical, cat_numerical], axis=1)
data.head()
corr_data = data.corr()
corr_data
plt.figure(figsize=(5, 20))
heatmap = sns.heatmap(
corr_data[["MOVED_AD"]].sort_values(by="MOVED_AD", ascending=False),
vmin=-1,
vmax=1,
annot=True,
cmap="BrBG",
)
heatmap.set_title(
"Features Correlating with MOVED_A", fontdict={"fontsize": 18}, pad=16
)
# Testing for measures of central tendency, shape and spread among selected predictors
# The getdistprops function takes a series and generates measures of central tendency, shape, and spread. The function returns a dictionary with these measures. It also handles situations where the Shapiro test for normality does not return a value. It will not add keys for normstat and normpvalue when that happens.
def getdistprops(seriestotest):
out = {}
normstat, normpvalue = scistat.shapiro(seriestotest)
if not math.isnan(normstat):
out["normstat"] = normstat
if normpvalue >= 0.05:
out["normpvalue"] = str(round(normpvalue, 2)) + ":Accept Normal"
elif normpvalue < 0.05:
out["normpvalue"] = str(round(normpvalue, 2)) + ": Reject Normal"
out["mean"] = seriestotest.mean()
out["median"] = seriestotest.median()
out["std"] = seriestotest.std()
out["kurtosis"] = seriestotest.kurtosis()
out["skew"] = seriestotest.skew()
out["count"] = seriestotest.count()
return out
dist_hhnd = getdistprops(data.HH_ND)
print(dist_hhnd)
sns.distplot(data.HH_ND)
plot.title("Distribution plot for HH_ND")
plot.show()
# For HH_ND, the skew and kurtosis values suggest that its distribution has slightly positive skew and fatter tails than a normally distributed variable. The Shapiro test of normality (normpvalue) confirms this. The HH_ND variable has less variability and is leptokurtic. It is also multimodal, i.e., it has multiple peaks
dist_nhwhite = getdistprops(data.NH_WHITE)
print(dist_nhwhite)
sns.distplot(data.NH_WHITE)
plot.title("Distribution plot for NH_WHITE")
plot.show()
# For NH_WHITE, the skew value suggests that its distribution has slightly negative skew. It is flattened, skewed to the left and dispersed. Therefore, we can say that NH_WHITE is platykurtic and multimodal.
dist_partyr = getdistprops(data.PARTY_R)
print(dist_partyr)
sns.distplot(data.PARTY_R)
plot.title("Distribution plot for PARTY_R")
plot.show()
# For PARTY_R, the skew value suggests that its distribution has slightly positive skew. It is flattened, dispersed and bimodal. The PARTY_R variable is platykurtic.
dist_vpp_08 = getdistprops(data.VPP_08)
print(dist_vpp_08)
sns.distplot(data.VPP_08)
plot.title("Distribution plot for VPP_08")
plot.show()
# For VPP_08, the skew value suggests that its distribution has slightly positive skew. It is flattened, dispersed and bimodal. The VPP_08 variable is platykurtic.
dist_upscale = getdistprops(data.UPSCALEMAL)
print(dist_upscale)
sns.distplot(data.UPSCALEMAL)
plot.title("Distribution plot for UPSCALEMAL")
plot.show()
# The skew and kurtosis values suggest that the distribution of UPSCALEMAL has significantly positive skew and fatter tails than a normally distributed variable. It is leptokurtic. The Shapiro test of normality(normpvalue) confirms this.
dist_mess_a = getdistprops(data.MESSAGE_A)
print(dist_mess_a)
sns.distplot(data.MESSAGE_A)
plot.title("Distribution plot for MESSAGE_A")
plot.show()
# For MESSAGE_A, the skew value suggests that its distribution is neither positive nor negative, which means it is perfectly symmetrical. It is flattened, dispersed and bimodal. The MESSAGE_A variable is platykurtic.
dist_cand1s_s = getdistprops(data.CAND1S_S)
print(dist_cand1s_s)
sns.distplot(data.CAND1S_S)
plot.title("Distribution plot for CAND1S_S")
plot.show()
# For CAND1S_S, the skew value suggests that its distribution has slightly negative skew. It is flattened, skewed to the left and dispersed. Therefore, we can say that CAND1S_S is platykurtic and bimodal.
dist_cand2s_s = getdistprops(data.CAND2S_S)
print(dist_cand2s_s)
sns.distplot(data.CAND2S_S)
plot.title("Distribution plot for CAND2S_S")
plot.show()
# For CAND2S_S, the skew value suggests that its distribution has slightly negative skew. It peaks sharply with fat tails. Therefore, we can say that CAND2S_S is leptokurtic and bimodal, and it has less variability.
dist_cand1_undy = getdistprops(data.CAND1_UND_Y)
print(dist_cand1_undy)
sns.distplot(data.CAND1_UND_Y)
plot.title("Distribution plot for CAND1_UND_Y")
plot.show()
# For CAND1_UND_Y, the skew value suggests that its distribution has slightly positive skew. It is flattened and highly dispersed. Therefore, we can say that CAND1_UND_Y is platykurtic and bimodal.
# ## Modelling and Judging Classifier Performance
variables = [
"HH_ND",
"NH_WHITE",
"HH_NR",
"PARTY_R",
"VPP_08",
"UPSCALEMAL",
"MESSAGE_A",
"CAND1S_S",
"CAND2S_S",
"CAND1_UND_Y",
]
X = data[variables]
y = data["MOVED_AD"]
train_X, valid_X, train_y, valid_y = train_test_split(
X, y, test_size=0.4, random_state=1, stratify=y
)
lgbm_model = LGBMClassifier(num_leaves=3, reg_alpha=10, reg_lambda=5)
lgbm_model.fit(train_X, train_y, eval_set=[(train_X, train_y), (valid_X, valid_y)])
fea_imp = pd.DataFrame({"imp": lgbm_model.feature_importances_, "col": X.columns})
fea_imp = fea_imp.sort_values(["imp", "col"], ascending=[True, False]).iloc[-5:]
_ = fea_imp.plot(kind="barh", x="col", y="imp", figsize=(7, 3))
plot.title("LightGbm_Feature_Importance")
plot.show()
# For the lightgbm classifier, NH_WHITE is the most important feature for predicting MOVED_AD
lgbm_model_pred = lgbm_model.predict(valid_X)
lgbm_model_proba = lgbm_model.predict_proba(valid_X)
lgbm_model_proba_1 = lgbm_model_proba[:, 1]
classificationSummary(valid_y, lgbm_model_pred)
valid_y = pd.DataFrame(valid_y)
lgbm_model_proba_1 = pd.DataFrame(lgbm_model_proba_1)
# compute ROC curve and AUC
fpr, tpr, _ = roc_curve(valid_y, lgbm_model_proba_1)
roc_auc = auc(fpr, tpr)
plot.figure(figsize=[5, 5])
lw = 2
plot.plot(
fpr, tpr, color="darkorange", lw=lw, label="ROC curve (area = %0.4f)" % roc_auc
)
plot.plot([0, 1], [0, 1], color="navy", lw=lw, linestyle="--")
plot.title("ROC Curve for LightGbm Classifier")
plot.xlim([0.0, 1.0])
plot.ylim([0.0, 1.05])
plot.xlabel("False Positive Rate (1 - Specificity)")
plot.ylabel("True Positive Rate (Sensitivity)")
plot.legend(loc="lower right")
plot.show()
# ### train model with a decision tree classifier
smallClassTree = tree.DecisionTreeClassifier(
max_depth=30, min_samples_split=20, min_impurity_decrease=0.01
)
smallClassTree.fit(train_X, train_y)
plot.figure(figsize=(8, 8))
tree.plot_tree(smallClassTree, feature_names=train_X.columns, filled=True)
plot.show()
fea_imp = pd.DataFrame({"imp": smallClassTree.feature_importances_, "col": X.columns})
fea_imp = fea_imp.sort_values(["imp", "col"], ascending=[True, False]).iloc[-3:]
_ = fea_imp.plot(kind="barh", x="col", y="imp", figsize=(7, 3))
plot.title("DecisionTree_Feature_Importance")
plot.show()
# For the decision tree model, CAND1S_S is the most important feature for predicting MOVED_AD
tree_pred = smallClassTree.predict(valid_X)
tree_pred_proba = smallClassTree.predict_proba(valid_X)
tree_pred_proba_1 = tree_pred_proba[:, 1]
classificationSummary(valid_y, tree_pred)
valid_y = pd.DataFrame(valid_y)
tree_pred_proba_1 = pd.DataFrame(tree_pred_proba_1)
# compute ROC curve and AUC
fpr, tpr, _ = roc_curve(valid_y, tree_pred_proba_1)
roc_auc = auc(fpr, tpr)
plot.figure(figsize=[5, 5])
lw = 2
plot.plot(
fpr, tpr, color="darkorange", lw=lw, label="ROC curve (area = %0.4f)" % roc_auc
)
plot.plot([0, 1], [0, 1], color="navy", lw=lw, linestyle="--")
plot.title("ROC Curve for Decision Tree Classifier")
plot.xlim([0.0, 1.0])
plot.ylim([0.0, 1.05])
plot.xlabel("False Positive Rate (1 - Specificity)")
plot.ylabel("True Positive Rate (Sensitivity)")
plot.legend(loc="lower right")
plot.show()
# In terms of predictive power, the best model is the LightGBM classifier model. It was chosen because it has the highest AUC score.
lgbm_model_proba = pd.DataFrame(lgbm_model_proba)
# Using a cut-off of 0.5, we report the propensities for the first three records in the validation set for the lightgbm model.
lgbm_model_proba.head(3)
# The first record has 3.12% propensity of moving in favour of a democratic candidate. The second record has 2.78% propensity of moving in favour of a democratic candidate. The third record has 80.72% propensity of moving in favour of a democratic candidate.
rvariables = [
"HH_ND",
"NH_WHITE",
"HH_NR",
"PARTY_R",
"VPP_08",
"UPSCALEMAL",
"MESSAGE_A",
"CAND1S_S",
"CAND2S_S",
"CAND1_UND_Y",
]
A = data[rvariables]
b = data["MOVED_AD"]
train_A, valid_A, train_b, valid_b = train_test_split(
A, b, test_size=0.4, random_state=1
)
rlgbm_model = LGBMClassifier(num_leaves=3, reg_alpha=10, reg_lambda=5)
rlgbm_model.fit(train_A, train_b, eval_set=[(train_A, train_b), (valid_A, valid_b)])
rlgbm_pred = rlgbm_model.predict(valid_A)
rlgbm_pred_proba = rlgbm_model.predict_proba(valid_A)
rlgbm_pred_proba_1 = rlgbm_pred_proba[:, 1]
classificationSummary(valid_b, rlgbm_pred)
valid_b = pd.DataFrame(valid_b)
rlgbm_pred_proba_1 = pd.DataFrame(rlgbm_pred_proba_1)
# compute ROC curve and AUC
fpr, tpr, _ = roc_curve(valid_b, rlgbm_pred_proba_1)
roc_auc = auc(fpr, tpr)
plot.figure(figsize=[5, 5])
lw = 2
plot.plot(
fpr, tpr, color="darkorange", lw=lw, label="ROC curve (area = %0.4f)" % roc_auc
)
plot.plot([0, 1], [0, 1], color="navy", lw=lw, linestyle="--")
plot.title("ROC score for lightGbm when MESSAGE_A_REV is used")
plot.xlim([0.0, 1.0])
plot.ylim([0.0, 1.05])
plot.xlabel("False Positive Rate (1 - Specificity)")
plot.ylabel("True Positive Rate (Sensitivity)")
plot.legend(loc="lower right")
plot.show()
rlgbm_pred_proba = pd.DataFrame(rlgbm_pred_proba)
rlgbm_pred_proba.head(3)
# The first record has 66% propensity of moving in favour of a democratic candidate. The second record has 1.66% propensity of moving in favour of a democratic candidate. The third record has 80.38% propensity of moving in favour of a democratic candidate.
# We compute the uplift for each of the voters in the validation set, and report the uplift for the first three records
uplift_df = valid_X.copy() # Need to create a copy to allow modifying data
uplift_df.MESSAGE_A = 1
predTreatment = lgbm_model.predict_proba(uplift_df)
uplift_df.MESSAGE_A = 0
predControl = lgbm_model.predict_proba(uplift_df)
upliftResult_df = pd.DataFrame(
{
"probMessage": predTreatment[:, 1],
"probNoMessage": predControl[:, 1],
"uplift": predTreatment[:, 1] - predControl[:, 1],
},
index=uplift_df.index,
)
print(upliftResult_df.head(3))
| false | 1 | 5,120 | 0 | 5,145 | 5,120 |
||
129818644
|
import numpy as np # linear algebra
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)
# Input data files are available in the read-only "../input/" directory
# For example, running this (by clicking run or pressing Shift+Enter) will list all files under the input directory
import os
for dirname, _, filenames in os.walk("/kaggle/input"):
for filename in filenames:
print(os.path.join(dirname, filename))
# You can write up to 20GB to the current directory (/kaggle/working/) that gets preserved as output when you create a version using "Save & Run All"
# You can also write temporary files to /kaggle/temp/, but they won't be saved outside of the current session
train_data = pd.read_csv("/kaggle/input/titanic/train.csv")
train_data.head()
test_data = pd.read_csv("/kaggle/input/titanic/test.csv")
test_data.head()
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(
train_data.drop("Survived", axis=1), train_data["Survived"], test_size=0.25
)
X_train.head()
import matplotlib.pyplot as plt
import seaborn as sns
X_train.info()
object_cols = X_train.select_dtypes("object").columns
object_cols
sns.heatmap(X_train.isnull(), cmap="viridis")
from sklearn.base import BaseEstimator, TransformerMixin
plt.hist("Age", data=X_train, bins=40)
plt.show()
X_train.nunique()
sns.boxplot(data=X_train, y="Age", x="Embarked")
sns.boxplot(data=X_train, y="Age", x="Pclass")
X_train["Embarked"].dropna().mode()
class AgeImputer(BaseEstimator, TransformerMixin):
def fit(self, X, y=None):
return self
def transform(self, X):
mean_ages = X.groupby("Pclass")["Age"].mean()
X["Age"] = X["Age"].fillna(X["Pclass"].map(mean_ages))
return X
from sklearn.preprocessing import OneHotEncoder
class FeatureEncoder(BaseEstimator, TransformerMixin):
def fit(self, X, y=None):
return self
def transform(self, X):
encoder = OneHotEncoder(sparse_output=False, handle_unknown="ignore")
X["Embarked"].fillna("S", inplace=True)
OH_cols = pd.DataFrame(
encoder.fit_transform(X[["Embarked"]]),
columns=encoder.get_feature_names_out(),
)
OH_cols.index = X.index
X = pd.concat([X, OH_cols], axis=1)
X.drop("Embarked", axis=1, inplace=True)
OH_cols = pd.DataFrame(
encoder.fit_transform(X[["Sex"]]), columns=encoder.get_feature_names_out()
)
OH_cols.index = X.index
X = pd.concat([X, OH_cols], axis=1)
X.drop("Sex", axis=1, inplace=True)
return X
class ColumnDropper(BaseEstimator, TransformerMixin):
def fit(self, X, y=None):
return self
def transform(self, X):
return X.drop(["Cabin", "Name", "Ticket"], axis=1)
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
pipeline = Pipeline(
steps=[
("coldrop", ColumnDropper()),
("imputer", AgeImputer()),
("encoder", FeatureEncoder()),
]
)
test_train = X_train.copy()
test_test = X_test.copy()
# enc = FeatureEncoder()
# enc.fit_transform(test_train)
# enc.transform(test_test)
# test_train.info()
# test_test['Embarked'].isna().sum()
pipeline.fit_transform(test_train, y_train)
pipeline.transform(test_test)
X_train_new = pipeline.fit_transform(X_train, y_train)
X_test_new = pipeline.transform(X_test)
print(X_train_new.head())
X_test_new.head()
sns.heatmap(X_train_new.isnull(), cmap="viridis")
plt.show()
X_train_new.info()
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV
rfc = RandomForestClassifier()
param_grid = {
"n_estimators": [50, 100, 200, 500, 600],
"max_depth": [5, 10, 20],
"min_samples_split": [2, 5, 10, 15],
}
gridcv = GridSearchCV(estimator=rfc, param_grid=param_grid, verbose=3)
gridcv.fit(X_train_new, y_train)
gridcv.best_estimator_
gridcv_pred = gridcv.predict(X_test_new)
from sklearn.metrics import classification_report
print(classification_report(y_test, gridcv_pred))
new_train_data = train_data.copy()
new_train_data.head()
new_y_train = new_train_data["Survived"]
new_y_train.head()
new_train_data.drop("Survived", axis=1, inplace=True)
new_train_data.head()
new_train_data = pipeline.fit_transform(new_train_data, new_y_train)
param_grid = {
"n_estimators": [50, 100, 200, 500],
"max_depth": [None, 5, 10],
"min_samples_split": [2, 3, 4],
}
gridcv = GridSearchCV(
estimator=rfc, param_grid=param_grid, verbose=3, scoring="accuracy"
)
gridcv.fit(new_train_data, new_y_train)
print(gridcv.best_estimator_)
grid_pred = gridcv.predict(X_test_new)
print(classification_report(y_test, grid_pred))
print(gridcv.score(X_test_new, y_test))
final_test_data = pipeline.fit_transform(test_data)
X_final_test = final_test_data.fillna(method="ffill")
predictions = gridcv.predict(X_final_test)
output = pd.DataFrame({"PassengerId": test_data.PassengerId, "Survived": predictions})
output.to_csv("submission.csv", index=False)
print("Your submission was successfully saved!")
|
/fsx/loubna/kaggle_data/kaggle-code-data/data/0129/818/129818644.ipynb
| null | null |
[{"Id": 129818644, "ScriptId": 38404961, "ParentScriptVersionId": NaN, "ScriptLanguageId": 9, "AuthorUserId": 11509960, "CreationDate": "05/16/2023 17:29:48", "VersionNumber": 1.0, "Title": "Titanic Competition", "EvaluationDate": "05/16/2023", "IsChange": true, "TotalLines": 188.0, "LinesInsertedFromPrevious": 188.0, "LinesChangedFromPrevious": 0.0, "LinesUnchangedFromPrevious": 0.0, "LinesInsertedFromFork": NaN, "LinesDeletedFromFork": NaN, "LinesChangedFromFork": NaN, "LinesUnchangedFromFork": NaN, "TotalVotes": 2}]
| null | null | null | null |
import numpy as np # linear algebra
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)
# Input data files are available in the read-only "../input/" directory
# For example, running this (by clicking run or pressing Shift+Enter) will list all files under the input directory
import os
for dirname, _, filenames in os.walk("/kaggle/input"):
for filename in filenames:
print(os.path.join(dirname, filename))
# You can write up to 20GB to the current directory (/kaggle/working/) that gets preserved as output when you create a version using "Save & Run All"
# You can also write temporary files to /kaggle/temp/, but they won't be saved outside of the current session
train_data = pd.read_csv("/kaggle/input/titanic/train.csv")
train_data.head()
test_data = pd.read_csv("/kaggle/input/titanic/test.csv")
test_data.head()
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(
train_data.drop("Survived", axis=1), train_data["Survived"], test_size=0.25
)
X_train.head()
import matplotlib.pyplot as plt
import seaborn as sns
X_train.info()
object_cols = X_train.select_dtypes("object").columns
object_cols
sns.heatmap(X_train.isnull(), cmap="viridis")
from sklearn.base import BaseEstimator, TransformerMixin
plt.hist("Age", data=X_train, bins=40)
plt.show()
X_train.nunique()
sns.boxplot(data=X_train, y="Age", x="Embarked")
sns.boxplot(data=X_train, y="Age", x="Pclass")
X_train["Embarked"].dropna().mode()
class AgeImputer(BaseEstimator, TransformerMixin):
def fit(self, X, y=None):
return self
def transform(self, X):
mean_ages = X.groupby("Pclass")["Age"].mean()
X["Age"] = X["Age"].fillna(X["Pclass"].map(mean_ages))
return X
from sklearn.preprocessing import OneHotEncoder
class FeatureEncoder(BaseEstimator, TransformerMixin):
def fit(self, X, y=None):
return self
def transform(self, X):
encoder = OneHotEncoder(sparse_output=False, handle_unknown="ignore")
X["Embarked"].fillna("S", inplace=True)
OH_cols = pd.DataFrame(
encoder.fit_transform(X[["Embarked"]]),
columns=encoder.get_feature_names_out(),
)
OH_cols.index = X.index
X = pd.concat([X, OH_cols], axis=1)
X.drop("Embarked", axis=1, inplace=True)
OH_cols = pd.DataFrame(
encoder.fit_transform(X[["Sex"]]), columns=encoder.get_feature_names_out()
)
OH_cols.index = X.index
X = pd.concat([X, OH_cols], axis=1)
X.drop("Sex", axis=1, inplace=True)
return X
class ColumnDropper(BaseEstimator, TransformerMixin):
def fit(self, X, y=None):
return self
def transform(self, X):
return X.drop(["Cabin", "Name", "Ticket"], axis=1)
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
pipeline = Pipeline(
steps=[
("coldrop", ColumnDropper()),
("imputer", AgeImputer()),
("encoder", FeatureEncoder()),
]
)
test_train = X_train.copy()
test_test = X_test.copy()
# enc = FeatureEncoder()
# enc.fit_transform(test_train)
# enc.transform(test_test)
# test_train.info()
# test_test['Embarked'].isna().sum()
pipeline.fit_transform(test_train, y_train)
pipeline.transform(test_test)
X_train_new = pipeline.fit_transform(X_train, y_train)
X_test_new = pipeline.transform(X_test)
print(X_train_new.head())
X_test_new.head()
sns.heatmap(X_train_new.isnull(), cmap="viridis")
plt.show()
X_train_new.info()
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV
rfc = RandomForestClassifier()
param_grid = {
"n_estimators": [50, 100, 200, 500, 600],
"max_depth": [5, 10, 20],
"min_samples_split": [2, 5, 10, 15],
}
gridcv = GridSearchCV(estimator=rfc, param_grid=param_grid, verbose=3)
gridcv.fit(X_train_new, y_train)
gridcv.best_estimator_
gridcv_pred = gridcv.predict(X_test_new)
from sklearn.metrics import classification_report
print(classification_report(y_test, gridcv_pred))
new_train_data = train_data.copy()
new_train_data.head()
new_y_train = new_train_data["Survived"]
new_y_train.head()
new_train_data.drop("Survived", axis=1, inplace=True)
new_train_data.head()
new_train_data = pipeline.fit_transform(new_train_data, new_y_train)
param_grid = {
"n_estimators": [50, 100, 200, 500],
"max_depth": [None, 5, 10],
"min_samples_split": [2, 3, 4],
}
gridcv = GridSearchCV(
estimator=rfc, param_grid=param_grid, verbose=3, scoring="accuracy"
)
gridcv.fit(new_train_data, new_y_train)
print(gridcv.best_estimator_)
grid_pred = gridcv.predict(X_test_new)
print(classification_report(y_test, grid_pred))
print(gridcv.score(X_test_new, y_test))
final_test_data = pipeline.fit_transform(test_data)
X_final_test = final_test_data.fillna(method="ffill")
predictions = gridcv.predict(X_final_test)
output = pd.DataFrame({"PassengerId": test_data.PassengerId, "Survived": predictions})
output.to_csv("submission.csv", index=False)
print("Your submission was successfully saved!")
| false | 0 | 1,679 | 2 | 1,679 | 1,679 |
||
129609349
|
import numpy as np # linear algebra
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)
# Input data files are available in the read-only "../input/" directory
# For example, running this (by clicking run or pressing Shift+Enter) will list all files under the input directory
import os
for dirname, _, filenames in os.walk("/kaggle/input"):
for filename in filenames:
print(os.path.join(dirname, filename))
# You can write up to 20GB to the current directory (/kaggle/working/) that gets preserved as output when you create a version using "Save & Run All"
# You can also write temporary files to /kaggle/temp/, but they won't be saved outside of the current session
import warnings
warnings.filterwarnings("ignore")
from timeit import default_timer as timer
import tensorflow as tf
train_df = pd.read_csv(
"/kaggle/input/godaddy-microbusiness-density-forecasting/train.csv"
)
test_df = pd.read_csv(
"/kaggle/input/godaddy-microbusiness-density-forecasting/test.csv"
)
revealed_df = pd.read_csv(
"/kaggle/input/godaddy-microbusiness-density-forecasting/revealed_test.csv"
)
revealed_df.shape
total_df = pd.concat([train_df, revealed_df])
cfips_df = train_df[["cfips", "microbusiness_density"]]
cfips = cfips_df["cfips"].unique()
final = pd.DataFrame()
final
for i in cfips:
k = cfips_df.loc[cfips_df["cfips"] == i]
# print(k)
df_lags = pd.DataFrame(k)
for inc in range(1, 4):
field_name = "lag_" + str(inc)
df_lags[field_name] = df_lags["microbusiness_density"].shift(inc)
# drop null values
df_lags = df_lags.dropna().reset_index(drop=True)
    # DataFrame.append was removed in pandas 2.0; pd.concat is the supported equivalent
    final = pd.concat([final, df_lags], ignore_index=True)
from tensorflow import keras
# X.shape[1:]  # inspection leftover: X is only defined later, inside the per-cfips loop
def nn(X, y):
model = keras.models.Sequential()
model.add(keras.layers.Dense(30, activation="relu", input_shape=X.shape[1:]))
model.add(keras.layers.Dense(1))
model.compile(loss="mse", optimizer="adam")
model.fit(X, y, epochs=100, verbose=0)
x1 = X.iloc[-1][0:2].values
x1 = x1.flatten()
y1 = y.tail(1).values
xy = np.concatenate((y1, x1))
xy = xy.flatten()
x_input = xy
x_input = x_input.reshape(1, -1)
temp_input = x_input
lst_output = []
i = 0
while i < 6:
yhat = model.predict(x_input)
lst_output.append(yhat[0])
        # keep the newest value first so the input stays in [lag_1, lag_2, lag_3] order
        temp_input = np.concatenate((yhat[0], temp_input.flatten()[:2]))
        x_input = temp_input.reshape(1, -1)
i = i + 1
return lst_output
start = timer()
output = []
for i in cfips:
k = final.loc[final["cfips"] == i]
dfs = pd.DataFrame(k)
y = dfs.microbusiness_density
X = dfs[["lag_1", "lag_2", "lag_3"]]
g = nn(X, y)
output.append(g)
del k
end = timer()
print(f"Total time taken to train the per-cfips networks in seconds: {end - start}")
ol = np.array(output)
ol = ol.flatten()
forforecastingop = pd.DataFrame(ol, columns=["microbusiness_density"])
# test = revealed_df['microbusiness_density']
# def smape(a, f):
# return 1/len(a) * np.sum(2 * np.abs(f-a) / (np.abs(a) + np.abs(f))*100)
# smape(test,ol)
a = np.split(
forforecastingop["microbusiness_density"].values, len(forforecastingop) / 6
)
b = np.split(
revealed_df["microbusiness_density"].values,
len(revealed_df["microbusiness_density"]) / 2,
)
c = np.concatenate(np.hstack(list(zip(b, a))))
c
forecasting = pd.DataFrame(c, columns=["microbusiness_density"])
forecasting = forecasting.set_index(test_df["row_id"])
forecasting.shape
forecasting.to_csv("submission.csv")
|
/fsx/loubna/kaggle_data/kaggle-code-data/data/0129/609/129609349.ipynb
| null | null |
[{"Id": 129609349, "ScriptId": 36136580, "ParentScriptVersionId": NaN, "ScriptLanguageId": 9, "AuthorUserId": 5544268, "CreationDate": "05/15/2023 08:02:17", "VersionNumber": 1.0, "Title": "godaddy_nn", "EvaluationDate": "05/15/2023", "IsChange": true, "TotalLines": 127.0, "LinesInsertedFromPrevious": 127.0, "LinesChangedFromPrevious": 0.0, "LinesUnchangedFromPrevious": 0.0, "LinesInsertedFromFork": NaN, "LinesDeletedFromFork": NaN, "LinesChangedFromFork": NaN, "LinesUnchangedFromFork": NaN, "TotalVotes": 4}]
| null | null | null | null |
| false | 0 | 1,194 | 4 | 1,194 | 1,194 |
||
129609423
|
# ## Imports
import os
import gc
import glob
import json
import multiprocessing as mp
import warnings
import albumentations as A
import matplotlib.pyplot as plt
import matplotlib.patches as patches
import PIL.Image as Image
import numpy as np
import pandas as pd
import random
import torch
import torch.nn as nn
import torch.optim as optim
import torch.utils.data as thd
import segmentation_models_pytorch as smp
from torchvision import transforms
from collections import defaultdict
from types import SimpleNamespace
from typing import Dict, List, Optional, Tuple
from pathlib import Path
from sklearn.metrics import fbeta_score
from sklearn.exceptions import UndefinedMetricWarning
from albumentations.pytorch import ToTensorV2
from segmentation_models_pytorch.encoders import get_preprocessing_fn
from tqdm.auto import tqdm
warnings.simplefilter("ignore")
# ## Config
class CFG:
# ============== comp exp name =============
comp_name = "vesuvius"
comp_dir_path = "/kaggle/input"
comp_folder_name = "vesuvius-challenge-ink-detection"
comp_dataset_path = os.path.join(comp_dir_path, comp_folder_name)
exp_name = "vesuvius_2d_slide_unet_exp001"
# ============== pred target =============
target_size = 1
# ============== model cfg =============
model_name = "Unet"
backbone = "efficientnet-b0"
# backbone = 'se_resnext50_32x4d'
in_chans = 6 # 65
# ============== data preprocessing =============
preprocess_input = get_preprocessing_fn(backbone, pretrained="imagenet")
# ============== training cfg =============
size = 224
tile_size = 224
stride = tile_size // 2
train_batch_size = 32 # 32
valid_batch_size = train_batch_size
use_amp = True
scheduler = "GradualWarmupSchedulerV2"
# scheduler = 'CosineAnnealingLR'
epochs = 15 # 30
    # AdamW with warmup
warmup_factor = 10
# lr = 1e-3 / warmup_factor
lr = 1e-3
# ============== fold =============
valid_id = 1
# objective_cv = 'binary' # 'binary', 'multiclass', 'regression'
metric_direction = "maximize" # maximize, 'minimize'
# metrics = 'dice_coef'
# ============== fixed =============
pretrained = True
inf_weight = "best" # 'best'
min_lr = 1e-6
weight_decay = 1e-6
max_grad_norm = 1000
print_freq = 50
num_workers = 4
seed = 42
# ============== set dataset path =============
outputs_path = f"/kaggle/working/outputs/{comp_name}/{exp_name}/"
submission_dir = outputs_path + "submissions/"
submission_path = submission_dir + f"submission_{exp_name}.csv"
model_dir = outputs_path + f"{comp_name}-models/"
figures_dir = outputs_path + "figures/"
log_dir = outputs_path + "logs/"
log_path = log_dir + f"{exp_name}.txt"
# ============== augmentation =============
train_aug_list = [
A.Resize(size, size),
A.HorizontalFlip(p=0.5),
A.VerticalFlip(p=0.5),
A.RandomBrightnessContrast(p=0.75),
A.ShiftScaleRotate(p=0.75),
A.OneOf(
[
A.GaussNoise(var_limit=[10, 50]),
A.GaussianBlur(),
A.MotionBlur(),
],
p=0.4,
),
A.GridDistortion(num_steps=5, distort_limit=0.3, p=0.5),
A.CoarseDropout(
max_holes=1,
max_width=int(size * 0.3),
max_height=int(size * 0.3),
mask_fill_value=0,
p=0.5,
),
A.Normalize(mean=[0] * in_chans, std=[1] * in_chans),
ToTensorV2(transpose_mask=True),
]
# A.Compose([
# A.RandomResizedCrop(height=224, width=224, scale=(0.08, 1.0)),
# A.HorizontalFlip(p=0.5),
# A.OneOf([
# A.RandomBrightnessContrast(brightness_limit=0.4, contrast_limit=0.4),
# A.HueSaturationValue(hue_shift_limit=10, sat_shift_limit=20, val_shift_limit=10),
# ], p=0.75),
# A.GaussNoise(var_limit=(10.0, 50.0)),
# A.CoarseDropout(max_holes=8, max_height=32, max_width=32, p=0.5),
# # A.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
# ToTensorV2()
# ])
valid_aug_list = [
A.Resize(height=256, width=256),
A.CenterCrop(height=224, width=224),
# A.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
ToTensorV2(),
]
# ## Set up data
class SubvolumeDataset(thd.Dataset):
    def __init__(self, fragments: List[Path], transform=None, train=True):
        # self.fragments = sorted(map(lambda path: path.resolve(), fragments))
        self.fragments_ids = [i + 1 for i in range(len(fragments))]
        self.transform = transform
        self.train = train
        # slice every fragment into fixed-size tiles with matching ink masks
        images, masks = self.slice_fragment_to_subvolumes(self.fragments_ids)
        print(f"Loaded fragments {self.fragments_ids} on {os.getpid()}")
        self.masks = masks
        self.images = images
    def slice_fragment_to_subvolumes(self, fragment_ids):
        sliced_images = []
        sliced_ink_masks = []
        for fragment_id in fragment_ids:
            image, mask = self.read_image_mask(fragment_id)
            x1_list = list(range(0, image.shape[1] - CFG.tile_size + 1, CFG.stride))
            y1_list = list(range(0, image.shape[0] - CFG.tile_size + 1, CFG.stride))
            for y1 in y1_list:
                for x1 in x1_list:
                    y2 = y1 + CFG.tile_size
                    x2 = x1 + CFG.tile_size
                    sliced_images.append(image[y1:y2, x1:x2])
                    sliced_ink_masks.append(mask[y1:y2, x1:x2, None])
        return sliced_images, sliced_ink_masks
# def slice_fragment_to_subvolumes(self, images, mask):
# sliced_images = []
# if self.train:
# sliced_ink_masks = []
# x1_list = list(range(0, images.shape[1] - CFG.tile_size + 1, CFG.stride))
# y1_list = list(range(0, images.shape[0] - CFG.tile_size + 1, CFG.stride))
# for y1 in y1_list:
# for x1 in x1_list:
# y2 = y1 + CFG.tile_size
# x2 = x1 + CFG.tile_size
# sliced_images.append(images[y1:y2, x1:x2])
# if self.train:
# sliced_ink_masks.append(mask[y1:y2, x1:x2, None])
# if not self.train:
# return sliced_images
# return sliced_images, sliced_ink_masks
def read_image_mask(self, fragment_id):
z_dim = CFG.in_chans
z_mid = 65 // 2 # len(surface_volume_paths) // 2
z_start, z_end = z_mid - z_dim // 2, z_mid + z_dim // 2
indx = range(z_start, z_end)
images = []
for i in tqdm(indx):
image = np.array(
Image.open(
CFG.comp_dataset_path
+ f"/train/{fragment_id}/surface_volume/{i:02}.tif"
),
dtype="float32",
)
pad0 = CFG.tile_size - image.shape[0] % CFG.tile_size
pad1 = CFG.tile_size - image.shape[1] % CFG.tile_size
image = np.pad(image, [(0, pad0), (0, pad1)], constant_values=0)
images.append(image)
images = np.stack(images, axis=2)
mask = Image.open(CFG.comp_dataset_path + f"/train/{fragment_id}/inklabels.png")
mask = np.pad(mask, [(0, pad0), (0, pad1)], constant_values=0)
mask = mask.astype("float32")
mask /= 255.0
print(images.shape)
return images, mask
# surface_volume_paths = sorted (
# (fragment_path / "surface_volume").rglob("*.tif")
# )
# z_dim = CFG.in_chans
# z_mid = len(surface_volume_paths) // 2
# z_start, z_end = z_mid - z_dim // 2, z_mid + z_dim // 2
# # we don't convert to torch since it doesn't support uint16
# images = [
# np.array(Image.open(fn), dtype='float32') for fn in surface_volume_paths[z_start:z_end]
# ]
# pad0 = (CFG.tile_size - images[0].shape[0] % CFG.tile_size)
# pad1 = (CFG.tile_size - images[0].shape[1] % CFG.tile_size)
# images = np.pad(images, ((0,0), (0, pad0), (0, pad1)), mode='constant')
# images = np.stack(np.array(images), axis=0)
# if not self.train:
# return images
# ink_mask = np.array(Image.open(str(fragment_path / "inklabels.png"))
# .convert("1"))
# ink_mask = np.pad(ink_mask, [(0, pad0), (0, pad1)], constant_values=0)
# ink_mask = ink_mask.astype('float32')
# ink_mask /= 255.0
# ink_mask = ink_mask.astype('int32')
# return images, ink_mask
    def __len__(self):
        return len(self.images)

    def __getitem__(self, index):
        images = self.images[index]
        if not self.train:
            if self.transform:
                images = self.transform(image=images)["image"]
            return images, -1  # no ink mask is available at inference time
        labels = self.masks[index]
        if self.transform:
            data = self.transform(image=images, mask=labels)
            images = data["image"]
            labels = data["mask"]
        return images, labels
    def plot_label(self, index, **kwargs):
        # show the ink mask for one tile; the bounding-box overlay from the older
        # pixel-centred dataset (which needed `pixel` and `self.voxel_shape`) is dropped
        label = self.masks[index]
        print("Index:", index)
        if isinstance(label, torch.Tensor):
            label = label.numpy()
        fig, ax = plt.subplots(**kwargs)
        ax.imshow(np.squeeze(label), cmap="gray")
        plt.show()
base_path = Path("/kaggle/input/vesuvius-challenge-ink-detection")
train_path = base_path / "train"
all_fragments = sorted([f.name for f in train_path.iterdir()])
print("All fragments:", all_fragments)
train_fragments = [train_path / fragment_name for fragment_name in all_fragments[:1]]
train_fragments
train_transforms = A.Compose(CFG.train_aug_list)
train_dset = SubvolumeDataset(fragments=train_fragments, transform=train_transforms)
print("Num items (tiles)", len(train_dset))
# train_dset is reused by the sanity check and the DataLoader below, so it is not deleted here
gc.collect()
# #### Sanity check
index = 0
print(f"Sub volume image shape = {train_dset[index][0].shape}")
print(f"Number of sub volumes = {len(train_dset)}")
# train_dset.plot_label(index, figsize=(16, 10))
plot_dataset = SubvolumeDataset(fragments=train_fragments)
transform = CFG.train_aug_list
transform = A.Compose(
[t for t in transform if not isinstance(t, (A.Normalize, ToTensorV2))]
)
plot_count = 0
for i in range(1000):
image, mask = plot_dataset[i]
data = transform(image=image, mask=mask)
aug_image = data["image"]
aug_mask = data["mask"]
if mask.sum() == 0:
continue
fig, axes = plt.subplots(1, 4, figsize=(15, 8))
axes[0].imshow(image[..., 0], cmap="gray")
axes[1].imshow(mask, cmap="gray")
axes[2].imshow(aug_image[..., 0], cmap="gray")
axes[3].imshow(aug_mask, cmap="gray")
plot_count += 1
if plot_count == 5:
break
train_loader = thd.DataLoader(train_dset, batch_size=CFG.train_batch_size, shuffle=True)
print("Num batches:", len(train_loader))
# ### Set up model
DEVICE = torch.device("cuda" if torch.cuda.is_available() else "cpu")
class InkDetector(torch.nn.Module):
def __init__(self, cfg, weight=None):
super().__init__()
self.cfg = cfg
self.model = smp.Unet(
encoder_name=cfg.backbone,
encoder_weights=weight,
in_channels=cfg.in_chans,
classes=cfg.target_size,
activation=None,
)
def forward(self, image):
output = self.model(image)
return output
model = InkDetector(CFG, "imagenet").to(DEVICE)
# ### Train
TRAINING_STEPS = 10
LEARNING_RATE = CFG.lr
TRAIN_RUN = True # To avoid re-running when saving the notebook
if TRAIN_RUN:
criterion = nn.BCEWithLogitsLoss()
optimizer = optim.SGD(model.parameters(), lr=LEARNING_RATE)
scheduler = torch.optim.lr_scheduler.OneCycleLR(
optimizer, max_lr=LEARNING_RATE, total_steps=TRAINING_STEPS
)
model.train()
running_loss = 0.0
running_accuracy = 0.0
running_fbeta = 0.0
denom = 0
pbar = tqdm(enumerate(train_loader), total=TRAINING_STEPS)
for i, (subvolumes, inklabels) in pbar:
if i >= TRAINING_STEPS:
break
optimizer.zero_grad()
outputs = model(subvolumes.to(DEVICE))
loss = criterion(outputs, inklabels.to(DEVICE))
loss.backward()
optimizer.step()
scheduler.step()
pred_ink = outputs.detach().sigmoid().gt(0.4).cpu().int()
        accuracy = (pred_ink == inklabels).sum().float().div(inklabels.numel())
running_fbeta += fbeta_score(
inklabels.view(-1).numpy(), pred_ink.view(-1).numpy(), beta=0.5
)
running_accuracy += accuracy.item()
running_loss += loss.item()
denom += 1
pbar.set_postfix(
{
"Loss": running_loss / denom,
"Accuracy": running_accuracy / denom,
"[email protected]": running_fbeta / denom,
}
)
if (i + 1) % 500 == 0:
running_loss = 0.0
running_accuracy = 0.0
running_fbeta = 0.0
denom = 0
torch.save(model.state_dict(), "/kaggle/working/model.pt")
else:
model_weights = torch.load("/kaggle/working/model.pt")
model.load_state_dict(model_weights)
# ### Evaluate
# Clear memory before loading test fragments
train_dset.masks = None
train_dset.images = []
del train_loader, train_dset
gc.collect()
test_path = base_path / "test"
test_fragments = sorted(test_path.iterdir())  # iterdir() already yields full paths
print("All fragments:", test_fragments)
pred_images = []
model.eval()
for test_fragment in test_fragments:
outputs = []
    # NOTE: SubvolumeDataset.read_image_mask reads from the train/ directory and expects
    # fragment ids, so this block still needs to be adapted for the test fragments.
    eval_dset = SubvolumeDataset(fragments=[test_fragment], train=False)
    eval_loader = thd.DataLoader(eval_dset, batch_size=CFG.valid_batch_size, shuffle=False)
with torch.no_grad():
for i, (subvolumes, _) in enumerate(tqdm(eval_loader)):
output = model(subvolumes.to(DEVICE)).view(-1).sigmoid().cpu().numpy()
outputs.append(output)
# we only load 1 fragment at a time
    image_shape = eval_dset.images[0].shape[:2]  # (H, W) of a single tile
    eval_dset.masks = None
    eval_dset.images = None
del eval_loader
gc.collect()
    pred_image = np.zeros(image_shape, dtype=np.uint8)
    outputs = np.concatenate(outputs)
    # NOTE: this reconstruction was written for an older dataset that exposed an
    # `eval_dset.pixels` list; the tile-based dataset above does not, so the loop
    # below needs to be replaced by stitching the predicted tiles back together
    # using the same tile_size/stride that was used when slicing.
    for (y, x, _), prob in zip(eval_dset.pixels[: outputs.shape[0]], outputs):
        pred_image[y, x] = prob > 0.4
pred_images.append(pred_image)
eval_dset.pixels = None
del eval_dset
gc.collect()
print("Finished", test_fragment)
plt.imshow(pred_images[1], cmap="gray")
# ### Submission
def rle(output):
    # run-length encode a 2D prediction: flatten first, then locate run starts/ends
    flat_img = np.where(output.flatten() > 0.4, 1, 0).astype(np.uint8)
    starts = np.array((flat_img[:-1] == 0) & (flat_img[1:] == 1))
    ends = np.array((flat_img[:-1] == 1) & (flat_img[1:] == 0))
    # +2 converts the shifted 0-based positions into the 1-based indices the submission format expects
    starts_ix = np.where(starts)[0] + 2
    ends_ix = np.where(ends)[0] + 2
    lengths = ends_ix - starts_ix
    return " ".join(map(str, sum(zip(starts_ix, lengths), ())))
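# A quick sanity check of the encoder on a toy mask (a sketch, not part of the submission):
# the flattened mask [0, 1, 1, 0, 1, 1, 0, 0] has runs starting at 1-based positions 2 and 5,
# each of length 2.
print(rle(np.array([[0, 1, 1, 0], [1, 1, 0, 0]])))  # expected: "2 2 5 2"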
submission = defaultdict(list)
for fragment_id, fragment_name in enumerate(test_fragments):
submission["Id"].append(fragment_name.name)
submission["Predicted"].append(rle(pred_images[fragment_id]))
pd.DataFrame.from_dict(submission).to_csv("/kaggle/working/submission.csv", index=False)
pd.DataFrame.from_dict(submission)
|
/fsx/loubna/kaggle_data/kaggle-code-data/data/0129/609/129609423.ipynb
| null | null |
[{"Id": 129609423, "ScriptId": 38514101, "ParentScriptVersionId": NaN, "ScriptLanguageId": 9, "AuthorUserId": 11219911, "CreationDate": "05/15/2023 08:02:56", "VersionNumber": 2.0, "Title": "UNet Segmentataion [training]", "EvaluationDate": "05/15/2023", "IsChange": true, "TotalLines": 537.0, "LinesInsertedFromPrevious": 207.0, "LinesChangedFromPrevious": 0.0, "LinesUnchangedFromPrevious": 330.0, "LinesInsertedFromFork": NaN, "LinesDeletedFromFork": NaN, "LinesChangedFromFork": NaN, "LinesUnchangedFromFork": NaN, "TotalVotes": 0}]
| null | null | null | null |
| false | 0 | 5,060 | 0 | 5,060 | 5,060 |
||
129609066
|
import nltk
from nltk.collocations import TrigramCollocationFinder, BigramCollocationFinder
from nltk.metrics import TrigramAssocMeasures, BigramAssocMeasures
# Tokenize the corpus
# NOTE: the data-loading and preprocessing cells further down must be run first, so that
# `txt` holds lists of lemmatized tokens rather than raw strings.
# tokens = nltk.word_tokenize(txt[0])
tokens = []
for i in txt:
for j in i:
tokens.append(j)
# Create a trigram collocation finder
finder_b = BigramCollocationFinder.from_words(tokens)
finder_t = TrigramCollocationFinder.from_words(tokens)
# Filter out common words and punctuation
finder_b.apply_freq_filter(1)
finder_t.apply_freq_filter(1)
# Set the scoring metric
scoring_measure_b = BigramAssocMeasures.raw_freq
scoring_measure_t = TrigramAssocMeasures.raw_freq
# Get the top 10 trigrams based on the scoring metric
top_trigrams = finder_t.nbest(scoring_measure_t, 10)
top_bigrams = finder_b.nbest(scoring_measure_b, 10)
# Print the top trigrams
print("Top Trigrams:")
for trigram in top_trigrams:
print(trigram)
print("Top Bigrams:")
for trigram in top_bigrams:
print(trigram)
import re
# Preprocessing
def remove_string_special_characters(s):
    # remove special characters (keep alphanumerics and whitespace)
    stripped = re.sub(r"[^0-9a-zA-Z\s]", "", s)
    stripped = re.sub(r"_", "", stripped)
    # Collapse any run of whitespace into one space
    stripped = re.sub(r"\s+", " ", stripped)
    # Remove leading and trailing whitespace
    stripped = stripped.strip()
    # always return a string so the tokenizer never receives None
    return stripped.lower()
from nltk.stem import WordNetLemmatizer
from nltk.corpus import stopwords
from nltk import word_tokenize
lemmatizer = WordNetLemmatizer()
# Stopword removal
stop_words = set(stopwords.words("english"))
for i, line in enumerate(txt):
line = remove_string_special_characters(line)
# txt[i] = [x for x in line if ( x not in stop_words )]
txt[i] = [
lemmatizer.lemmatize(x) for x in word_tokenize(line) if (x not in stop_words)
]
import pandas as pd
data_may1 = pd.read_json("/kaggle/input/nctc-may-1-7/NCTC_may_1.json")
txt = list(data_may1["Text"])
|
/fsx/loubna/kaggle_data/kaggle-code-data/data/0129/609/129609066.ipynb
| null | null |
[{"Id": 129609066, "ScriptId": 38536156, "ParentScriptVersionId": NaN, "ScriptLanguageId": 9, "AuthorUserId": 14218315, "CreationDate": "05/15/2023 08:00:04", "VersionNumber": 1.0, "Title": "ngram_workspace", "EvaluationDate": "05/15/2023", "IsChange": true, "TotalLines": 72.0, "LinesInsertedFromPrevious": 72.0, "LinesChangedFromPrevious": 0.0, "LinesUnchangedFromPrevious": 0.0, "LinesInsertedFromFork": NaN, "LinesDeletedFromFork": NaN, "LinesChangedFromFork": NaN, "LinesUnchangedFromFork": NaN, "TotalVotes": 0}]
| null | null | null | null |
| false | 0 | 625 | 0 | 625 | 625 |
||
129690177
|
<jupyter_start><jupyter_text>Pakistan's Largest E-Commerce Dataset
### Context
This is the largest retail e-commerce orders dataset from Pakistan. It contains half a million transaction records from March 2016 to August 2018. The data was collected from various e-commerce merchants as part of a research study. I am releasing this dataset as a capstone project for my data science course at Alnafi (alnafi.com/zusmani).
There is a dire need for such a dataset to learn about Pakistan’s emerging e-commerce potential, and I hope this will help many startups in many ways.
### Content
Geography: Pakistan
Time period: 03/2016 – 08/2018
Unit of analysis: E-Commerce Orders
Dataset: The dataset contains detailed information of half a million e-commerce orders in Pakistan from March 2016 to August 2018. It contains item details, shipping method, payment method like credit card, Easy-Paisa, Jazz-Cash, cash-on-delivery, product categories like fashion, mobile, electronics, appliance etc., date of order, SKU, price, quantity, total and customer ID. This is the most detailed dataset about e-commerce in Pakistan that you can find in the Public domain.
Variables: The dataset contains Item ID, Order Status (Completed, Cancelled, Refund), Date of Order, SKU, Price, Quantity, Grand Total, Category, Payment Method and Customer ID.
Size: 101 MB
File Type: CSV
Kaggle dataset identifier: pakistans-largest-ecommerce-dataset
<jupyter_code>import pandas as pd
df = pd.read_csv('pakistans-largest-ecommerce-dataset/Pakistan Largest Ecommerce Dataset.csv')
df.info()
<jupyter_output><class 'pandas.core.frame.DataFrame'>
RangeIndex: 1048575 entries, 0 to 1048574
Data columns (total 26 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 item_id 584524 non-null float64
1 status 584509 non-null object
2 created_at 584524 non-null object
3 sku 584504 non-null object
4 price 584524 non-null float64
5 qty_ordered 584524 non-null float64
6 grand_total 584524 non-null float64
7 increment_id 584524 non-null object
8 category_name_1 584360 non-null object
9 sales_commission_code 447349 non-null object
10 discount_amount 584524 non-null float64
11 payment_method 584524 non-null object
12 Working Date 584524 non-null object
13 BI Status 584524 non-null object
14 MV 584524 non-null object
15 Year 584524 non-null float64
16 Month 584524 non-null float64
17 Customer Since 584513 non-null object
18 M-Y 584524 non-null object
19 FY 584524 non-null object
20 Customer ID 584513 non-null float64
21 Unnamed: 21 0 non-null float64
22 Unnamed: 22 0 non-null float64
23 Unnamed: 23 0 non-null float64
24 Unnamed: 24 0 non-null float64
25 Unnamed: 25 0 non-null float64
dtypes: float64(13), object(13)
memory usage: 208.0+ MB
<jupyter_text>Examples:
{
"item_id": 211131,
"status": "complete",
"created_at": "2016-07-01 00:00:00",
"sku": "kreations_YI 06-L",
"price": 1950,
"qty_ordered": 1,
"grand_total": 1950,
"increment_id": 100147443,
"category_name_1": "Women's Fashion",
"sales_commission_code": "\\N",
"discount_amount": 0,
"payment_method": "cod",
"Working Date": "7/1/2016",
"BI Status": "#REF!",
" MV ": " 1,950 ",
"Year": 2016,
"Month": 7,
"Customer Since": "2016-7",
"M-Y": "7-2016",
"FY": "FY17",
"...": "and 6 more columns"
}
{
"item_id": 211133,
"status": "canceled",
"created_at": "2016-07-01 00:00:00",
"sku": "kcc_Buy 2 Frey Air Freshener & Get 1 Kasual Body Spray Free",
"price": 240,
"qty_ordered": 1,
"grand_total": 240,
"increment_id": 100147444,
"category_name_1": "Beauty & Grooming",
"sales_commission_code": "\\N",
"discount_amount": 0,
"payment_method": "cod",
"Working Date": "7/1/2016",
"BI Status": "Gross",
" MV ": " 240 ",
"Year": 2016,
"Month": 7,
"Customer Since": "2016-7",
"M-Y": "7-2016",
"FY": "FY17",
"...": "and 6 more columns"
}
{
"item_id": 211134,
"status": "canceled",
"created_at": "2016-07-01 00:00:00",
"sku": "Ego_UP0017-999-MR0",
"price": 2450,
"qty_ordered": 1,
"grand_total": 2450,
"increment_id": 100147445,
"category_name_1": "Women's Fashion",
"sales_commission_code": "\\N",
"discount_amount": 0,
"payment_method": "cod",
"Working Date": "7/1/2016",
"BI Status": "Gross",
" MV ": " 2,450 ",
"Year": 2016,
"Month": 7,
"Customer Since": "2016-7",
"M-Y": "7-2016",
"FY": "FY17",
"...": "and 6 more columns"
}
{
"item_id": 211135,
"status": "complete",
"created_at": "2016-07-01 00:00:00",
"sku": "kcc_krone deal",
"price": 360,
"qty_ordered": 1,
"grand_total": 60,
"increment_id": 100147446,
"category_name_1": "Beauty & Grooming",
"sales_commission_code": "R-FSD-52352",
"discount_amount": 300,
"payment_method": "cod",
"Working Date": "7/1/2016",
"BI Status": "Net",
" MV ": " 360 ",
"Year": 2016,
"Month": 7,
"Customer Since": "2016-7",
"M-Y": "7-2016",
"FY": "FY17",
"...": "and 6 more columns"
}
<jupyter_script># ## **"Unleashing the Power of Exploratory Data Analysis in Pakistan's Ecommerce Industry: A 6-Step Guide to Unlock Insights and Drive Data-Driven Decision Making"**
# The data was gathered between March 2016 and August 2018.
# **Written by: Faisal Mehmood**\
# **Date: 04-05-2023**\
# **Email:** [email protected]
# [](https://www.facebook.com/FMGillani01)
# [](https://www.instagram.com/fmgillani/)
# [](https://www.linkedin.com/in/faisalmehmood1122/)
# [](https://twitter.com/FMGillani)
# [](https://github.com/faisalmehmood2013)
# [](https://medium.com/@shahfaisal1122)
# [](https://www.kaggle.com/faisalmehmood2022)
# [](https://www.youtube.com/channel/UCcAduSDM92_Jk05ZZXnQhfQ)
# [](https://www.tiktok.com/@faisalgillani1070)
# [](https://www.quora.com/profile/Faisal-Gillani-6)
# # **Step 1: Import the Libraries**
# Basic libraries
import pandas as pd
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
import plotly.express as px  # plotly.express is the module usually aliased as px
# Statistics library
from scipy import stats
import warnings
warnings.filterwarnings("ignore")
# Other libraries
import missingno as msno
# To display all the columns, we can use the pandas function pd.set_option()
pd.set_option("display.max_columns", None)
# # **Step 2: Load the Data**
df = pd.read_csv(
"/kaggle/input/pakistans-largest-ecommerce-dataset/Pakistan Largest Ecommerce Dataset.csv",
low_memory=False,
)
# # **Step 3: Explore the Data**
# ## **Understanding the Data**
df.head()
df.tail()
df.info()
df.shape
df.columns
df.dtypes
# # **Observation 1:**
# * The dataset consists of **1048575 rows** and **26 columns**.
# * There are **5 columns** and **464051 rows** that contain no data and should be removed, leaving **21 columns** and **584524 rows** (the quick check below confirms this).
# * Out of the 26 columns, **13 columns contain numerical data** and **13 columns contain categorical data**.
# * There are some columns with incorrect data types that need to be corrected.
# * **The 26 columns are:** item_id, status, created_at, sku, price, qty_ordered, grand_total, increment_id, category_name_1, sales_commission_code, discount_amount, payment_method, Working Date, BI Status, MV, Year, Month, Customer Since, M-Y, FY, Customer ID, Unnamed: 21, Unnamed: 22, Unnamed: 23, Unnamed: 24, Unnamed: 25
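# A minimal check (a sketch; it assumes the `df` loaded above) to confirm which columns
# are entirely empty and how many rows contain no data at all:
empty_cols = df.columns[df.isnull().all()]
print("Fully empty columns:", list(empty_cols))
print("Fully empty rows:", df.isnull().all(axis=1).sum())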
# # **Descriptive Statistics**
# **To describe numerical data in a dataset**
with pd.option_context("float_format", "{:.2f}".format):
display(df.describe())
# **To describe categorical data in a dataset**
df.describe(include="object")
# # **Observation 2:**
# * The **maximum price in the dataset is 1012625.90** and the **highest quantity ordered was 1000**.
# * Out of 584509 **orders, 233685 were completed** successfully.
# * The majority of items were purchased in the **Mobile and Tablets category**.
# * The maximum orders were delivered using **Cash on Delivery (COD)** as the payment method.
# * The highest number of purchases occurred on **25-11-2016** (see the quick check below).
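# A quick way to verify the busiest order date (a sketch; `created_at` is still a raw
# string column at this point, so the dates are compared as text):
print(df["created_at"].value_counts().head(3))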
# # **Step 4: Identifying and Handling Missing Values**
df.drop(
columns=["Unnamed: 25", "Unnamed: 24", "Unnamed: 23", "Unnamed: 22", "Unnamed: 21"],
inplace=True,
)
df.dropna(inplace=True, how="all")
# **Let's see the missing values in the data**
df.isnull().sum().sort_values(ascending=False)
# **Let's plot the missing values**
msno.matrix(df, sparkline=False)
plt.rcParams["figure.figsize"] = (20, 6)
sns.heatmap(df.isnull(), yticklabels=False, cbar=False, cmap="viridis")
plt.title("Missing Null Values")
# **Let's see the percentage of missing values in the data**
missing_percentage = df.isnull().sum().sort_values(ascending=False) / len(df) * 100
missing_percentage
# **Let's plot the percentage of missing values**
missing_percentage = missing_percentage[missing_percentage != 0]
plt.rcParams["figure.figsize"] = (20, 6)
missing_percentage.plot(kind="bar", align="center")
plt.title("Missing Percentage of Null Values")
# ## Duplicated Values
# **Checking for duplicates in the item_id column**
duplicated = df["item_id"].duplicated().any()
duplicated
# # **Observation 3:**
# * The column with the highest percentage of null values is **sales_commission_code**. This column may not be very helpful, so we can consider dropping it from the dataset.
# * For columns with a **small number of null values** such as **SKU, Customer ID, and Customer Since**, we can simply drop the corresponding rows.
# * Since **category_name_1 and status are important columns**, it's better to fill in the null values using appropriate methods such as imputation or interpolation.
# * **No duplicate values** found in the dataset.
# **Dropping and Filling the Null Values**
df.drop(columns=["sales_commission_code"], inplace=True)
df.dropna(subset=["sku", "Customer ID", "Customer Since"], inplace=True)
df["status"].fillna(df["status"].mode()[0], inplace=True)
df["category_name_1"].fillna(df["category_name_1"].mode()[0], inplace=True)
df.isnull().sum()
# # **Step 5: Understanding the Variables:**
# **As we can see from the above, some columns in the dataset are not in the correct data type. Therefore, we need to perform casting to correct the data types of these columns.**
# Convert the datatypes to string and int
df["Customer ID"] = df["Customer ID"].astype(str)
df["item_id"] = df["item_id"].astype(str)
df["qty_ordered"] = df["qty_ordered"].astype(int)
df["Year"] = df["Year"].astype(int)
df["Month"] = df["Month"].astype(int)
# Convert the datatype to datetime
df["created_at"] = pd.to_datetime(df["created_at"])
# Rename columns
df.rename(
columns={
"category_name_1": "category_name",
"created_at": "order_date",
"Customer ID": "customer_id",
"Customer Since": "customer_since",
"Year": "year",
"Month": "month",
},
inplace=True,
)
# Remove the negative signs from discount_amount (take absolute values instead of
# replacing individual negative entries one by one)
df["discount_amount"] = df["discount_amount"].abs()
df.info()
# **Let's explore the numerical columns**
for col in df.describe().columns:
print(f"Column Name: ", col)
print(f"Maximum Value: ", df[col].max())
print(f"Unique Values:\n", df[col].unique())
print(f"Unique Values Counts:\n", df[col].value_counts())
print("-" * 100)
# **Let's explore the categorical columns**
for col in df.describe(include="object").columns:
print(f"Column Name: ", col)
print(f"Number of frequencies: ", df[col].mode()[0])
print(f"Unique Values:\n", df[col].unique())
print(f"Unique Values Counts:\n", df[col].value_counts())
print("-" * 100)
# # **Observation 4:**
# * The majority of products in the dataset fall within the **price range of 1 to 1000 rupees**.
# * Most people ordered a **single quantity**.
# * Most purchases were made **in November, May, March, and August**.
# * **Order statuses are:** complete, canceled, order_refunded, received, refund, closed, fraud, holded, exchange, pending_paypal, paid, N, cod, pending, processing, payment_review
# * **Category names are:** Mobiles & Tablets, Men's Fashion, Women's Fashion, Appliances, Superstore, Beauty & Grooming, Soghaat, Others, Home & Living, Entertainment, Health & Sports, Kids & Baby, Computing, N, School & Education, Books
# * **Payment methods are:** cod, Payaxis, Easypay, jazzwallet, easypay_voucher, bankalfalah, jazzvoucher, Easypay_MA, customercredit, apg, ublcreditcard, cashatdoorstep, mcblite, mygateway, internetbanking, productcredit, marketingexpense, financesettlement
# # **Step 6: Data Preprocessing**
# **Let's see the status columns in the dataset**
# * In this dataset, the "complete," "received," "closed," "COD," and "paid" statuses indicate completed orders.
# * In this dataset, the "canceled," "fraud," "holded," and "pending_paypal," statuses indicate cancelled orders.
# * In this dataset, the statuses "order_refunded," "refund," "exchange," "N," "pending," "processing," and "payment_review" represent Refund order statuses.
df["order_status"] = "Refund"
df.loc[
(df["status"] == "complete")
| (df["status"] == "received")
| (df["status"] == "closed")
| (df["status"] == "cod")
| (df["status"] == "paid"),
"order_status",
] = "Completed"
df.loc[
(df["status"] == "canceled")
| (df["status"] == "fraud")
| (df["status"] == "holded")
| (df["status"] == "pending_paypal"),
"order_status",
] = "Cancelled"
df["order_status"].value_counts()
# **Let's see the price column in the dataset**
df["price_range"] = "No price mention"
df.loc[
(df["price"] > 0) & (df["price"] <= 1000), "price_range"
] = "Less than 1000 Rupees"
df.loc[
(df["price"] > 1000) & (df["price"] <= 5000), "price_range"
] = "Between 1001 to 5000 Rupees"
df.loc[
(df["price"] > 5000) & (df["price"] <= 10000), "price_range"
] = "Between 5001 to 10000 Rupees"
df.loc[
(df["price"] > 10000) & (df["price"] <= 100000), "price_range"
] = "Between 10001 to 100k Rupees"
df.loc[(df["price"] > 100000), "price_range"] = "More than 100k"
df["price_range"].value_counts(normalize=True) * 100
# * There are 2215 missing values in the price column, which are represented as zeros.
# * The most common price range is "Less than 1000 Rupees", i.e., the majority of products are priced at 1000 Rupees or below.
# **Let's see the grand_total column in the dataset**
# * The grand_total column is calculated by multiplying the number of quantities ordered by the price and then subtracting the discount amount. However, in this dataset, it appears that the grand_total values were not calculated using this formula.
df["before_discount_total_amount"] = (df["qty_ordered"] * df["price"]).astype(float)
df["after_discount_total_amount"] = (
(df["qty_ordered"] * df["price"]) - df["discount_amount"]
).astype(float)
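# **A quick check of the claim above — a sketch: count how many rows have a grand_total that differs from qty_ordered * price - discount_amount.**
mismatch = (df["grand_total"] - df["after_discount_total_amount"]).abs() > 0.01
print("Rows where grand_total differs from the formula:", mismatch.sum())
print("Share of mismatching rows: {:.1f}%".format(mismatch.mean() * 100))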
# **Create a new dataset after performing preprocessing steps.**
new_df = df[
[
"customer_id",
"order_date",
"year",
"month",
"category_name",
"qty_ordered",
"price",
"before_discount_total_amount",
"discount_amount",
"after_discount_total_amount",
"order_status",
"payment_method",
"price_range",
]
]
new_df.head(5)
# # **Questions and Answers:**
# * What is the best-selling category?
# * Visualize payment method and order status frequency
# **Q: What is the best-selling category?**
# **Let's explore the top 10 best-selling categories in the dataset.**
top_category = (
new_df["category_name"]
.value_counts()
.reset_index()
.rename(columns={"category_name": "count", "index": "category_name"})
)
top_10_category = top_category.head(10)
top_10_category
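# **Note: the reset_index()/rename pattern above relies on the column names produced by older pandas versions; a version-agnostic sketch of the same table (`top_10_category_alt` is an illustrative name):**
top_10_category_alt = (
    new_df["category_name"]
    .value_counts()
    .head(10)
    .rename_axis("category_name")
    .reset_index(name="count")
)
top_10_category_alt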
# **Let's create a bar plot of the top 10 categories in the dataset.**
plt.figure(figsize=(14, 7))
plt.xticks(rotation=60)
plt.xlabel("Category")
plt.ylabel("Category Count")
plt.title("Top 10 Categories")
sns.barplot(x=top_10_category["category_name"], y=top_10_category["count"])
# **Let's explore the percentage of best-selling categories in the dataset.**
top_category_percentage = (
(new_df["category_name"].value_counts(normalize=True) * 100)
.reset_index()
.rename(columns={"category_name": "count", "index": "category_name"})
)
top_category_percentage
# **Let's create a bar plot to visualize the percentage of best-selling categories in the dataset.**
plt.figure(figsize=(14, 7))
plt.xticks(rotation=60)
plt.xlabel("Category")
plt.ylabel("Category Count")
plt.title("Percentage of Best-Selling Categories")
sns.barplot(
x=top_category_percentage["category_name"], y=top_category_percentage["count"]
)
# **Let's examine the top 10 categories in terms of value before the discount amount.**
top_10_categories_by_value = (
new_df.groupby(["category_name"])
.sum()[["before_discount_total_amount"]]
.sort_values(by="before_discount_total_amount", ascending=False)
.head(10)
)
with pd.option_context("float_format", "{:.2f}".format):
display(top_10_categories_by_value)
# **Let's create a bar plot to visualize the best-selling categories by their value in the dataset.**
plt.figure(figsize=(14, 7))
plt.xticks(rotation=60)
plt.title("Top 10 Best-Selling Categories by their Value")
sns.barplot(
x=top_10_categories_by_value.index,
y=top_10_categories_by_value["before_discount_total_amount"],
)
# **Let's analyze the distribution of Cancelled, Completed, and Refund orders across different categories in the dataset.**
category_order_status = pd.crosstab(
new_df["category_name"], new_df["order_status"]
).sort_values(by="Completed", ascending=False)
category_order_status
# **Let's create a plot to visualize the distribution of Cancelled, Completed, and Refund orders across different categories in the dataset.**
category_order_status.plot(kind="bar", figsize=(14, 7))
plt.xlabel("Category")
plt.ylabel("Count")
plt.title("Best Selling Category with order status")
# **Let's analyze the percentage distribution of Cancelled, Completed, and Refund orders across different categories in the dataset.**
category_order_status_percentage = (
pd.crosstab(new_df["category_name"], new_df["order_status"])
.apply(lambda x: round(x / x.sum() * 100, 1), axis=1)
.sort_values(by="Completed", ascending=False)
)
category_order_status_percentage
# **Let's plot the percentage distribution of Cancelled, Completed, and Refund orders across different categories in the dataset.**
category_order_status_percentage.plot(kind="bar", figsize=(14, 7))
plt.xlabel("Category")
plt.ylabel("Percentage")
plt.title("Percentage of Best Selling Category with order status")
# **Let's examine the percentage of completed orders based on the price range in the dataset.**
completed_price_range = new_df.loc[
(
(new_df["price_range"] == "Less than 1000 Rupees")
| (new_df["price_range"] == "Between 1001 to 5000 Rupees")
| (new_df["price_range"] == "Between 10001 to 100k Rupees")
| (new_df["price_range"] == "Between 5001 to 10000 Rupees")
| (new_df["price_range"] == "No price mention")
| (new_df["price_range"] == "More than 100k")
)
& (new_df["order_status"] == "Completed")
]
(
completed_price_range["price_range"].value_counts()
/ new_df["price_range"].value_counts()
* 100
).sort_values(ascending=False).plot.bar(
figsize=(14, 7),
color="g",
title="Percentage of Completed orders based on Price Range",
)
# completed_price_range.describe(include="object")
# completed_price_range.describe()
# **Let's examine the percentage of cancelled orders in the price range columns in the dataset.**
cancelled_price_range = new_df.loc[
(
(new_df["price_range"] == "Less than 1000 Rupees")
| (new_df["price_range"] == "Between 1001 to 5000 Rupees")
| (new_df["price_range"] == "Between 10001 to 100k Rupees")
| (new_df["price_range"] == "Between 5001 to 10000 Rupees")
| (new_df["price_range"] == "No price mention")
| (new_df["price_range"] == "More than 100k")
)
& (new_df["order_status"] == "Cancelled")
]
(
cancelled_price_range["price_range"].value_counts()
/ new_df["price_range"].value_counts()
* 100
).sort_values(ascending=False).plot.bar(
figsize=(14, 7),
color="r",
title="Percentage of Cancelled orders based on Price Range",
)
# cancelled_price_range.describe(include="object")
# cancelled_price_range.describe()
# **Let's examine the percentage of refunded orders in the price range columns in the dataset.**
refund_price_range = new_df.loc[
(
(new_df["price_range"] == "Less than 1000 Rupees")
| (new_df["price_range"] == "Between 1001 to 5000 Rupees")
| (new_df["price_range"] == "Between 10001 to 100k Rupees")
| (new_df["price_range"] == "Between 5001 to 10000 Rupees")
| (new_df["price_range"] == "No price mention")
| (new_df["price_range"] == "More than 100k")
)
& (new_df["order_status"] == "Refund")
]
(
refund_price_range["price_range"].value_counts()
/ new_df["price_range"].value_counts()
* 100
).sort_values(ascending=False).plot.bar(
figsize=(14, 7),
color="black",
title="Percentage of Refunded orders based on Price Range",
)
# refund_price_range.describe(include="object")
# refund_price_range.describe()
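# **The three blocks above repeat the same filtering for each order status; a small helper — a sketch, with an illustrative name — computes the same percentage breakdown for any status.**
def status_share_by_price_range(data, status):
    # Percentage of orders with the given status within each price_range bucket
    counts = data.loc[data["order_status"] == status, "price_range"].value_counts()
    return (counts / data["price_range"].value_counts() * 100).sort_values(ascending=False)
status_share_by_price_range(new_df, "Completed")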
# **Let's find the date with the highest number of orders.**
new_df["order_date"].mode()
# **Let's create a bar plot showing the top 10 dates with the highest number of orders.**
new_df["order_date"].value_counts().head(10).plot.bar(
figsize=(14, 7), color="g", title="Top 10 Dates with the Highest number of Orders"
)
# **Let's find the month with the highest number of orders.**
new_df["month"].mode()
# **Let's determine the months with the highest number of orders.**
new_df["month"].value_counts().plot.bar(
figsize=(14, 7),
color="g",
title="Determine the Months with the Highest number of Orders",
)
# **Let's examine the price range of all categories in the dataset.**
price_range = (
new_df.groupby(["category_name", "price_range"])["order_status"]
.agg(["count"])
.reset_index()
.sort_values(by=["count", "price_range"], ascending=False)
)
price_range.head()
# **Let's create a plot to visualize the price range of all categories in the dataset.**
plt.figure(figsize=(14, 7))
plt.xticks(rotation=60)
plt.title("Categories with Price Range")
sns.barplot(x="category_name", y="count", data=price_range, hue="price_range")
# **Let's analyze the price range across different order statuses in the dataset.**
price_range_order_status = (
new_df.groupby(["order_status", "price_range"])["order_status"]
.agg(["count"])
.reset_index()
.sort_values(by=["count", "price_range"], ascending=False)
)
price_range_order_status.head()
# **Let's create a plot to visualize the price range across different order statuses in the dataset.**
plt.figure(figsize=(14, 7))
plt.xticks(rotation=60)
plt.title("Order Status with Price Range")
sns.barplot(
x="price_range", y="count", data=price_range_order_status, hue="order_status"
)
# # **Observation Q1:** What is the Best Selling Category?
# * The **Mobiles & Tablets category has a high number of orders, but also many cancelled orders**. In contrast, the **Men's Fashion category has a high number of completed orders, especially in the price range below 1000 Rupees, making it the best-selling category** (a supporting calculation is sketched after this list).
# * Across all categories, most **completed orders fall within price ranges below 5000 Rupees**.
# * The number of cancelled orders in all categories with a price range above 10000 is higher, possibly **because Mobiles & Tablets, which have a price range above 10000, contribute significantly to the overall cancelled orders.**
# * Most of the orders are placed in **November due to the sales/discount offered to customers.**
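# **Supporting numbers for the first point above — a sketch that ranks categories by completed-order count and completion rate.**
completed_by_category = (
    new_df.loc[new_df["order_status"] == "Completed", "category_name"]
    .value_counts()
    .rename("completed_orders")
)
completion_rate = (
    pd.crosstab(new_df["category_name"], new_df["order_status"], normalize="index")["Completed"] * 100
).rename("completed_pct")
pd.concat([completed_by_category, completion_rate], axis=1).sort_values(
    by="completed_orders", ascending=False
).head(10)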
# **Q: Visualize payment method and order status frequency**
# **Let's analyze the payment method across different order statuses in the dataset.**
payment_method = (
new_df.groupby(["payment_method", "order_status"])["order_status"]
.agg(["count"])
.reset_index()
.sort_values(by=["count", "order_status"], ascending=False)
)
payment_method.head()
# **Let's create a plot of payment method across different order statuses in the dataset.**
plt.figure(figsize=(14, 7))
plt.xticks(rotation=60)
plt.title("Payment Methods with Order Status")
sns.barplot(x="payment_method", y="count", data=payment_method, hue="order_status")
|
/fsx/loubna/kaggle_data/kaggle-code-data/data/0129/690/129690177.ipynb
|
pakistans-largest-ecommerce-dataset
|
zusmani
|
[{"Id": 129690177, "ScriptId": 38111863, "ParentScriptVersionId": NaN, "ScriptLanguageId": 9, "AuthorUserId": 10579837, "CreationDate": "05/15/2023 19:17:00", "VersionNumber": 1.0, "Title": "Unleash Insights with a 6-Step Data-Driven Guide", "EvaluationDate": "05/15/2023", "IsChange": true, "TotalLines": 419.0, "LinesInsertedFromPrevious": 419.0, "LinesChangedFromPrevious": 0.0, "LinesUnchangedFromPrevious": 0.0, "LinesInsertedFromFork": NaN, "LinesDeletedFromFork": NaN, "LinesChangedFromFork": NaN, "LinesUnchangedFromFork": NaN, "TotalVotes": 4}]
|
[{"Id": 186015454, "KernelVersionId": 129690177, "SourceDatasetVersionId": 1859332}]
|
[{"Id": 1859332, "DatasetId": 1106196, "DatasourceVersionId": 1897227, "CreatorUserId": 590653, "LicenseName": "Data files \u00a9 Original Authors", "CreationDate": "01/19/2021 11:42:57", "VersionNumber": 2.0, "Title": "Pakistan's Largest E-Commerce Dataset", "Slug": "pakistans-largest-ecommerce-dataset", "Subtitle": "Half a Million Online Orders", "Description": "### Context\n\nThis is the largest retail e-commerce orders dataset from Pakistan. It contains half a million transaction records from March 2016 to August 2018. The data was collected from various e-commerce merchants as part of a research study. I am releasing this dataset as a capstone project for my data science course at Alnafi (alnafi.com/zusmani). \nThere is a dire need for such dataset to learn about Pakistan\u2019s emerging e-commerce potential and I hope this will help many startups in many ways.\n\n### Content\n\nGeography: Pakistan\n\nTime period: 03/2016 \u2013 08/2018\n\nUnit of analysis: E-Commerce Orders\n\nDataset: The dataset contains detailed information of half a million e-commerce orders in Pakistan from March 2016 to August 2018. It contains item details, shipping method, payment method like credit card, Easy-Paisa, Jazz-Cash, cash-on-delivery, product categories like fashion, mobile, electronics, appliance etc., date of order, SKU, price, quantity, total and customer ID. This is the most detailed dataset about e-commerce in Pakistan that you can find in the Public domain.\n\nVariables: The dataset contains Item ID, Order Status (Completed, Cancelled, Refund), Date of Order, SKU, Price, Quantity, Grand Total, Category, Payment Method and Customer ID. \n\nSize: 101 MB\n\nFile Type: CSV\n\n### Acknowledgements\n\nI like to thank all the startups who are trying to make their mark in Pakistan despite the unavailability of research data.\n\n### Inspiration\n\nI\u2019d like to call the attention of my fellow Kagglers to use Machine Learning and Data Sciences to help me explore these ideas:\n\n\u2022 What is the best-selling category?\n\u2022 Visualize payment method and order status frequency\n\u2022 Find a correlation between payment method and order status\n\u2022 Find a correlation between order date and item category\n\u2022 Find any hidden patterns that are counter-intuitive for a layman\n\u2022 Can we predict number of orders, or item category or number of customers/amount in advance?", "VersionNotes": "CSV", "TotalCompressedBytes": 0.0, "TotalUncompressedBytes": 0.0}]
|
[{"Id": 1106196, "CreatorUserId": 590653, "OwnerUserId": 590653.0, "OwnerOrganizationId": NaN, "CurrentDatasetVersionId": 1859332.0, "CurrentDatasourceVersionId": 1897227.0, "ForumId": 1123457, "Type": 2, "CreationDate": "01/19/2021 10:46:13", "LastActivityDate": "01/19/2021", "TotalViews": 68973, "TotalDownloads": 7237, "TotalVotes": 271, "TotalKernels": 66}]
|
[{"Id": 590653, "UserName": "zusmani", "DisplayName": "Zeeshan-ul-hassan Usmani", "RegisterDate": "04/19/2016", "PerformanceTier": 4}]
|
# ## **"Unleashing the Power of Exploratory Data Analysis in Pakistan's Ecommerce Industry: A 6-Step Guide to Unlock Insights and Drive Data-Driven Decision Making"**
# The data was gathered from March 2016 to August 2018.
# **Written by: Faisal Mehmood**\
# **Date: 04-05-2023**\
# **Email:** [email protected]
# [](https://www.facebook.com/FMGillani01)
# [](https://www.instagram.com/fmgillani/)
# [](https://www.linkedin.com/in/faisalmehmood1122/)
# [](https://twitter.com/FMGillani)
# [](https://github.com/faisalmehmood2013)
# [](https://medium.com/@shahfaisal1122)
# [](https://www.kaggle.com/faisalmehmood2022)
# [](https://www.youtube.com/channel/UCcAduSDM92_Jk05ZZXnQhfQ)
# [](https://www.tiktok.com/@faisalgillani1070)
# [](https://www.quora.com/profile/Faisal-Gillani-6)
# # **Step 1: Import the Libraries**
# Basic libraries
import pandas as pd
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
import plotly.express as px
# Statistics library
from scipy import stats
import warnings
warnings.filterwarnings("ignore")
# Other libraries
import missingno as msno
# To display all columns, we can use the pandas function pd.set_option()
pd.set_option("display.max_columns", None)
# # **Step 2: Load the Data**
df = pd.read_csv(
"/kaggle/input/pakistans-largest-ecommerce-dataset/Pakistan Largest Ecommerce Dataset.csv",
low_memory=False,
)
# # **Step 3: Explore the Data**
# ## **Understanding the Data**
df.head()
df.tail()
df.info()
df.shape
df.columns
df.dtypes
# # **Observation 1:**
# * The dataset consists of **1048575 rows** and **26 columns**.
# * There are **5 columns** and **464051 rows** that contain no data and should be removed, leaving **21 columns** and **584524 rows**.
# * Out of the 26 columns, **13 columns contain numerical data** and **13 columns contain categorical data**.
# * There are some columns with incorrect data types that need to be corrected.
# * **26 Columns are:** item_id, status, created_at, sku, price, qty_ordered, grand_total, increment_id, category_name_1, sales_commission_code, discount_amount, payment_method, Working Date, BI Status, MV, Year, Month, Customer Since, M-Y, FY, Customer ID, Unnamed: 21, Unnamed: 22, Unnamed: 23, Unnamed: 24, Unnamed: 25
# # **Descriptive Statistics**
# **To describe numerical data in a dataset**
with pd.option_context("float_format", "{:.2f}".format):
display(df.describe())
# **To describe categorical data in a dataset**
df.describe(include="object")
# # **Observation 2:**
# * The **maximum price in the dataset is 1012625.90** and the **highest quantity ordered was 1000**.
# * Out of 584509 **orders, 233685 were completed** successfully.
# * The majority of items were purchased in the **Mobile and Tablets category**.
# * Most orders were placed with **Cash on Delivery (COD)** as the payment method.
# * The highest number of purchases occurred on **25-11-2016**.
# # **Step 4: Identifying and Handling Missing Values**
df.drop(
columns=["Unnamed: 25", "Unnamed: 24", "Unnamed: 23", "Unnamed: 22", "Unnamed: 21"],
inplace=True,
)
df.dropna(inplace=True, how="all")
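# **Alternative to listing the empty columns by hand — a sketch: drop every column that is entirely null (a no-op at this point, since the Unnamed columns were already removed above).**
df = df.dropna(axis=1, how="all")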
# **Let's see the missing value in the data**
df.isnull().sum().sort_values(ascending=False)
# **Let's plot the missing values**
msno.matrix(df, sparkline=False)
plt.rcParams["figure.figsize"] = (20, 6)
sns.heatmap(df.isnull(), yticklabels=False, cbar=False, cmap="viridis")
plt.title("Missing Null Values")
# **Let's see the percentage of missing values in the data**
missing_percentage = df.isnull().sum().sort_values(ascending=False) / len(df) * 100
missing_percentage
# **Let's plot the percentage of missing values**
missing_percentage = missing_percentage[missing_percentage != 0]
plt.rcParams["figure.figsize"] = (20, 6)
missing_percentage.plot(kind="bar", align="center")
plt.title("Missing Percentage of Null Values")
# ## Duplicated Values
# **Checking any duplicates in the item_id**
duplicated = df["item_id"].duplicated().any()
duplicated
# # **Observation 3:**
# * The column with the highest percentage of null values is **sales_commission_code**. This column may not be very helpful, so we can consider dropping it from the dataset.
# * For columns with a **small number of null values** such as **SKU, Customer ID, and Customer Since**, we can simply drop the corresponding rows.
# * Since **category_name_1 and status are important columns**, it's better to fill in the null values using appropriate methods such as imputation or interpolation.
# * **No duplicate values** found in the dataset.
# **Dropping and Filling the Null Values**
df.drop(columns=["sales_commission_code"], inplace=True)
df.dropna(subset=["sku", "Customer ID", "Customer Since"], inplace=True)
df["status"].fillna(df["status"].mode()[0], inplace=True)
df["category_name_1"].fillna(df["category_name_1"].mode()[0], inplace=True)
df.isnull().sum()
# # **Step 5: Understanding the Variables:**
# **As we can see from the above, some columns in the dataset are not in the correct data type. Therefore, we need to perform casting to correct the data types of these columns.**
# Convert the datatypes to string and int
df["Customer ID"] = df["Customer ID"].astype(str)
df["item_id"] = df["item_id"].astype(str)
df["qty_ordered"] = df["qty_ordered"].astype(int)
df["Year"] = df["Year"].astype(int)
df["Month"] = df["Month"].astype(int)
# Convert the datatype to datetime
df["created_at"] = pd.to_datetime(df["created_at"])
# Rename columns
df.rename(
columns={
"category_name_1": "category_name",
"created_at": "order_date",
"Customer ID": "customer_id",
"Customer Since": "customer_since",
"Year": "year",
"Month": "month",
},
inplace=True,
)
# Remove the minus sign from discount_amount by taking the absolute value of the column
df["discount_amount"] = df["discount_amount"].abs()
df.info()
# **Let's explore the numerical columns**
for col in df.describe().columns:
print(f"Column Name: ", col)
print(f"Maximum Value: ", df[col].max())
print(f"Unique Values:\n", df[col].unique())
print(f"Unique Values Counts:\n", df[col].value_counts())
print("-" * 100)
# **Let's explore the categorical columns**
for col in df.describe(include="object").columns:
print(f"Column Name: ", col)
print(f"Number of frequencies: ", df[col].mode()[0])
print(f"Unique Values:\n", df[col].unique())
print(f"Unique Values Counts:\n", df[col].value_counts())
print("-" * 100)
# # **Observation 4:**
# * The majority of products in the dataset fall within the **price range of 1 to 1000 Rupees**.
# * Most customers ordered a **single quantity**.
# * Most purchases were made **in November, May, March, and August**.
# * **Order Status are:** complete, canceled, order_refunded, received, refund, closed, fraud, holded, exchange, pending_paypal, paid, N, cod, pending, processing, payment_review
# * **Category Names are:** Mobiles & Tablets, Men's Fashion, Women's Fashion, Appliances, Superstore, Beauty & Grooming, Soghaat, Others, Home & Living, Entertainment, Health & Sports, Kids & Baby, Computing, N, School & Education, Books
# * **Payment methods are:** cod, Payaxis, Easypay, jazzwallet, easypay_voucher, bankalfalah, jazzvoucher, Easypay_MA, customercredit, apg, ublcreditcard, cashatdoorstep, mcblite, mygateway, internetbanking, productcredit, marketingexpense, financesettlement,
# # **Step No 06: Data preprocessing**
# **Let's see the status columns in the dataset**
# * In this dataset, the "complete," "received," "closed," "COD," and "paid" statuses indicate completed orders.
# * In this dataset, the "canceled," "fraud," "holded," and "pending_paypal," statuses indicate cancelled orders.
# * In this dataset, the statuses "order_refunded," "refund," "exchange," "N," "pending," "processing," and "payment_review" represent Refund order statuses.
df["order_status"] = "Refund"
df.loc[
(df["status"] == "complete")
| (df["status"] == "received")
| (df["status"] == "closed")
| (df["status"] == "cod")
| (df["status"] == "paid"),
"order_status",
] = "Completed"
df.loc[
(df["status"] == "canceled")
| (df["status"] == "fraud")
| (df["status"] == "holded")
| (df["status"] == "pending_paypal"),
"order_status",
] = "Cancelled"
df["order_status"].value_counts()
# **Let's see the price column in the dataset**
df["price_range"] = "No price mention"
df.loc[
(df["price"] > 0) & (df["price"] <= 1000), "price_range"
] = "Less than 1000 Rupees"
df.loc[
(df["price"] > 1000) & (df["price"] <= 5000), "price_range"
] = "Between 1001 to 5000 Rupees"
df.loc[
(df["price"] > 5000) & (df["price"] <= 10000), "price_range"
] = "Between 5001 to 10000 Rupees"
df.loc[
(df["price"] > 10000) & (df["price"] <= 100000), "price_range"
] = "Between 10001 to 100k Rupees"
df.loc[(df["price"] > 100000), "price_range"] = "More than 100k"
df["price_range"].value_counts(normalize=True) * 100
# * There are 2215 missing values in the price column, which are represented as zeros.
# * The most common price range is "Less than 1000 Rupees", i.e., the majority of products are priced at 1000 Rupees or below.
# **Let's see the grand_total column in the dataset**
# * The grand_total column is calculated by multiplying the number of quantities ordered by the price and then subtracting the discount amount. However, in this dataset, it appears that the grand_total values were not calculated using this formula.
df["before_discount_total_amount"] = (df["qty_ordered"] * df["price"]).astype(float)
df["after_discount_total_amount"] = (
(df["qty_ordered"] * df["price"]) - df["discount_amount"]
).astype(float)
# **Create a new dataset after performing preprocessing steps.**
new_df = df[
[
"customer_id",
"order_date",
"year",
"month",
"category_name",
"qty_ordered",
"price",
"before_discount_total_amount",
"discount_amount",
"after_discount_total_amount",
"order_status",
"payment_method",
"price_range",
]
]
new_df.head(5)
# # **Questions and Answers:**
# * What is the best-selling category?
# * Visualize payment method and order status frequency
# **Q: What is the best-selling category?**
# **Let's explore the top 10 best-selling categories in the dataset.**
top_category = (
new_df["category_name"]
.value_counts()
.reset_index()
.rename(columns={"category_name": "count", "index": "category_name"})
)
top_10_category = top_category.head(10)
top_10_category
# **Let's create a bar plot of the top 10 categories in the dataset.**
plt.figure(figsize=(14, 7))
plt.xticks(rotation=60)
plt.xlabel("Category")
plt.ylabel("Category Count")
plt.title("Top 10 Categories")
sns.barplot(x=top_10_category["category_name"], y=top_10_category["count"])
# **Let's explore the percentage of best-selling categories in the dataset.**
top_category_percentage = (
(new_df["category_name"].value_counts(normalize=True) * 100)
.reset_index()
.rename(columns={"category_name": "count", "index": "category_name"})
)
top_category_percentage
# **Let's create a bar plot to visualize the percentage of best-selling categories in the dataset.**
plt.figure(figsize=(14, 7))
plt.xticks(rotation=60)
plt.xlabel("Category")
plt.ylabel("Category Count")
plt.title("Percentage of Best-Selling Categories")
sns.barplot(
x=top_category_percentage["category_name"], y=top_category_percentage["count"]
)
# **Let's examine the top 10 categories in terms of value before the discount amount.**
top_10_categories_by_value = (
new_df.groupby(["category_name"])
.sum()[["before_discount_total_amount"]]
.sort_values(by="before_discount_total_amount", ascending=False)
.head(10)
)
with pd.option_context("float_format", "{:.2f}".format):
display(top_10_categories_by_value)
# **Let's create a bar plot to visualize the best-selling categories by their value in the dataset.**
plt.figure(figsize=(14, 7))
plt.xticks(rotation=60)
plt.title("Top 10 Best-Selling Categories by their Value")
sns.barplot(
x=top_10_categories_by_value.index,
y=top_10_categories_by_value["before_discount_total_amount"],
)
# **Let's analyze the distribution of Cancelled, Completed, and Refund orders across different categories in the dataset.**
category_order_status = pd.crosstab(
new_df["category_name"], new_df["order_status"]
).sort_values(by="Completed", ascending=False)
category_order_status
# **Let's create a plot to visualize the distribution of Cancelled, Completed, and Refund orders across different categories in the dataset.**
category_order_status.plot(kind="bar", figsize=(14, 7))
plt.xlabel("Category")
plt.ylabel("Count")
plt.title("Best Selling Category with order status")
# **Let's analyze the percentage distribution of Cancelled, Completed, and Refund orders across different categories in the dataset.**
category_order_status_percentage = (
pd.crosstab(new_df["category_name"], new_df["order_status"])
.apply(lambda x: round(x / x.sum() * 100, 1), axis=1)
.sort_values(by="Completed", ascending=False)
)
category_order_status_percentage
# **Let's plot the percentage distribution of Cancelled, Completed, and Refund orders across different categories in the dataset.**
category_order_status_percentage.plot(kind="bar", figsize=(14, 7))
plt.xlabel("Category")
plt.ylabel("Percentage")
plt.title("Percentage of Best Selling Category with order status")
# **Let's examine the percentage of completed orders based on the price range in the dataset.**
completed_price_range = new_df.loc[
(
(new_df["price_range"] == "Less than 1000 Rupees")
| (new_df["price_range"] == "Between 1001 to 5000 Rupees")
| (new_df["price_range"] == "Between 10001 to 100k Rupees")
| (new_df["price_range"] == "Between 5001 to 10000 Rupees")
| (new_df["price_range"] == "No price mention")
| (new_df["price_range"] == "More than 100k")
)
& (new_df["order_status"] == "Completed")
]
(
completed_price_range["price_range"].value_counts()
/ new_df["price_range"].value_counts()
* 100
).sort_values(ascending=False).plot.bar(
figsize=(14, 7),
color="g",
title="Percentage of Completed orders based on Price Range",
)
# completed_price_range.describe(include="object")
# completed_price_range.describe()
# **Let's examine the percentage of cancelled orders in the price range columns in the dataset.**
cancelled_price_range = new_df.loc[
(
(new_df["price_range"] == "Less than 1000 Rupees")
| (new_df["price_range"] == "Between 1001 to 5000 Rupees")
| (new_df["price_range"] == "Between 10001 to 100k Rupees")
| (new_df["price_range"] == "Between 5001 to 10000 Rupees")
| (new_df["price_range"] == "No price mention")
| (new_df["price_range"] == "More than 100k")
)
& (new_df["order_status"] == "Cancelled")
]
(
cancelled_price_range["price_range"].value_counts()
/ new_df["price_range"].value_counts()
* 100
).sort_values(ascending=False).plot.bar(
figsize=(14, 7),
color="r",
title="Percentage of Cancelled orders based on Price Range",
)
# cancelled_price_range.describe(include="object")
# cancelled_price_range.describe()
# **Let's examine the percentage of refunded orders in the price range columns in the dataset.**
refund_price_range = new_df.loc[
(
(new_df["price_range"] == "Less than 1000 Rupees")
| (new_df["price_range"] == "Between 1001 to 5000 Rupees")
| (new_df["price_range"] == "Between 10001 to 100k Rupees")
| (new_df["price_range"] == "Between 5001 to 10000 Rupees")
| (new_df["price_range"] == "No price mention")
| (new_df["price_range"] == "More than 100k")
)
& (new_df["order_status"] == "Refund")
]
(
refund_price_range["price_range"].value_counts()
/ new_df["price_range"].value_counts()
* 100
).sort_values(ascending=False).plot.bar(
figsize=(14, 7),
color="black",
title="Percentage of Refunded orders based on Price Range",
)
# refund_price_range.describe(include="object")
# refund_price_range.describe()
# **Let's find the date with the highest number of orders.**
new_df["order_date"].mode()
# **Let's create a bar plot showing the top 10 dates with the highest number of orders.**
new_df["order_date"].value_counts().head(10).plot.bar(
figsize=(14, 7), color="g", title="Top 10 Dates with the Highest number of Orders"
)
# **Let's find the month with the highest number of orders.**
new_df["month"].mode()
# **Let's determine the months with the highest number of orders.**
new_df["month"].value_counts().plot.bar(
figsize=(14, 7),
color="g",
title="Determine the Months with the Highest number of Orders",
)
# **Let's examine the price range of all categories in the dataset.**
price_range = (
new_df.groupby(["category_name", "price_range"])["order_status"]
.agg(["count"])
.reset_index()
.sort_values(by=["count", "price_range"], ascending=False)
)
price_range.head()
# **Let's create a plot to visualize the price range of all categories in the dataset.**
plt.figure(figsize=(14, 7))
plt.xticks(rotation=60)
plt.title("Categories with Price Range")
sns.barplot(x="category_name", y="count", data=price_range, hue="price_range")
# **Let's analyze the price range across different order statuses in the dataset.**
price_range_order_status = (
new_df.groupby(["order_status", "price_range"])["order_status"]
.agg(["count"])
.reset_index()
.sort_values(by=["count", "price_range"], ascending=False)
)
price_range_order_status.head()
# **Let's create a plot to visualize the price range across different order statuses in the dataset.**
plt.figure(figsize=(14, 7))
plt.xticks(rotation=60)
plt.title("Order Status with Price Range")
sns.barplot(
x="price_range", y="count", data=price_range_order_status, hue="order_status"
)
# # **Observation Q1:** What is the Best Selling Category?
# * The **Mobiles & Tablets category has a high number of orders, but also many cancelled orders**. In contrast, the **Men's Fashion category has a high number of completed orders, especially in the price range below 1000 Rupees, making it the best-selling category**.
# * Across all categories, most **completed orders fall within price ranges below 5000 Rupees**.
# * The number of cancelled orders in all categories with a price range above 10000 is higher, possibly **because Mobiles & Tablets, which have a price range above 10000, contribute significantly to the overall cancelled orders.**
# * Most of the orders are placed in **November due to the sales/discount offered to customers.**
# **Q: Visualize payment method and order status frequency**
# **Let's analyze the payment method across different order statuses in the dataset.**
payment_method = (
new_df.groupby(["payment_method", "order_status"])["order_status"]
.agg(["count"])
.reset_index()
.sort_values(by=["count", "order_status"], ascending=False)
)
payment_method.head()
# **Let's create a plot of payment method across different order statuses in the dataset.**
plt.figure(figsize=(14, 7))
plt.xticks(rotation=60)
plt.title("Payment Methods with Order Status")
sns.barplot(x="payment_method", y="count", data=payment_method, hue="order_status")
|
[{"pakistans-largest-ecommerce-dataset/Pakistan Largest Ecommerce Dataset.csv": {"column_names": "[\"item_id\", \"status\", \"created_at\", \"sku\", \"price\", \"qty_ordered\", \"grand_total\", \"increment_id\", \"category_name_1\", \"sales_commission_code\", \"discount_amount\", \"payment_method\", \"Working Date\", \"BI Status\", \" MV \", \"Year\", \"Month\", \"Customer Since\", \"M-Y\", \"FY\", \"Customer ID\", \"Unnamed: 21\", \"Unnamed: 22\", \"Unnamed: 23\", \"Unnamed: 24\", \"Unnamed: 25\"]", "column_data_types": "{\"item_id\": \"float64\", \"status\": \"object\", \"created_at\": \"object\", \"sku\": \"object\", \"price\": \"float64\", \"qty_ordered\": \"float64\", \"grand_total\": \"float64\", \"increment_id\": \"object\", \"category_name_1\": \"object\", \"sales_commission_code\": \"object\", \"discount_amount\": \"float64\", \"payment_method\": \"object\", \"Working Date\": \"object\", \"BI Status\": \"object\", \" MV \": \"object\", \"Year\": \"float64\", \"Month\": \"float64\", \"Customer Since\": \"object\", \"M-Y\": \"object\", \"FY\": \"object\", \"Customer ID\": \"float64\", \"Unnamed: 21\": \"float64\", \"Unnamed: 22\": \"float64\", \"Unnamed: 23\": \"float64\", \"Unnamed: 24\": \"float64\", \"Unnamed: 25\": \"float64\"}", "info": "<class 'pandas.core.frame.DataFrame'>\nRangeIndex: 1048575 entries, 0 to 1048574\nData columns (total 26 columns):\n # Column Non-Null Count Dtype \n--- ------ -------------- ----- \n 0 item_id 584524 non-null float64\n 1 status 584509 non-null object \n 2 created_at 584524 non-null object \n 3 sku 584504 non-null object \n 4 price 584524 non-null float64\n 5 qty_ordered 584524 non-null float64\n 6 grand_total 584524 non-null float64\n 7 increment_id 584524 non-null object \n 8 category_name_1 584360 non-null object \n 9 sales_commission_code 447349 non-null object \n 10 discount_amount 584524 non-null float64\n 11 payment_method 584524 non-null object \n 12 Working Date 584524 non-null object \n 13 BI Status 584524 non-null object \n 14 MV 584524 non-null object \n 15 Year 584524 non-null float64\n 16 Month 584524 non-null float64\n 17 Customer Since 584513 non-null object \n 18 M-Y 584524 non-null object \n 19 FY 584524 non-null object \n 20 Customer ID 584513 non-null float64\n 21 Unnamed: 21 0 non-null float64\n 22 Unnamed: 22 0 non-null float64\n 23 Unnamed: 23 0 non-null float64\n 24 Unnamed: 24 0 non-null float64\n 25 Unnamed: 25 0 non-null float64\ndtypes: float64(13), object(13)\nmemory usage: 208.0+ MB\n", "summary": "{\"item_id\": {\"count\": 584524.0, \"mean\": 565667.0742176541, \"std\": 200121.17364816953, \"min\": 211131.0, \"25%\": 395000.75, \"50%\": 568424.5, \"75%\": 739106.25, \"max\": 905208.0}, \"price\": {\"count\": 584524.0, \"mean\": 6348.7475310337995, \"std\": 14949.269515296899, \"min\": 0.0, \"25%\": 360.0, \"50%\": 899.0, \"75%\": 4070.0, \"max\": 1012625.9}, \"qty_ordered\": {\"count\": 584524.0, \"mean\": 1.2963881722564001, \"std\": 3.9960610758179334, \"min\": 1.0, \"25%\": 1.0, \"50%\": 1.0, \"75%\": 1.0, \"max\": 1000.0}, \"grand_total\": {\"count\": 584524.0, \"mean\": 8530.618570950894, \"std\": 61320.81462544805, \"min\": -1594.0, \"25%\": 945.0, \"50%\": 1960.4, \"75%\": 6999.0, \"max\": 17888000.0}, \"discount_amount\": {\"count\": 584524.0, \"mean\": 499.4927751936617, \"std\": 1506.9430462490568, \"min\": -599.5, \"25%\": 0.0, \"50%\": 0.0, \"75%\": 160.5, \"max\": 90300.0}, \"Year\": {\"count\": 584524.0, \"mean\": 2017.0441145273762, \"std\": 0.707354684490064, \"min\": 2016.0, \"25%\": 2017.0, 
\"50%\": 2017.0, \"75%\": 2018.0, \"max\": 2018.0}, \"Month\": {\"count\": 584524.0, \"mean\": 7.167654364919148, \"std\": 3.4863047855274605, \"min\": 1.0, \"25%\": 4.0, \"50%\": 7.0, \"75%\": 11.0, \"max\": 12.0}, \"Customer ID\": {\"count\": 584513.0, \"mean\": 45790.51196466118, \"std\": 34414.96238932246, \"min\": 1.0, \"25%\": 13516.0, \"50%\": 42856.0, \"75%\": 73536.0, \"max\": 115326.0}, \"Unnamed: 21\": {\"count\": 0.0, \"mean\": NaN, \"std\": NaN, \"min\": NaN, \"25%\": NaN, \"50%\": NaN, \"75%\": NaN, \"max\": NaN}, \"Unnamed: 22\": {\"count\": 0.0, \"mean\": NaN, \"std\": NaN, \"min\": NaN, \"25%\": NaN, \"50%\": NaN, \"75%\": NaN, \"max\": NaN}, \"Unnamed: 23\": {\"count\": 0.0, \"mean\": NaN, \"std\": NaN, \"min\": NaN, \"25%\": NaN, \"50%\": NaN, \"75%\": NaN, \"max\": NaN}, \"Unnamed: 24\": {\"count\": 0.0, \"mean\": NaN, \"std\": NaN, \"min\": NaN, \"25%\": NaN, \"50%\": NaN, \"75%\": NaN, \"max\": NaN}, \"Unnamed: 25\": {\"count\": 0.0, \"mean\": NaN, \"std\": NaN, \"min\": NaN, \"25%\": NaN, \"50%\": NaN, \"75%\": NaN, \"max\": NaN}}", "examples": "{\"item_id\":{\"0\":211131.0,\"1\":211133.0,\"2\":211134.0,\"3\":211135.0},\"status\":{\"0\":\"complete\",\"1\":\"canceled\",\"2\":\"canceled\",\"3\":\"complete\"},\"created_at\":{\"0\":\"7\\/1\\/2016\",\"1\":\"7\\/1\\/2016\",\"2\":\"7\\/1\\/2016\",\"3\":\"7\\/1\\/2016\"},\"sku\":{\"0\":\"kreations_YI 06-L\",\"1\":\"kcc_Buy 2 Frey Air Freshener & Get 1 Kasual Body Spray Free\",\"2\":\"Ego_UP0017-999-MR0\",\"3\":\"kcc_krone deal\"},\"price\":{\"0\":1950.0,\"1\":240.0,\"2\":2450.0,\"3\":360.0},\"qty_ordered\":{\"0\":1.0,\"1\":1.0,\"2\":1.0,\"3\":1.0},\"grand_total\":{\"0\":1950.0,\"1\":240.0,\"2\":2450.0,\"3\":60.0},\"increment_id\":{\"0\":100147443,\"1\":100147444,\"2\":100147445,\"3\":100147446},\"category_name_1\":{\"0\":\"Women's Fashion\",\"1\":\"Beauty & Grooming\",\"2\":\"Women's Fashion\",\"3\":\"Beauty & Grooming\"},\"sales_commission_code\":{\"0\":\"\\\\N\",\"1\":\"\\\\N\",\"2\":\"\\\\N\",\"3\":\"R-FSD-52352\"},\"discount_amount\":{\"0\":0.0,\"1\":0.0,\"2\":0.0,\"3\":300.0},\"payment_method\":{\"0\":\"cod\",\"1\":\"cod\",\"2\":\"cod\",\"3\":\"cod\"},\"Working Date\":{\"0\":\"7\\/1\\/2016\",\"1\":\"7\\/1\\/2016\",\"2\":\"7\\/1\\/2016\",\"3\":\"7\\/1\\/2016\"},\"BI Status\":{\"0\":\"#REF!\",\"1\":\"Gross\",\"2\":\"Gross\",\"3\":\"Net\"},\" MV \":{\"0\":\" 1,950 \",\"1\":\" 240 \",\"2\":\" 2,450 \",\"3\":\" 360 \"},\"Year\":{\"0\":2016.0,\"1\":2016.0,\"2\":2016.0,\"3\":2016.0},\"Month\":{\"0\":7.0,\"1\":7.0,\"2\":7.0,\"3\":7.0},\"Customer Since\":{\"0\":\"2016-7\",\"1\":\"2016-7\",\"2\":\"2016-7\",\"3\":\"2016-7\"},\"M-Y\":{\"0\":\"7-2016\",\"1\":\"7-2016\",\"2\":\"7-2016\",\"3\":\"7-2016\"},\"FY\":{\"0\":\"FY17\",\"1\":\"FY17\",\"2\":\"FY17\",\"3\":\"FY17\"},\"Customer ID\":{\"0\":1.0,\"1\":2.0,\"2\":3.0,\"3\":4.0},\"Unnamed: 21\":{\"0\":null,\"1\":null,\"2\":null,\"3\":null},\"Unnamed: 22\":{\"0\":null,\"1\":null,\"2\":null,\"3\":null},\"Unnamed: 23\":{\"0\":null,\"1\":null,\"2\":null,\"3\":null},\"Unnamed: 24\":{\"0\":null,\"1\":null,\"2\":null,\"3\":null},\"Unnamed: 25\":{\"0\":null,\"1\":null,\"2\":null,\"3\":null}}"}}]
| true | 1 |
<start_data_description><data_path>pakistans-largest-ecommerce-dataset/Pakistan Largest Ecommerce Dataset.csv:
<column_names>
['item_id', 'status', 'created_at', 'sku', 'price', 'qty_ordered', 'grand_total', 'increment_id', 'category_name_1', 'sales_commission_code', 'discount_amount', 'payment_method', 'Working Date', 'BI Status', ' MV ', 'Year', 'Month', 'Customer Since', 'M-Y', 'FY', 'Customer ID', 'Unnamed: 21', 'Unnamed: 22', 'Unnamed: 23', 'Unnamed: 24', 'Unnamed: 25']
<column_types>
{'item_id': 'float64', 'status': 'object', 'created_at': 'object', 'sku': 'object', 'price': 'float64', 'qty_ordered': 'float64', 'grand_total': 'float64', 'increment_id': 'object', 'category_name_1': 'object', 'sales_commission_code': 'object', 'discount_amount': 'float64', 'payment_method': 'object', 'Working Date': 'object', 'BI Status': 'object', ' MV ': 'object', 'Year': 'float64', 'Month': 'float64', 'Customer Since': 'object', 'M-Y': 'object', 'FY': 'object', 'Customer ID': 'float64', 'Unnamed: 21': 'float64', 'Unnamed: 22': 'float64', 'Unnamed: 23': 'float64', 'Unnamed: 24': 'float64', 'Unnamed: 25': 'float64'}
<dataframe_Summary>
{'item_id': {'count': 584524.0, 'mean': 565667.0742176541, 'std': 200121.17364816953, 'min': 211131.0, '25%': 395000.75, '50%': 568424.5, '75%': 739106.25, 'max': 905208.0}, 'price': {'count': 584524.0, 'mean': 6348.7475310337995, 'std': 14949.269515296899, 'min': 0.0, '25%': 360.0, '50%': 899.0, '75%': 4070.0, 'max': 1012625.9}, 'qty_ordered': {'count': 584524.0, 'mean': 1.2963881722564001, 'std': 3.9960610758179334, 'min': 1.0, '25%': 1.0, '50%': 1.0, '75%': 1.0, 'max': 1000.0}, 'grand_total': {'count': 584524.0, 'mean': 8530.618570950894, 'std': 61320.81462544805, 'min': -1594.0, '25%': 945.0, '50%': 1960.4, '75%': 6999.0, 'max': 17888000.0}, 'discount_amount': {'count': 584524.0, 'mean': 499.4927751936617, 'std': 1506.9430462490568, 'min': -599.5, '25%': 0.0, '50%': 0.0, '75%': 160.5, 'max': 90300.0}, 'Year': {'count': 584524.0, 'mean': 2017.0441145273762, 'std': 0.707354684490064, 'min': 2016.0, '25%': 2017.0, '50%': 2017.0, '75%': 2018.0, 'max': 2018.0}, 'Month': {'count': 584524.0, 'mean': 7.167654364919148, 'std': 3.4863047855274605, 'min': 1.0, '25%': 4.0, '50%': 7.0, '75%': 11.0, 'max': 12.0}, 'Customer ID': {'count': 584513.0, 'mean': 45790.51196466118, 'std': 34414.96238932246, 'min': 1.0, '25%': 13516.0, '50%': 42856.0, '75%': 73536.0, 'max': 115326.0}, 'Unnamed: 21': {'count': 0.0, 'mean': nan, 'std': nan, 'min': nan, '25%': nan, '50%': nan, '75%': nan, 'max': nan}, 'Unnamed: 22': {'count': 0.0, 'mean': nan, 'std': nan, 'min': nan, '25%': nan, '50%': nan, '75%': nan, 'max': nan}, 'Unnamed: 23': {'count': 0.0, 'mean': nan, 'std': nan, 'min': nan, '25%': nan, '50%': nan, '75%': nan, 'max': nan}, 'Unnamed: 24': {'count': 0.0, 'mean': nan, 'std': nan, 'min': nan, '25%': nan, '50%': nan, '75%': nan, 'max': nan}, 'Unnamed: 25': {'count': 0.0, 'mean': nan, 'std': nan, 'min': nan, '25%': nan, '50%': nan, '75%': nan, 'max': nan}}
<dataframe_info>
RangeIndex: 1048575 entries, 0 to 1048574
Data columns (total 26 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 item_id 584524 non-null float64
1 status 584509 non-null object
2 created_at 584524 non-null object
3 sku 584504 non-null object
4 price 584524 non-null float64
5 qty_ordered 584524 non-null float64
6 grand_total 584524 non-null float64
7 increment_id 584524 non-null object
8 category_name_1 584360 non-null object
9 sales_commission_code 447349 non-null object
10 discount_amount 584524 non-null float64
11 payment_method 584524 non-null object
12 Working Date 584524 non-null object
13 BI Status 584524 non-null object
14 MV 584524 non-null object
15 Year 584524 non-null float64
16 Month 584524 non-null float64
17 Customer Since 584513 non-null object
18 M-Y 584524 non-null object
19 FY 584524 non-null object
20 Customer ID 584513 non-null float64
21 Unnamed: 21 0 non-null float64
22 Unnamed: 22 0 non-null float64
23 Unnamed: 23 0 non-null float64
24 Unnamed: 24 0 non-null float64
25 Unnamed: 25 0 non-null float64
dtypes: float64(13), object(13)
memory usage: 208.0+ MB
<some_examples>
{'item_id': {'0': 211131.0, '1': 211133.0, '2': 211134.0, '3': 211135.0}, 'status': {'0': 'complete', '1': 'canceled', '2': 'canceled', '3': 'complete'}, 'created_at': {'0': '7/1/2016', '1': '7/1/2016', '2': '7/1/2016', '3': '7/1/2016'}, 'sku': {'0': 'kreations_YI 06-L', '1': 'kcc_Buy 2 Frey Air Freshener & Get 1 Kasual Body Spray Free', '2': 'Ego_UP0017-999-MR0', '3': 'kcc_krone deal'}, 'price': {'0': 1950.0, '1': 240.0, '2': 2450.0, '3': 360.0}, 'qty_ordered': {'0': 1.0, '1': 1.0, '2': 1.0, '3': 1.0}, 'grand_total': {'0': 1950.0, '1': 240.0, '2': 2450.0, '3': 60.0}, 'increment_id': {'0': 100147443, '1': 100147444, '2': 100147445, '3': 100147446}, 'category_name_1': {'0': "Women's Fashion", '1': 'Beauty & Grooming', '2': "Women's Fashion", '3': 'Beauty & Grooming'}, 'sales_commission_code': {'0': '\\N', '1': '\\N', '2': '\\N', '3': 'R-FSD-52352'}, 'discount_amount': {'0': 0.0, '1': 0.0, '2': 0.0, '3': 300.0}, 'payment_method': {'0': 'cod', '1': 'cod', '2': 'cod', '3': 'cod'}, 'Working Date': {'0': '7/1/2016', '1': '7/1/2016', '2': '7/1/2016', '3': '7/1/2016'}, 'BI Status': {'0': '#REF!', '1': 'Gross', '2': 'Gross', '3': 'Net'}, ' MV ': {'0': ' 1,950 ', '1': ' 240 ', '2': ' 2,450 ', '3': ' 360 '}, 'Year': {'0': 2016.0, '1': 2016.0, '2': 2016.0, '3': 2016.0}, 'Month': {'0': 7.0, '1': 7.0, '2': 7.0, '3': 7.0}, 'Customer Since': {'0': '2016-7', '1': '2016-7', '2': '2016-7', '3': '2016-7'}, 'M-Y': {'0': '7-2016', '1': '7-2016', '2': '7-2016', '3': '7-2016'}, 'FY': {'0': 'FY17', '1': 'FY17', '2': 'FY17', '3': 'FY17'}, 'Customer ID': {'0': 1.0, '1': 2.0, '2': 3.0, '3': 4.0}, 'Unnamed: 21': {'0': None, '1': None, '2': None, '3': None}, 'Unnamed: 22': {'0': None, '1': None, '2': None, '3': None}, 'Unnamed: 23': {'0': None, '1': None, '2': None, '3': None}, 'Unnamed: 24': {'0': None, '1': None, '2': None, '3': None}, 'Unnamed: 25': {'0': None, '1': None, '2': None, '3': None}}
<end_description>
| 6,107 | 4 | 8,179 | 6,107 |
129690591
|
<jupyter_start><jupyter_text>netflix_shows
Kaggle dataset identifier: netflix-shows
<jupyter_script># # **Exploratory Data Analysis of Netflix Shows**
# 
# ## **Introduction**
# Welcome to the exploratory data analysis (EDA) of Netflix shows! In this notebook, we will dive into the dataset containing information about various shows available on Netflix. By performing EDA, we aim to gain insights, discover patterns, and uncover interesting trends within the data.
# Netflix has become one of the leading streaming platforms, offering a vast library of TV shows and movies across different genres. As a Netflix user or someone interested in the entertainment industry, this EDA will provide you with a better understanding of the shows available on the platform.
# ## **Data Loading and Preparation**
# In this section, we will import the necessary libraries for data manipulation and visualization, load the dataset into a Pandas DataFrame, and perform initial data cleaning and preprocessing steps. The dataset used for this analysis is titled "*Netflix Shows*"
# ### Import the required libraries for data manipulation and visualization
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
# ### Load the "Netflix Shows" dataset
df = pd.read_csv("/kaggle/input/netflix-shows/netflix_titles.csv")
# ### Data Understanding
df.columns
df.head()
df.dtypes
# ### Data Cleaning
# To calculate the total number of missing values in each column of the DataFrame, we can use the following code:
df.isna().sum()
# To handle these missing values, the following plan will be implemented:
# 1. For the column '**director**', which has **2634** missing values, we will delete the entire column as it contains a significant number of missing values.
# 2. For the columns '**cast**', '**country**', '**date_added**', '**rating**', and '**duration**', which have **825**, **831**, **10**, **4**, and **3** missing values respectively, we will delete the rows that contain missing values in these columns. By removing these rows, we can ensure that the analysis is based on complete and reliable data.
# #### Handling Missing Values
# Removing Director column
df = df.drop(columns=["director"])
df.isna().sum()
# Removing rows with null values in Duration, Rating, Date Added, Country and Cast
df = df.dropna(axis=0, subset=["duration"])
df = df.dropna(axis=0, subset=["rating"])
df = df.dropna(axis=0, subset=["date_added"])
df = df.dropna(axis=0, subset=["country"])
df = df.dropna(axis=0, subset=["cast"])
df.isna().sum()
# #### Converting 'date_added' Column
# To enhance the readability and consistency of the 'date_added' column in our dataset, we will be converting the existing format from "MMMM DD, YYYY" to the format of "YYYY-MM-DD". This change will allow for easier interpretation and standardize the date representation across the column.
df["date_added"] = pd.to_datetime(
df["date_added"].str.strip(), format="%B %d, %Y"
).dt.strftime("%Y-%m-%d")
df.head()
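# **A minimal optional sketch (the names below are illustrative, not from the original notebook): keeping a true datetime version of 'date_added' makes later time-based grouping easier; the strftime step above is only needed for display.**
date_added_dt = pd.to_datetime(df["date_added"], format="%Y-%m-%d")
date_added_dt.dt.year.value_counts().sort_index().head()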
# ## **Exploratory Data Analysis**
# ### Basic Data Exploration
# #### Total number of rows and columns in the dataset
df.shape
# #### Summary statistics of numerical variables (e.g., release year)
df.describe()
# #### Count of unique values in categorical variables (e.g., show types, countries)
show_types_count = df["type"].value_counts()
countries_count = df["country"].value_counts()
print("Count of unique values in 'type' column:")
print(show_types_count)
print("\nCount of unique values in 'country' column:")
print(countries_count)
# ***In the dataset, some rows in the 'country' column contain multiple countries separated by commas. To ensure consistency and facilitate analysis, we will modify these rows to include only the first country from the list.***
# Keep only the first country listed (vectorized equivalent of iterating over rows)
df["country"] = df["country"].str.split(",").str[0]
|
/fsx/loubna/kaggle_data/kaggle-code-data/data/0129/690/129690591.ipynb
|
netflix-shows
|
poojasomavanshi
|
[{"Id": 129690591, "ScriptId": 38564701, "ParentScriptVersionId": NaN, "ScriptLanguageId": 9, "AuthorUserId": 15085779, "CreationDate": "05/15/2023 19:21:28", "VersionNumber": 2.0, "Title": "Exploratory Data Analysis of Netflix Shows", "EvaluationDate": "05/15/2023", "IsChange": true, "TotalLines": 114.0, "LinesInsertedFromPrevious": 71.0, "LinesChangedFromPrevious": 0.0, "LinesUnchangedFromPrevious": 43.0, "LinesInsertedFromFork": NaN, "LinesDeletedFromFork": NaN, "LinesChangedFromFork": NaN, "LinesUnchangedFromFork": NaN, "TotalVotes": 0}]
|
[{"Id": 186015979, "KernelVersionId": 129690591, "SourceDatasetVersionId": 3187143}]
|
[{"Id": 3187143, "DatasetId": 1935486, "DatasourceVersionId": 3236748, "CreatorUserId": 9422867, "LicenseName": "Unknown", "CreationDate": "02/15/2022 09:15:13", "VersionNumber": 1.0, "Title": "netflix_shows", "Slug": "netflix-shows", "Subtitle": NaN, "Description": NaN, "VersionNotes": "Initial release", "TotalCompressedBytes": 0.0, "TotalUncompressedBytes": 0.0}]
|
[{"Id": 1935486, "CreatorUserId": 9422867, "OwnerUserId": 9422867.0, "OwnerOrganizationId": NaN, "CurrentDatasetVersionId": 3187143.0, "CurrentDatasourceVersionId": 3236748.0, "ForumId": 1959216, "Type": 2, "CreationDate": "02/15/2022 09:15:13", "LastActivityDate": "02/15/2022", "TotalViews": 903, "TotalDownloads": 26, "TotalVotes": 1, "TotalKernels": 8}]
|
[{"Id": 9422867, "UserName": "poojasomavanshi", "DisplayName": "Pooja Somavanshi", "RegisterDate": "01/17/2022", "PerformanceTier": 0}]
|
# # **Exploratory Data Analysis of Netflix Shows**
# 
# ## **Introduction**
# Welcome to the exploratory data analysis (EDA) of Netflix shows! In this notebook, we will dive into the dataset containing information about various shows available on Netflix. By performing EDA, we aim to gain insights, discover patterns, and uncover interesting trends within the data.
# Netflix has become one of the leading streaming platforms, offering a vast library of TV shows and movies across different genres. As a Netflix user or someone interested in the entertainment industry, this EDA will provide you with a better understanding of the shows available on the platform.
# ## **Data Loading and Preparation**
# In this section, we will import the necessary libraries for data manipulation and visualization, load the dataset into a Pandas DataFrame, and perform initial data cleaning and preprocessing steps. The dataset used for this analysis is titled "*Netflix Shows*"
# ### Import the required libraries for data manipulation and visualization
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
# ### Load the "Netflix Shows" dataset
df = pd.read_csv("/kaggle/input/netflix-shows/netflix_titles.csv")
# ### Data Understanding
df.columns
df.head()
df.dtypes
# ### Data Cleaning
# To calculate the total number of missing values in each column of the DataFrame, we can use the following code:
df.isna().sum()
# To handle these missing values, the following plan will be implemented:
# 1. For the column '**director**', which has **2634** missing values, we will delete the entire column as it contains a significant number of missing values.
# 2. For the columns '**cast**', '**country**', '**date_added**', '**rating**', and '**duration**', which have **825**, **831**, **10**, **4**, and **3** missing values respectively, we will delete the rows that contain missing values in these columns. By removing these rows, we can ensure that the analysis is based on complete and reliable data.
# #### Handling Missing Values
# Removing Director column
df = df.drop(columns=["director"])
df.isna().sum()
# Removing rows with null values in Duration, Rating, Date Added, Country and Cast
df = df.dropna(axis=0, subset=["duration"])
df = df.dropna(axis=0, subset=["rating"])
df = df.dropna(axis=0, subset=["date_added"])
df = df.dropna(axis=0, subset=["country"])
df = df.dropna(axis=0, subset=["cast"])
df.isna().sum()
# #### Converting 'date_added' Column
# To enhance the readability and consistency of the 'date_added' column in our dataset, we will be converting the existing format from "MMMM DD, YYYY" to the format of "YYYY-MM-DD". This change will allow for easier interpretation and standardize the date representation across the column.
df["date_added"] = pd.to_datetime(
df["date_added"].str.strip(), format="%B %d, %Y"
).dt.strftime("%Y-%m-%d")
df.head()
# ## **Exploratory Data Analysis**
# ### Basic Data Exploration
# #### Total number of rows and columns in the dataset
df.shape
# #### Summary statistics of numerical variables (e.g., release year)
df.describe()
# #### Count of unique values in categorical variables (e.g., show types, countries)
show_types_count = df["type"].value_counts()
countries_count = df["country"].value_counts()
print("Count of unique values in 'type' column:")
print(show_types_count)
print("\nCount of unique values in 'country' column:")
print(countries_count)
# ***In the dataset, some rows in the 'country' column contain multiple countries separated by commas. To ensure consistency and facilitate analysis, we will modify these rows to include only the first country from the list.***
# Keep only the first country listed (vectorized equivalent of iterating over rows)
df["country"] = df["country"].str.split(",").str[0]
| false | 1 | 1,053 | 0 | 1,073 | 1,053 |
||
129690592
|
import numpy as np # linear algebra
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)
# Input data files are available in the read-only "../input/" directory
# For example, running this (by clicking run or pressing Shift+Enter) will list all files under the input directory
import os
for dirname, _, filenames in os.walk("/kaggle/input"):
for filename in filenames:
print(os.path.join(dirname, filename))
# You can write up to 20GB to the current directory (/kaggle/working/) that gets preserved as output when you create a version using "Save & Run All"
# You can also write temporary files to /kaggle/temp/, but they won't be saved outside of the current session
train = pd.read_csv("/kaggle/input/playground-series-s3e14/train.csv")
test = pd.read_csv("/kaggle/input/playground-series-s3e14/test.csv")
train.head()
train.isnull().sum()
test.isnull().sum()
test.head()
test1 = test.iloc[:, 1:17]
test1.head()
X = train.iloc[:, 1:17]
Y = train["yield"]
X.head()
Y.head()
from sklearn.model_selection import train_test_split
X_train, X_test, Y_train, Y_test = train_test_split(X, Y, test_size=0.2, random_state=0)
X_train.shape, X_test.shape
from sklearn.linear_model import LinearRegression
lr = LinearRegression()
lr.fit(X_train, Y_train)
y_pred = lr.predict(X_test)
y_pred
Y_test.shape, y_pred.shape
X_test.shape, test1.shape
y_pred1 = lr.predict(test1)
y_pred1
sample = pd.read_csv("/kaggle/input/playground-series-s3e14/sample_submission.csv")
sample.head()
sample["yield"] = y_pred1
sample.head()
sample.to_csv("submission.csv", index=False)
from sklearn.metrics import mean_squared_error
mse = mean_squared_error(Y_test, y_pred)  # mean squared error on the hold-out split (lower is better)
print(mse)
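# The Playground Series S3E14 leaderboard is, to my knowledge, scored with mean absolute error,
# so it can be useful to track MAE on the hold-out split as well (a small optional check):
from sklearn.metrics import mean_absolute_error
mae = mean_absolute_error(Y_test, y_pred)
print(mae)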
train.columns
column_names = [
"clonesize",
"honeybee",
"bumbles",
"andrena",
"osmia",
"MaxOfUpperTRange",
"MinOfUpperTRange",
"AverageOfUpperTRange",
"MaxOfLowerTRange",
"MinOfLowerTRange",
"AverageOfLowerTRange",
"RainingDays",
"AverageRainingDays",
"fruitset",
"fruitmass",
"seeds",
]
import seaborn as sns
sns.pairplot(
train[
[
"clonesize",
"honeybee",
"bumbles",
"andrena",
"osmia",
"MaxOfUpperTRange",
"RainingDays",
"AverageRainingDays",
"fruitset",
"fruitmass",
"seeds",
]
],
diag_kind="kde",
)
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers
seeds = np.array(train["seeds"])
# single-feature baseline: normalize the "seeds" feature and fit a one-unit linear layer
seeds_normalizer = layers.Normalization(
    input_shape=[
        1,
    ],
    axis=None,
)
seeds_normalizer.adapt(seeds)
seeds_model = tf.keras.Sequential([seeds_normalizer, layers.Dense(units=1)])
seeds_model.summary()
seeds_model.predict(seeds[:10])
seeds_model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=0.1), loss="mean_absolute_error"
)
history = seeds_model.fit(
    X_train["seeds"],
    Y_train,
    epochs=100,
    # Suppress logging.
    verbose=0,
    # Calculate validation results on 20% of the training data.
    validation_split=0.2,
)
test_results = {}
test_results["seeds_model"] = seeds_model.evaluate(
    X_test["seeds"], Y_test, verbose=0
)
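# A quick sanity check on the single-feature Keras baseline: plot the training and
# validation loss curves stored in `history` (sketch; assumes matplotlib is available).
import matplotlib.pyplot as plt
plt.plot(history.history["loss"], label="training loss")
plt.plot(history.history["val_loss"], label="validation loss")
plt.xlabel("epoch")
plt.ylabel("mean absolute error")
plt.legend()
plt.show()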
|
/fsx/loubna/kaggle_data/kaggle-code-data/data/0129/690/129690592.ipynb
| null | null |
[{"Id": 129690592, "ScriptId": 38563657, "ParentScriptVersionId": NaN, "ScriptLanguageId": 9, "AuthorUserId": 7351911, "CreationDate": "05/15/2023 19:21:30", "VersionNumber": 3.0, "Title": "Simple Linear Regression PS3E14_Prediction of Wild", "EvaluationDate": "05/15/2023", "IsChange": true, "TotalLines": 125.0, "LinesInsertedFromPrevious": 54.0, "LinesChangedFromPrevious": 0.0, "LinesUnchangedFromPrevious": 71.0, "LinesInsertedFromFork": NaN, "LinesDeletedFromFork": NaN, "LinesChangedFromFork": NaN, "LinesUnchangedFromFork": NaN, "TotalVotes": 3}]
| null | null | null | null |
| false | 0 | 1,042 | 3 | 1,042 | 1,042 |
||
129690825
|
<jupyter_start><jupyter_text>Diabetes Dataset
### Context
This dataset is originally from the National Institute of Diabetes and Digestive and Kidney Diseases. The objective is to predict based on diagnostic measurements whether a patient has diabetes.
### Content
Several constraints were placed on the selection of these instances from a larger database. In particular, all patients here are females at least 21 years old of Pima Indian heritage.
- Pregnancies: Number of times pregnant
- Glucose: Plasma glucose concentration a 2 hours in an oral glucose tolerance test
- BloodPressure: Diastolic blood pressure (mm Hg)
- SkinThickness: Triceps skin fold thickness (mm)
- Insulin: 2-Hour serum insulin (mu U/ml)
- BMI: Body mass index (weight in kg/(height in m)^2)
- DiabetesPedigreeFunction: Diabetes pedigree function
- Age: Age (years)
- Outcome: Class variable (0 or 1)
#### Sources:
(a) Original owners: National Institute of Diabetes and Digestive and
Kidney Diseases
(b) Donor of database: Vincent Sigillito ([email protected])
Research Center, RMI Group Leader
Applied Physics Laboratory
The Johns Hopkins University
Johns Hopkins Road
Laurel, MD 20707
(301) 953-6231
(c) Date received: 9 May 1990
#### Past Usage:
 1. Smith, J. W., Everhart, J. E., Dickson, W. C., Knowler, W. C., & Johannes, R. S. (1988).
    Using the ADAP learning algorithm to forecast the onset of diabetes mellitus. In
    Proceedings of the Symposium on Computer Applications and Medical Care (pp. 261-265).
    IEEE Computer Society Press.
The diagnostic, binary-valued variable investigated is whether the
patient shows signs of diabetes according to World Health Organization
criteria (i.e., if the 2 hour post-load plasma glucose was at least
200 mg/dl at any survey examination or if found during routine medical
care). The population lives near Phoenix, Arizona, USA.
Results: Their ADAP algorithm makes a real-valued prediction between
0 and 1. This was transformed into a binary decision using a cutoff of
0.448. Using 576 training instances, the sensitivity and specificity
of their algorithm was 76% on the remaining 192 instances.
#### Relevant Information:
Several constraints were placed on the selection of these instances from
a larger database. In particular, all patients here are females at
least 21 years old of Pima Indian heritage. ADAP is an adaptive learning
routine that generates and executes digital analogs of perceptron-like
devices. It is a unique algorithm; see the paper for details.
#### Number of Instances: 768
#### Number of Attributes: 8 plus class
#### For Each Attribute: (all numeric-valued)
1. Number of times pregnant
2. Plasma glucose concentration a 2 hours in an oral glucose tolerance test
3. Diastolic blood pressure (mm Hg)
4. Triceps skin fold thickness (mm)
5. 2-Hour serum insulin (mu U/ml)
6. Body mass index (weight in kg/(height in m)^2)
7. Diabetes pedigree function
8. Age (years)
9. Class variable (0 or 1)
#### Missing Attribute Values: Yes
#### Class Distribution: (class value 1 is interpreted as "tested positive for
diabetes")
Kaggle dataset identifier: diabetes-data-set
<jupyter_script># # Predict Diabetes using with Machine Learnin
# Import Packages
import pandas as pd # Used to work with datasets
import numpy as np # Used to work with arrays
# Visualization
import matplotlib.pyplot as plt
import seaborn as sns
from sklearn.neighbors import (
KNeighborsClassifier,
) # Classifier implementing the k-nearest neighbors vote
from sklearn.tree import (
DecisionTreeClassifier,
) ## is a class capable of performing multiclass classification on a dataset.
from sklearn.svm import SVC
from sklearn.neural_network import (
MLPClassifier,
) # Iteratively trains because at each time step the partial derivatives of the loss function with respect to the model parameters are computed.
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score, KFold, GridSearchCV
import sklearn
from sklearn.preprocessing import (
    StandardScaler,
) ## Removes the mean and scales each feature to unit variance, independently per feature
from sklearn.model_selection import (
train_test_split,
) # divide the data into training data and test data
from sklearn.metrics import (
accuracy_score,
precision_score,
recall_score,
f1_score,
roc_auc_score,
confusion_matrix,
classification_report,
)
import warnings
warnings.filterwarnings("ignore")
# Data
# Pregnancies: Number of times pregnant
# Glucose: Plasma glucose concentration a 2 hours in an oral glucose tolerance test
# BloodPressure: Diastolic blood pressure (mm Hg)
# SkinThickness: Triceps skin fold thickness (mm)
# Insulin: 2-Hour serum insulin (mu U/ml)
# BMI: Body mass index (weight in kg/(height in m)^2)
# DiabetesPedigreeFunction: Diabetes pedigree function
# Age: Age (years)
# Outcome: Class variable (0 or 1)
# read data
diabetes = pd.read_csv("data/diabetes.csv")
# name columns
print(diabetes.columns)
diabetes.head() # Show part of the data
# shape data
print("dimension of data: {}".format(diabetes.shape))
# The diabetes dataset consists of 768 data points, with 9 features each:
## print dataset information
diabetes.info()
# check for null values
diabetes.isnull().sum()
## print summary statistics
diabetes.describe()
# "Outcome" is the target feature: 0 means no diabetes, 1 means diabetes is present
print(diabetes.groupby("Outcome").size())
# 500 patients are labelled 0 and 268 are labelled 1:
# create dataframes for Outcome = 0 and Outcome = 1
diabetes_0 = diabetes[diabetes["Outcome"] == 0]
diabetes_1 = diabetes[diabetes["Outcome"] == 1]
## Count of observations in each Outcome category, shown as bars
sns.countplot(data=diabetes, x="Outcome", label="Count")
# visualization count plot Pregnancies
sns.countplot(data=diabetes, x="Pregnancies", hue="Outcome")
plt.xlabel("Pregnancies")
plt.ylabel("count")
plt.show()
# histogram of the "Age" variable in the "Outcome=0" dataset
plt.hist(diabetes_0["Age"])
plt.xlabel("Age")
plt.ylabel("Count")
plt.show()
# histogram of the "Age" variable in the "Outcome=1" dataset
plt.hist(diabetes_1["Age"])
plt.xlabel("Age")
plt.ylabel("Count")
plt.show()
# histogram of the "Age"
sns.histplot(data=diabetes, x="Age", hue="Outcome")
plt.xlabel("Age")
plt.ylabel("Count")
plt.show()
diabetes_0["Age"].mean()
diabetes_1["Age"].mean()
# ###### Diabetic patients are older on average than healthy patients
# histogram of the "SkinThickness"
sns.histplot(data=diabetes, x="SkinThickness", hue="Outcome")
plt.xlabel("SkinThickness")
plt.ylabel("Count")
plt.show()
# average healthy people SkinThickness
diabetes_0["SkinThickness"].mean()
# max healthy people SkinThickness
diabetes_0["SkinThickness"].max()
# average diabetics SkinThickness
diabetes_1["SkinThickness"].mean()
# max diabetics SkinThickness
diabetes_1["SkinThickness"].max()
# ###### Mean skin-fold thickness is higher in diabetic patients than in healthy people
## histogram of the "BMI"
sns.histplot(data=diabetes, x="BMI", hue="Outcome")
plt.xlabel("BMI")
plt.ylabel("Count")
plt.show()
# average BMI of healthy people
diabetes_0["BMI"].mean()
# max BMI of healthy people
diabetes_0["BMI"].max()
# average BMI of diabetic patients
diabetes_1["BMI"].mean()
# max BMI of diabetic patients
diabetes_1["BMI"].max()
# ###### Average BMI is higher in diabetic patients than in healthy people
## histogram of the "Pregnancies"
sns.histplot(data=diabetes, x="Pregnancies", hue="Outcome")
plt.xlabel("Pregnancies")
plt.ylabel("Count")
plt.xticks([1, 3, 5, 7, 9])
plt.show()
# average number of pregnancies of healthy people
diabetes_0["Pregnancies"].mean()
# max number of pregnancies of healthy people
diabetes_0["Pregnancies"].max()
# average number of pregnancies of diabetic patients
diabetes_1["Pregnancies"].mean()
# max number of pregnancies of diabetic patients
diabetes_1["Pregnancies"].max()
# ###### The average number of pregnancies is higher among diabetic patients than among healthy people
# scatter plot of the relationship between Age and BMI
plt.scatter(diabetes["BMI"], diabetes["Age"])
plt.title("The relationship between Age and BMI")
plt.xlabel("BMI")
plt.ylabel("Age")
plt.show()
# to compare correlation between a target and other features in absolute
correlations = diabetes.corr()["Outcome"].drop("Outcome")
sorted_correlations = correlations.abs().sort_values(ascending=False)
sorted_correlations
# show bar to compare correlation between a target and other features in absolute
# to be organized and easy to compare
sns.barplot(x=sorted_correlations.index, y=sorted_correlations)
plt.xticks(rotation=90)
plt.xlabel("Features")
plt.ylabel("Absolute Correlation")
plt.show()
# ###### Drop outlier (noisy) data
# Calculate the interquartile range (IQR) for each column
Q1 = diabetes.quantile(0.25)
Q3 = diabetes.quantile(0.75)
IQR = Q3 - Q1
# Identify data outliers
outliers = diabetes[
((diabetes < (Q1 - 1.5 * IQR)) | (diabetes > (Q3 + 1.5 * IQR))).any(axis=1)
]
# drop the outliers from the data
train_clean = diabetes.drop(outliers.index)
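# Quick sanity check on how aggressive the IQR filter is (uses the objects defined just above):
print("rows flagged as outliers:", len(outliers))
print("shape before / after removal:", diabetes.shape, train_clean.shape)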
# visualizing the correlation between the variables in the diabetes
plt.figure(figsize=(15, 15))
sns.heatmap(np.abs(train_clean.corr()), annot=True)
plt.title("Correlation data ", fontsize=12)
# split data
X = train_clean.drop(columns=["Outcome"]) # data
y = train_clean["Outcome"] # target
# StandardScaler in dataframe mean=0 , Std=1
Stand = StandardScaler()
X = pd.DataFrame(Stand.fit_transform(X), columns=X.columns)
# function evaluation
def evaluate(model, X, target):
"""
Evaluate the performance of the model
Inputs:
Model ,
Data ,
Target .
Outputs:
Accuracy,
Precision
Recall
F1 Score
AUC-ROC
confusion matrix
"""
# split the data into training and testing
X_train, X_test, y_train, y_test = train_test_split(X, target, test_size=0.25)
model.fit(X_train, y_train) # fit model
y_pred = model.predict(X_test)
print("model: ", model)
# Accuracy
accuracy = accuracy_score(y_test, y_pred)
print("Accuracy:", accuracy)
# Precision
precision = precision_score(y_test, y_pred)
print("Precision:", precision)
# Recall
recall = recall_score(y_test, y_pred)
print("Recall:", recall)
# F1 Score
f1 = f1_score(y_test, y_pred)
print("F1 Score:", f1)
# AUC-ROC
auc_roc = roc_auc_score(y_test, y_pred)
print("AUC-ROC:", auc_roc)
# Confusion Matrix
confusion = confusion_matrix(y_test, y_pred)
print("Confusion Matrix:\n", confusion)
report = classification_report(y_test, y_pred)
print(report)
# ## K-Nearest Neighbours prediction
# k-Nearest Neighbours is arguably the simplest machine learning algorithm: building the model consists only of storing the training data set. To make a prediction for a new data point, the algorithm finds the closest points to it in the training data and uses their labels.
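# To make that idea concrete, here is a minimal from-scratch sketch of a k-NN prediction for a
# single query point (illustrative only; the scikit-learn classifier below is what is actually used):
def knn_predict_one(query, X_ref, y_ref, k=5):
    # Euclidean distance from the query to every stored training point (NumPy arrays expected)
    distances = np.sqrt(((X_ref - query) ** 2).sum(axis=1))
    # indices of the k closest training points
    nearest = np.argsort(distances)[:k]
    # majority vote over their labels (labels assumed to be non-negative integers, e.g. 0/1)
    return np.bincount(y_ref[nearest]).argmax()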
# First, let's see if we can confirm the relationship between model complexity and accuracy:
# split data into training and test sets
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=66)
training_accuracy = []
test_accuracy = []
# try n_neighbors from 1 to 10
neighbors_settings = range(1, 11)
for n_neighbors in neighbors_settings:
    # build the model
knn = KNeighborsClassifier(n_neighbors=n_neighbors)
knn.fit(X_train, y_train)
# record training set accuracy
training_accuracy.append(knn.score(X_train, y_train))
# record test set accuracy
test_accuracy.append(knn.score(X_test, y_test))
plt.plot(neighbors_settings, training_accuracy, label="training accuracy")
plt.plot(neighbors_settings, test_accuracy, label="test accuracy")
plt.ylabel("Accuracy")
plt.xlabel("n_neighbors")
plt.legend()
## We check accuracy of the k-nearest neighbors
evaluate(KNeighborsClassifier(n_neighbors=7), X, y)
# ## support vector machine
model = SVC()
model.fit(X_train, y_train)
y_pred = model.predict(X_test)
accuracy = accuracy_score(y_test, y_pred)
print("Accuracy:", accuracy)
param_grid = {
"C": [0.1, 1, 10, 100, 1000, 10000],
"gamma": [0.0001, 0.001, 0.01, 0.1, 1, 10, 100, 1000],
}
# Create an instance of the model
model = SVC()
# Create an instance of GridSearchCV
grid_search = GridSearchCV(estimator=model, param_grid=param_grid, cv=2)
# Fit the GridSearchCV
grid_search.fit(X_train, y_train)
# Get the best parameter
best_params = grid_search.best_params_
best_accuracy = grid_search.best_score_
# Evaluate the model with the best parameters
best_model = grid_search.best_estimator_
test_accuracy = best_model.score(X_test, y_test)
# Print the results
print("Best Parameters: ", best_params)
print("Best Accuracy: ", best_accuracy)
print("Test Accuracy: ", test_accuracy)
evaluate(grid_search, X, y)
#
# ## Decision tree classifier
tree = DecisionTreeClassifier(random_state=0)
tree.fit(X_train, y_train)
print(
"Accuracy on training set: {:.3f}".format(tree.score(X_train, y_train))
) # To calculate the accuracy of the training data
print(
"Accuracy on test set: {:.3f}".format(tree.score(X_test, y_test))
) # To calculate the accuracy of the test data
# The accuracy on the training set with the unconstrained decision tree is 100%,
# while the accuracy on the test set is much worse. This is an indication that the tree is
# overfitting: it does not generalize well to new data. Therefore, we need to apply pre-pruning
# to the tree.
# Now I will fit it again with max_depth = 3, which limits the depth of the tree.
# This leads to a lower accuracy on the training set, but improves the accuracy on the test set.
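# A small illustrative sweep (not part of the original tuning) that makes the depth/overfitting
# trade-off visible by comparing train and test accuracy for a few max_depth values:
for depth in [1, 2, 3, 5, 10, None]:
    dt = DecisionTreeClassifier(max_depth=depth, random_state=0)
    dt.fit(X_train, y_train)
    print(
        "max_depth={}: train accuracy={:.3f}, test accuracy={:.3f}".format(
            depth, dt.score(X_train, y_train), dt.score(X_test, y_test)
        )
    )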
## We check accuracy of the Decision tree classifier algorithm for predicting diabetes
model_tree = DecisionTreeClassifier(
criterion="entropy", max_depth=3, ccp_alpha=2, min_samples_split=5
)
evaluate(model_tree, X, y)
param_grid = {
"max_depth": [2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12],
"ccp_alpha": [2, 3, 4, 5, 6, 7, 8, 9],
"min_samples_split": [2, 3, 4, 5, 6, 7, 8, 9],
}
# Create an instance of the model
model = DecisionTreeClassifier()
# Create an instance of GridSearchCV
grid_search = GridSearchCV(estimator=model, param_grid=param_grid, cv=2)
# Fit the GridSearchCV
grid_search.fit(X_train, y_train)
# Get the best parameter
best_params = grid_search.best_params_
best_accuracy = grid_search.best_score_
# Evaluate the model with the best parameters
best_model = grid_search.best_estimator_
test_accuracy = best_model.score(X_test, y_test)
# Print the results
print("Best Parameters: ", best_params)
print("Best Accuracy: ", best_accuracy)
print("Test Accuracy: ", test_accuracy)
# ## LogisticRegression for predicting diabetes
logistic = LogisticRegression(max_iter=100)
logistic.fit(X_train, y_train)
print(
"Accuracy on training set: {:.2f}".format(logistic.score(X_train, y_train))
) # To calculate the accuracy of the training data
print(
"Accuracy on test set: {:.2f}".format(logistic.score(X_test, y_test))
) # To calculate the accuracy of the testing data
## We check accuracy of the Logistic Regression algorithm for predicting diabetes
evaluate(LogisticRegression(), X, y)
# ## Neural networks for predicting diabetes
mlp = MLPClassifier(max_iter=100, alpha=0.001, random_state=0)
mlp.fit(X_train, y_train)
print("Accuracy on training set: {:.3f}".format(mlp.score(X_train, y_train)))
print("Accuracy on test set: {:.3f}".format(mlp.score(X_test, y_test)))
## We check accuracy of the MLPClassifier algorithm for predicting diabetes
evaluate(MLPClassifier(max_iter=100, alpha=1), X, y)
# So far we have used an essentially default neural network configuration. Now I will draw a heat map of the first-layer weights of the trained network to see how each input feature is weighted.
plt.figure(figsize=(20, 5)) #
plt.imshow(mlp.coefs_[0])
plt.yticks(range(8))
plt.xlabel("Columns in weight matrix")
plt.ylabel("Input feature")
plt.colorbar()
plt.show()
|
/fsx/loubna/kaggle_data/kaggle-code-data/data/0129/690/129690825.ipynb
|
diabetes-data-set
|
mathchi
|
[{"Id": 129690825, "ScriptId": 38488486, "ParentScriptVersionId": NaN, "ScriptLanguageId": 9, "AuthorUserId": 7577589, "CreationDate": "05/15/2023 19:24:07", "VersionNumber": 2.0, "Title": "prediction Diabetes classification 99%", "EvaluationDate": "05/15/2023", "IsChange": true, "TotalLines": 522.0, "LinesInsertedFromPrevious": 164.0, "LinesChangedFromPrevious": 0.0, "LinesUnchangedFromPrevious": 358.0, "LinesInsertedFromFork": NaN, "LinesDeletedFromFork": NaN, "LinesChangedFromFork": NaN, "LinesUnchangedFromFork": NaN, "TotalVotes": 0}]
|
[{"Id": 186016254, "KernelVersionId": 129690825, "SourceDatasetVersionId": 1400440}]
|
[{"Id": 1400440, "DatasetId": 818300, "DatasourceVersionId": 1433199, "CreatorUserId": 3650837, "LicenseName": "CC0: Public Domain", "CreationDate": "08/05/2020 21:27:01", "VersionNumber": 1.0, "Title": "Diabetes Dataset", "Slug": "diabetes-data-set", "Subtitle": "This dataset is originally from the N. Inst. of Diabetes & Diges. & Kidney Dis.", "Description": "### Context\n\nThis dataset is originally from the National Institute of Diabetes and Digestive and Kidney Diseases. The objective is to predict based on diagnostic measurements whether a patient has diabetes.\n\n\n### Content\n\nSeveral constraints were placed on the selection of these instances from a larger database. In particular, all patients here are females at least 21 years old of Pima Indian heritage.\n\n- Pregnancies: Number of times pregnant \n- Glucose: Plasma glucose concentration a 2 hours in an oral glucose tolerance test \n- BloodPressure: Diastolic blood pressure (mm Hg) \n- SkinThickness: Triceps skin fold thickness (mm) \n- Insulin: 2-Hour serum insulin (mu U/ml) \n- BMI: Body mass index (weight in kg/(height in m)^2) \n- DiabetesPedigreeFunction: Diabetes pedigree function \n- Age: Age (years) \n- Outcome: Class variable (0 or 1)\n\n#### Sources:\n (a) Original owners: National Institute of Diabetes and Digestive and\n Kidney Diseases\n (b) Donor of database: Vincent Sigillito ([email protected])\n Research Center, RMI Group Leader\n Applied Physics Laboratory\n The Johns Hopkins University\n Johns Hopkins Road\n Laurel, MD 20707\n (301) 953-6231\n (c) Date received: 9 May 1990\n\n#### Past Usage:\n 1. Smith,~J.~W., Everhart,~J.~E., Dickson,~W.~C., Knowler,~W.~C., \\&\n Johannes,~R.~S. (1988). Using the ADAP learning algorithm to forecast\n the onset of diabetes mellitus. In {\\it Proceedings of the Symposium\n on Computer Applications and Medical Care} (pp. 261--265). IEEE\n Computer Society Press.\n\n The diagnostic, binary-valued variable investigated is whether the\n patient shows signs of diabetes according to World Health Organization\n criteria (i.e., if the 2 hour post-load plasma glucose was at least \n 200 mg/dl at any survey examination or if found during routine medical\n care). The population lives near Phoenix, Arizona, USA.\n\n Results: Their ADAP algorithm makes a real-valued prediction between\n 0 and 1. This was transformed into a binary decision using a cutoff of \n 0.448. Using 576 training instances, the sensitivity and specificity\n of their algorithm was 76% on the remaining 192 instances.\n\n#### Relevant Information:\n Several constraints were placed on the selection of these instances from\n a larger database. In particular, all patients here are females at\n least 21 years old of Pima Indian heritage. ADAP is an adaptive learning\n routine that generates and executes digital analogs of perceptron-like\n devices. It is a unique algorithm; see the paper for details.\n\n#### Number of Instances: 768\n\n#### Number of Attributes: 8 plus class \n\n#### For Each Attribute: (all numeric-valued)\n 1. Number of times pregnant\n 2. Plasma glucose concentration a 2 hours in an oral glucose tolerance test\n 3. Diastolic blood pressure (mm Hg)\n 4. Triceps skin fold thickness (mm)\n 5. 2-Hour serum insulin (mu U/ml)\n 6. Body mass index (weight in kg/(height in m)^2)\n 7. Diabetes pedigree function\n 8. Age (years)\n 9. 
Class variable (0 or 1)\n\n#### Missing Attribute Values: Yes\n\n#### Class Distribution: (class value 1 is interpreted as \"tested positive for\n diabetes\")", "VersionNotes": "Initial release", "TotalCompressedBytes": 0.0, "TotalUncompressedBytes": 0.0}]
|
[{"Id": 818300, "CreatorUserId": 3650837, "OwnerUserId": 3650837.0, "OwnerOrganizationId": NaN, "CurrentDatasetVersionId": 1400440.0, "CurrentDatasourceVersionId": 1433199.0, "ForumId": 833406, "Type": 2, "CreationDate": "08/05/2020 21:27:01", "LastActivityDate": "08/05/2020", "TotalViews": 440450, "TotalDownloads": 65613, "TotalVotes": 496, "TotalKernels": 245}]
|
[{"Id": 3650837, "UserName": "mathchi", "DisplayName": "Mehmet Akturk", "RegisterDate": "09/01/2019", "PerformanceTier": 3}]
|
| false | 0 | 4,042 | 0 | 5,020 | 4,042 |
||
129690852
|
<jupyter_start><jupyter_text>Knn algorithms
The KNN algorithm is used to find the class of a point from the classes of its nearest neighbours.
The KNN algorithm can be used for both classification and regression, but here we will use it to solve a classification problem.
The dataset has 4 columns: Gender, Age, Salary, Purchase Iphone.
Kaggle dataset identifier: knn-algorithms
<jupyter_script>import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
df = pd.read_csv("/kaggle/input/knn-algorithms/iphone_purchase_records.csv")
df.info()
df.isnull().sum()
df.columns
sns.scatterplot(x="Salary", y="Age", data=df, size="Gender", hue="Purchase Iphone")
# encode Gender numerically (Male -> 1, Female -> 0)
gender_dict = {"Male": 1, "Female": 0}
df = df.replace(gender_dict)
# Looks like the data points form clusters, so we can apply the KNN algorithm to them
# #Let's Split the dataset into train and test set
X = df.drop("Purchase Iphone", axis=1)
y = df["Purchase Iphone"]
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(
X, y, test_size=0.30, random_state=101
)
# We need to scale the data for better performance
from sklearn.preprocessing import StandardScaler
scaler = StandardScaler()
scaler.fit(X_train, y_train)
scaled_X_train = scaler.transform(X_train)
scaled_X_test = scaler.transform(X_test)
from sklearn.neighbors import KNeighborsClassifier
knn_basic = KNeighborsClassifier(n_neighbors=1)
# n_neighbors=1 is an arbitrary starting value for the KNeighborsClassifier model;
# we usually prefer an odd k so that the majority vote among neighbours cannot end in a tie
knn_basic.fit(scaled_X_train, y_train)
knn_basic_prediction = knn_basic.predict(scaled_X_test)
from sklearn.metrics import confusion_matrix, classification_report, accuracy_score
confusion_matrix(knn_basic_prediction, y_test)
print(classification_report(knn_basic_prediction, y_test))
# With the basic KNN model we achieve about 84% accuracy. Now let's see how the model behaves as the k-value increases
knn_test_score = []
for k in range(1, 20):
knn_model = KNeighborsClassifier(n_neighbors=k)
knn_model.fit(scaled_X_train, y_train)
y_pred = knn_model.predict(scaled_X_test)
knn_test_score.append(1 - accuracy_score(y_pred, y_test))
plt.plot(knn_test_score)
plt.axhline(y=0.075, color="grey", linestyle="--")
plt.axhline(y=0.06, color="grey", linestyle="--")
plt.xlabel("K Values")
plt.ylabel("Test Error Rate")
# It looks like the best (lowest) test error lies between those two lines
# From the graph we can say that the best performing k-value will be around 14
# Let's find the best performing k-value for the model through GridSearchCV
scaler = StandardScaler()
knn = KNeighborsClassifier()
operations = [("scaler", scaler), ("knn", knn)]
# Let's build a pipeline for the GridSearchCV
from sklearn.pipeline import Pipeline
pipe = Pipeline(steps=operations)
from sklearn.model_selection import GridSearchCV
k_values = list(range(1, 20))
param_grid = {
"knn__n_neighbors": k_values,
"knn__weights": ["uniform", "distance"],
"knn__metric": ["euclidean", "manhattan"],
}
knn_cv_classifier = GridSearchCV(pipe, param_grid, cv=10, scoring="accuracy")
knn_cv_classifier.fit(X_train, y_train)
knn_cv_classifier.get_params()
knn_cv_classifier.best_params_
# Here, the best k-value found for minimizing the error is 11
knn_cv_prediction = knn_cv_classifier.predict(X_test)
confusion_matrix(y_test, knn_cv_prediction)
print(classification_report(y_test, knn_cv_prediction))
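# A quick usage sketch: scoring one hypothetical new customer with the tuned pipeline.
# The feature values below are made up for illustration; Gender uses the numeric encoding above.
new_customer = pd.DataFrame([{"Gender": 1, "Age": 35, "Salary": 60000}])[X.columns]
print(knn_cv_classifier.predict(new_customer))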
|
/fsx/loubna/kaggle_data/kaggle-code-data/data/0129/690/129690852.ipynb
|
knn-algorithms
|
piyushborhade
|
[{"Id": 129690852, "ScriptId": 38567299, "ParentScriptVersionId": NaN, "ScriptLanguageId": 9, "AuthorUserId": 14347190, "CreationDate": "05/15/2023 19:24:21", "VersionNumber": 1.0, "Title": "KNN Algo - Iphone Price(93%)", "EvaluationDate": "05/15/2023", "IsChange": true, "TotalLines": 121.0, "LinesInsertedFromPrevious": 121.0, "LinesChangedFromPrevious": 0.0, "LinesUnchangedFromPrevious": 0.0, "LinesInsertedFromFork": NaN, "LinesDeletedFromFork": NaN, "LinesChangedFromFork": NaN, "LinesUnchangedFromFork": NaN, "TotalVotes": 2}]
|
[{"Id": 186016328, "KernelVersionId": 129690852, "SourceDatasetVersionId": 5667398}]
|
[{"Id": 5667398, "DatasetId": 3257752, "DatasourceVersionId": 5742881, "CreatorUserId": 10259664, "LicenseName": "Unknown", "CreationDate": "05/12/2023 05:18:49", "VersionNumber": 1.0, "Title": "Knn algorithms", "Slug": "knn-algorithms", "Subtitle": "Great csv for Beginners who want to practice KNN algorithm", "Description": "KNN Algorithm is used to find the class of point by the class of nearest neighbour.\n\nKNN Algorithm can be used for both classification as well as Regression! but here we will be using to solve Classification problem.\n\nHere, in the dataset, We are having 4 features which are Gender, Age, Salary, Purchase Iphone.", "VersionNotes": "Initial release", "TotalCompressedBytes": 0.0, "TotalUncompressedBytes": 0.0}]
|
[{"Id": 3257752, "CreatorUserId": 10259664, "OwnerUserId": 10259664.0, "OwnerOrganizationId": NaN, "CurrentDatasetVersionId": 5667398.0, "CurrentDatasourceVersionId": 5742881.0, "ForumId": 3323264, "Type": 2, "CreationDate": "05/12/2023 05:18:49", "LastActivityDate": "05/12/2023", "TotalViews": 146, "TotalDownloads": 28, "TotalVotes": 0, "TotalKernels": 1}]
|
[{"Id": 10259664, "UserName": "piyushborhade", "DisplayName": "Piyush Borhade", "RegisterDate": "04/16/2022", "PerformanceTier": 1}]
|
| false | 1 | 992 | 2 | 1,087 | 992 |
||
129704940
|
<jupyter_start><jupyter_text>MNIST as .jpg
# Context
The [Digit Recognizer](https://www.kaggle.com/c/digit-recognizer) competition uses the popular MNIST dataset to challenge Kagglers to classify digits correctly. In this dataset, the images are represented as strings of pixel values in `train.csv` and `test.csv`. Often, it is beneficial for image data to be in an image format rather than a string format. Therefore, I have converted the aforementioned datasets from text in .csv files to organized .jpg files.
# Content
This dataset is composed of four files:
1. `trainingSet.tar.gz` (10.2 MB) - This file contains ten sub folders labeled 0 to 9. Each of the sub folders contains .jpg images from the [Digit Recognizer](https://www.kaggle.com/c/digit-recognizer) competition's `train.csv` dataset, corresponding to the folder name (ie. folder 2 contains images of 2's, etc.). In total, there are 42,000 images in the training set.
2. `testSet.tar.gz` (6.8 MB) - This file contains the .jpg images from the [Digit Recognizer](https://www.kaggle.com/c/digit-recognizer) competition's `test.csv` dataset. In total, there are 28,000 images in the test set.
3. `trainingSample.zip` (407 KB) - This file contains ten sub folders labeled 0 to 9. Each sub folder contains 60 .jpg images from the training set, for a total of 600 images.
4. `testSample.zip` (233 KB) - This file contains a 350 image sample from the test set.
# Acknowledgements
As previously mentioned, all data presented here is simply a cleaned version of the data presented in Kaggle's [Digit Recognizer](https://www.kaggle.com/c/digit-recognizer) competition. The division of the MNIST dataset into training and test sets exactly mirrors that presented in the competition.
# Inspiration
I created this dataset when exploring TensorFlow's Inception model. Inception is a massive CNN built by Google to compete in the ImageNet competition. By way of Transfer Learning, the final layer of Inception can be retrained, rendering the model useful for general classification tasks. In retraining the model, .jpg images must be used, thereby necessitating to the creation of this dataset.
My hope in experimenting with Inception was to achieve an accuracy of around 98.5% or higher on the MNIST dataset. Unfortunately, the maximum accuracy I reached with Inception was only 95.314%. If you are interested in my code for said attempt, it is available on my GitHub repository [Kaggle MNIST Inception CNN](https://github.com/scoliann/Kaggle-MNIST-Inception-CNN).
To learn more about retraining Inception, check out [TensorFlow for Poets](https://codelabs.developers.google.com/codelabs/tensorflow-for-poets/index.html?index=..%2F..%2Findex#0).
Kaggle dataset identifier: mnistasjpg
<jupyter_script>import numpy as np # linear algebra
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)
# Input data files are available in the read-only "../input/" directory
# For example, running this (by clicking run or pressing Shift+Enter) will list all files under the input directory
import os
for dirname, _, filenames in os.walk("/kaggle/input"):
for filename in filenames:
print(os.path.join(dirname, filename))
# You can write up to 20GB to the current directory (/kaggle/working/) that gets preserved as output when you create a version using "Save & Run All"
# You can also write temporary files to /kaggle/temp/, but they won't be saved outside of the current session
import tensorflow as tf
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import (
Dense,
Conv2D,
Flatten,
Dropout,
MaxPooling2D,
BatchNormalization,
)
from tensorflow.keras.preprocessing.image import (
ImageDataGenerator,
img_to_array,
load_img,
)
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split
filenames = os.listdir("train/")
categories = []
for filename in filenames:
category = filename.split(".")[0]
if category == "dog":
categories.append("dog")
else:
categories.append("cat")
df = pd.DataFrame({"filename": filenames, "category": categories})
train_df, val_df = train_test_split(df, test_size=0.20, random_state=42)
train_df = train_df.reset_index(drop=True)
validate_df = val_df.reset_index(drop=True)
image_size = (128, 128)
train_image_generator = ImageDataGenerator(rescale=1.0 / 255, horizontal_flip=True)
train_generator = train_image_generator.flow_from_dataframe(
train_df,
"train/",
x_col="filename",
y_col="category",
target_size=image_size,
class_mode="categorical",
batch_size=16,
)
val_image_generator = ImageDataGenerator(rescale=1.0 / 255)
val_generator = val_image_generator.flow_from_dataframe(
val_df,
"train/",
x_col="filename",
y_col="category",
target_size=image_size,
class_mode="categorical",
batch_size=16,
)
def build_model():
width, height, channels = 128, 128, 3
model = Sequential()
model.add(
Conv2D(32, (3, 3), activation="relu", input_shape=(width, height, channels))
)
model.add(BatchNormalization())
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.25))
model.add(Conv2D(64, (3, 3), activation="relu"))
model.add(BatchNormalization())
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.25))
model.add(Conv2D(128, (3, 3), activation="relu"))
model.add(BatchNormalization())
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.25))
model.add(Flatten())
model.add(Dense(512, activation="relu"))
model.add(BatchNormalization())
model.add(Dropout(0.25))
    model.add(Dense(2, activation="softmax"))
    # two softmax outputs with one-hot labels (class_mode="categorical") call for categorical cross-entropy
    model.compile(
        loss="categorical_crossentropy", optimizer="adam", metrics=["accuracy"]
    )
return model
cnn = build_model()
history = cnn.fit(
train_generator,
epochs=10,
validation_data=val_generator,
validation_steps=5000 // 16,
steps_per_epoch=20000 // 16,
)
plt.plot(history.history["accuracy"], color="b", label="Training accuracy")
plt.plot(history.history["val_accuracy"], color="r", label="Validation accuracy")
plt.xlim(0, 9)
plt.ylim(0, 1)
legend = plt.legend()
plt.xlabel("Epochs")
plt.ylabel("Accuracy")
plt.show()
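# A minimal single-image inference sketch (illustrative; "some_image.jpg" is a placeholder path,
# and the label lookup relies on the class indices recorded by train_generator):
idx_to_class = {v: k for k, v in train_generator.class_indices.items()}
img = load_img("some_image.jpg", target_size=image_size)
x = img_to_array(img) / 255.0
probs = cnn.predict(np.expand_dims(x, axis=0))[0]
print(idx_to_class[int(np.argmax(probs))], probs)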
|
/fsx/loubna/kaggle_data/kaggle-code-data/data/0129/704/129704940.ipynb
|
mnistasjpg
|
scolianni
|
[{"Id": 129704940, "ScriptId": 38571450, "ParentScriptVersionId": NaN, "ScriptLanguageId": 9, "AuthorUserId": 3871351, "CreationDate": "05/15/2023 22:34:17", "VersionNumber": 1.0, "Title": "Cats vs Dogs CNN", "EvaluationDate": "05/15/2023", "IsChange": true, "TotalLines": 116.0, "LinesInsertedFromPrevious": 116.0, "LinesChangedFromPrevious": 0.0, "LinesUnchangedFromPrevious": 0.0, "LinesInsertedFromFork": NaN, "LinesDeletedFromFork": NaN, "LinesChangedFromFork": NaN, "LinesUnchangedFromFork": NaN, "TotalVotes": 5}]
|
[{"Id": 186036372, "KernelVersionId": 129704940, "SourceDatasetVersionId": 2280}]
|
[{"Id": 2280, "DatasetId": 1272, "DatasourceVersionId": 2280, "CreatorUserId": 289999, "LicenseName": "CC0: Public Domain", "CreationDate": "05/15/2017 09:10:04", "VersionNumber": 1.0, "Title": "MNIST as .jpg", "Slug": "mnistasjpg", "Subtitle": "Kaggle Digit Recognizer Competition Dataset as .jpg Image Files", "Description": "# Context \n\nThe [Digit Recognizer](https://www.kaggle.com/c/digit-recognizer) competition uses the popular MNIST dataset to challenge Kagglers to classify digits correctly. In this dataset, the images are represented as strings of pixel values in `train.csv` and `test.csv`. Often, it is beneficial for image data to be in an image format rather than a string format. Therefore, I have converted the aforementioned datasets from text in .csv files to organized .jpg files.\n\n# Content\n\nThis dataset is composed of four files:\n\n1. `trainingSet.tar.gz` (10.2 MB) - This file contains ten sub folders labeled 0 to 9. Each of the sub folders contains .jpg images from the [Digit Recognizer](https://www.kaggle.com/c/digit-recognizer) competition's `train.csv` dataset, corresponding to the folder name (ie. folder 2 contains images of 2's, etc.). In total, there are 42,000 images in the training set.\n2. `testSet.tar.gz` (6.8 MB) - This file contains the .jpg images from the [Digit Recognizer](https://www.kaggle.com/c/digit-recognizer) competition's `test.csv` dataset. In total, there are 28,000 images in the test set.\n3. `trainingSample.zip` (407 KB) - This file contains ten sub folders labeled 0 to 9. Each sub folder contains 60 .jpg images from the training set, for a total of 600 images.\n4. `testSample.zip` (233 KB) - This file contains a 350 image sample from the test set.\n\n# Acknowledgements\n\nAs previously mentioned, all data presented here is simply a cleaned version of the data presented in Kaggle's [Digit Recognizer](https://www.kaggle.com/c/digit-recognizer) competition. The division of the MNIST dataset into training and test sets exactly mirrors that presented in the competition.\n\n# Inspiration\n\nI created this dataset when exploring TensorFlow's Inception model. Inception is a massive CNN built by Google to compete in the ImageNet competition. By way of Transfer Learning, the final layer of Inception can be retrained, rendering the model useful for general classification tasks. In retraining the model, .jpg images must be used, thereby necessitating to the creation of this dataset.\n\nMy hope in experimenting with Inception was to achieve an accuracy of around 98.5% or higher on the MNIST dataset. Unfortunately, the maximum accuracy I reached with Inception was only 95.314%. If you are interested in my code for said attempt, it is available on my GitHub repository [Kaggle MNIST Inception CNN](https://github.com/scoliann/Kaggle-MNIST-Inception-CNN).\n\nTo learn more about retraining Inception, check out [TensorFlow for Poets](https://codelabs.developers.google.com/codelabs/tensorflow-for-poets/index.html?index=..%2F..%2Findex#0).", "VersionNotes": "Initial release", "TotalCompressedBytes": 18413932.0, "TotalUncompressedBytes": 18413932.0}]
|
[{"Id": 1272, "CreatorUserId": 289999, "OwnerUserId": 289999.0, "OwnerOrganizationId": NaN, "CurrentDatasetVersionId": 2280.0, "CurrentDatasourceVersionId": 2280.0, "ForumId": 3500, "Type": 2, "CreationDate": "05/15/2017 09:10:04", "LastActivityDate": "02/05/2018", "TotalViews": 149486, "TotalDownloads": 38069, "TotalVotes": 315, "TotalKernels": 47}]
|
[{"Id": 289999, "UserName": "scolianni", "DisplayName": "Stuart Colianni", "RegisterDate": "02/04/2015", "PerformanceTier": 1}]
|
import numpy as np # linear algebra
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)
# Input data files are available in the read-only "../input/" directory
# For example, running this (by clicking run or pressing Shift+Enter) will list all files under the input directory
import os
for dirname, _, filenames in os.walk("/kaggle/input"):
for filename in filenames:
print(os.path.join(dirname, filename))
# You can write up to 20GB to the current directory (/kaggle/working/) that gets preserved as output when you create a version using "Save & Run All"
# You can also write temporary files to /kaggle/temp/, but they won't be saved outside of the current session
import tensorflow as tf
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import (
Dense,
Conv2D,
Flatten,
Dropout,
MaxPooling2D,
BatchNormalization,
)
from tensorflow.keras.preprocessing.image import (
ImageDataGenerator,
img_to_array,
load_img,
)
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split
filenames = os.listdir("train/")
categories = []
for filename in filenames:
category = filename.split(".")[0]
if category == "dog":
categories.append("dog")
else:
categories.append("cat")
df = pd.DataFrame({"filename": filenames, "category": categories})
train_df, val_df = train_test_split(df, test_size=0.20, random_state=42)
train_df = train_df.reset_index(drop=True)
validate_df = val_df.reset_index(drop=True)
image_size = (128, 128)
train_image_generator = ImageDataGenerator(rescale=1.0 / 255, horizontal_flip=True)
train_generator = train_image_generator.flow_from_dataframe(
train_df,
"train/",
x_col="filename",
y_col="category",
target_size=image_size,
class_mode="categorical",
batch_size=16,
)
val_image_generator = ImageDataGenerator(rescale=1.0 / 255)
val_generator = val_image_generator.flow_from_dataframe(
val_df,
"train/",
x_col="filename",
y_col="category",
target_size=image_size,
class_mode="categorical",
batch_size=16,
)
def build_model():
width, height, channels = 128, 128, 3
model = Sequential()
model.add(
Conv2D(32, (3, 3), activation="relu", input_shape=(width, height, channels))
)
model.add(BatchNormalization())
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.25))
model.add(Conv2D(64, (3, 3), activation="relu"))
model.add(BatchNormalization())
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.25))
model.add(Conv2D(128, (3, 3), activation="relu"))
model.add(BatchNormalization())
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.25))
model.add(Flatten())
model.add(Dense(512, activation="relu"))
model.add(BatchNormalization())
model.add(Dropout(0.25))
model.add(Dense(2, activation="softmax"))
model.compile(loss="binary_crossentropy", optimizer="adam", metrics=["accuracy"])
return model
cnn = build_model()
history = cnn.fit(
train_generator,
epochs=10,
validation_data=val_generator,
validation_steps=5000 // 16,
steps_per_epoch=20000 // 16,
)
plt.plot(history.history["accuracy"], color="b", label="Training accuracy")
plt.plot(history.history["val_accuracy"], color="r", label="Validation accuracy")
plt.xlim(0, 9)
plt.ylim(0, 1)
legend = plt.legend()
plt.xlabel("Epochs")
plt.ylabel("Accuracy")
plt.show()
| false | 0 | 1,081 | 5 | 1,845 | 1,081 |
||
129704898
|
<jupyter_start><jupyter_text>MovieLens 20M Dataset
## Context
The datasets describe ratings and free-text tagging activities from MovieLens, a movie recommendation service. It contains 20000263 ratings and 465564 tag applications across 27278 movies. These data were created by 138493 users between January 09, 1995 and March 31, 2015. This dataset was generated on October 17, 2016.
Users were selected at random for inclusion. All selected users had rated at least 20 movies.
## Content
No demographic information is included. Each user is represented by an id, and no other information is provided.
The data are contained in six files.
**tag.csv** that contains tags applied to movies by users:
* **userId**
* **movieId**
* **tag**
* **timestamp**
**rating.csv** that contains ratings of movies by users:
* **userId**
* **movieId**
* **rating**
* **timestamp**
**movie.csv** that contains movie information:
* **movieId**
* **title**
* **genres**
**link.csv** that contains identifiers that can be used to link to other sources:
* **movieId**
* **imdbId**
* **tmbdId**
**genome_scores.csv** that contains movie-tag relevance data:
* **movieId**
* **tagId**
* **relevance**
**genome_tags.csv** that contains tag descriptions:
* **tagId**
* **tag**
## Acknowledgements
The original datasets can be found [here](http://grouplens.org/datasets/movielens/). To acknowledge use of the dataset in publications, please cite the following paper:
F. Maxwell Harper and Joseph A. Konstan. 2015. The MovieLens Datasets: History and Context. ACM Transactions on Interactive Intelligent Systems (TiiS) 5, 4, Article 19 (December 2015), 19 pages. DOI=http://dx.doi.org/10.1145/2827872
## Inspiration
Some ideas worth exploring:
* Which genres receive the highest ratings? How does this change over time?
* Determine the temporal trends in the genres/tagging activity of the movies released
Kaggle dataset identifier: movielens-20m-dataset
<jupyter_script>import numpy as np # linear algebra
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)
# Input data files are available in the read-only "../input/" directory
# For example, running this (by clicking run or pressing Shift+Enter) will list all files under the input directory
import os
for dirname, _, filenames in os.walk("/kaggle/input"):
for filename in filenames:
print(os.path.join(dirname, filename))
# You can write up to 20GB to the current directory (/kaggle/working/) that gets preserved as output when you create a version using "Save & Run All"
# You can also write temporary files to /kaggle/temp/, but they won't be saved outside of the current session
print(os.listdir("../input/movielens-20m-dataset"))
# read movie data and look columns
movies = pd.read_csv("../input/movielens-20m-dataset/movie.csv")
movies.columns
# We will use movieId and title
movies = movies.loc[:, ["movieId", "title"]]
movies.head(10)
# read rating data and look columns
ratings = pd.read_csv("../input/movielens-20m-dataset/rating.csv")
ratings.columns
# We will use movieId, userId and rating
ratings = ratings.loc[:, ["userId", "movieId", "rating"]]
ratings.head(10)
# merging ratings and movies
data = pd.merge(movies, ratings)
data.head(10)
# * We have 4 features: movieId, title, userId and rating
# * We will build an item-based recommendation system
# * Note: the data frame contains 20 million samples, which is too much for Kaggle, so let's use a 1 million sample subset of the data.
data.shape
data = data.iloc[:1000000, :]
# let's make a pivot table where rows are users, columns are movies, and the values are ratings
pivot_table = data.pivot_table(index=["userId"], columns=["title"], values="rating")
pivot_table.head(10)
# * The question is: which movies should we recommend to people who watched the "Bad Boys (1995)" movie?
# * In order to answer this question we will find similarities between "Bad Boys (1995)" and the other movies.
movie_watched = pivot_table["Bad Boys (1995)"]
similarity_with_other_movies = pivot_table.corrwith(
movie_watched
) # find correlation between "Bad Boys (1995)" and other movies
similarity_with_other_movies = similarity_with_other_movies.sort_values(ascending=False)
similarity_with_other_movies.head()
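# A common refinement (a sketch, not part of the original notebook): correlations based on only a
# handful of shared raters are noisy, so one option is to keep only movies with a minimum number of
# ratings before ranking similarities. The threshold of 50 below is an arbitrary choice.
rating_counts = pivot_table.count()  # number of ratings per movie (non-NaN entries per column)
well_rated_movies = rating_counts[rating_counts >= 50].index
similarity_filtered = (
    pivot_table[well_rated_movies].corrwith(movie_watched).sort_values(ascending=False)
)
similarity_filtered.head(10)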
|
/fsx/loubna/kaggle_data/kaggle-code-data/data/0129/704/129704898.ipynb
|
movielens-20m-dataset
| null |
[{"Id": 129704898, "ScriptId": 38571634, "ParentScriptVersionId": NaN, "ScriptLanguageId": 9, "AuthorUserId": 11798227, "CreationDate": "05/15/2023 22:33:28", "VersionNumber": 1.0, "Title": "Recommendation Systems", "EvaluationDate": "05/15/2023", "IsChange": true, "TotalLines": 65.0, "LinesInsertedFromPrevious": 65.0, "LinesChangedFromPrevious": 0.0, "LinesUnchangedFromPrevious": 0.0, "LinesInsertedFromFork": NaN, "LinesDeletedFromFork": NaN, "LinesChangedFromFork": NaN, "LinesUnchangedFromFork": NaN, "TotalVotes": 0}]
|
[{"Id": 186036312, "KernelVersionId": 129704898, "SourceDatasetVersionId": 77759}]
|
[{"Id": 77759, "DatasetId": 339, "DatasourceVersionId": 80217, "CreatorUserId": 495305, "LicenseName": "Unknown", "CreationDate": "08/15/2018 23:09:34", "VersionNumber": 1.0, "Title": "MovieLens 20M Dataset", "Slug": "movielens-20m-dataset", "Subtitle": "Over 20 Million Movie Ratings and Tagging Activities Since 1995", "Description": "## Context\n\nThe datasets describe ratings and free-text tagging activities from MovieLens, a movie recommendation service. It contains 20000263 ratings and 465564 tag applications across 27278 movies. These data were created by 138493 users between January 09, 1995 and March 31, 2015. This dataset was generated on October 17, 2016.\n\nUsers were selected at random for inclusion. All selected users had rated at least 20 movies. \n\n## Content\n\nNo demographic information is included. Each user is represented by an id, and no other information is provided.\n\nThe data are contained in six files.\n\n**tag.csv** that contains tags applied to movies by users:\n\n* **userId**\n\n* **movieId**\n\n* **tag**\n\n* **timestamp**\n\n**rating.csv** that contains ratings of movies by users:\n\n* **userId**\n\n* **movieId**\n\n* **rating**\n\n* **timestamp**\n\n**movie.csv** that contains movie information:\n\n* **movieId**\n\n* **title**\n\n* **genres**\n\n**link.csv** that contains identifiers that can be used to link to other sources:\n\n* **movieId**\n\n* **imdbId**\n\n* **tmbdId**\n\n**genome_scores.csv** that contains movie-tag relevance data:\n\n* **movieId**\n\n* **tagId**\n\n* **relevance**\n\n**genome_tags.csv** that contains tag descriptions:\n\n* **tagId**\n\n* **tag**\n\n## Acknowledgements\n\nThe original datasets can be found [here](http://grouplens.org/datasets/movielens/). To acknowledge use of the dataset in publications, please cite the following paper:\n\nF. Maxwell Harper and Joseph A. Konstan. 2015. The MovieLens Datasets: History and Context. ACM Transactions on Interactive Intelligent Systems (TiiS) 5, 4, Article 19 (December 2015), 19 pages. DOI=http://dx.doi.org/10.1145/2827872\n\n## Inspiration\n\nSome ideas worth exploring:\n\n* Which genres receive the highest ratings? How does this change over time?\n\n* Determine the temporal trends in the genres/tagging activity of the movies released", "VersionNotes": "Uploading new version", "TotalCompressedBytes": 928454686.0, "TotalUncompressedBytes": 204038999.0}]
|
[{"Id": 339, "CreatorUserId": 395512, "OwnerUserId": NaN, "OwnerOrganizationId": 190.0, "CurrentDatasetVersionId": 77759.0, "CurrentDatasourceVersionId": 80217.0, "ForumId": 1883, "Type": 2, "CreationDate": "11/07/2016 06:57:40", "LastActivityDate": "02/05/2018", "TotalViews": 259215, "TotalDownloads": 50418, "TotalVotes": 543, "TotalKernels": 297}]
| null |
| false | 0 | 662 | 0 | 1,272 | 662 |
||
129773256
|
# 🎧 MUSIC RECOMMENDATION SYSTEM USING SPOTIFY 🎧
# Import all require library
import os
import numpy as np
import pandas as pd
import seaborn as sns
import plotly.express as px
import matplotlib.pyplot as plt
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import Pipeline
from sklearn.manifold import TSNE
from sklearn.decomposition import PCA
from sklearn.metrics import euclidean_distances
from scipy.spatial.distance import cdist
import warnings
warnings.filterwarnings("ignore")
# # Read Data
# Now we read the data for analysis.
data = pd.read_csv(r"D:\Ekeeda_Python_notes\MINI PROJECT\data.csv")
data
genre_data = pd.read_csv(r"D:\Ekeeda_Python_notes\MINI PROJECT\data_by_genres.csv")
genre_data
year_data = pd.read_csv(r"D:\Ekeeda_Python_notes\MINI PROJECT\data_by_year.csv")
year_data
data.info()
genre_data.info()
year_data.info()
from yellowbrick.target import FeatureCorrelation
feature_names = [
"acousticness",
"danceability",
"energy",
"instrumentalness",
"liveness",
"loudness",
"speechiness",
"tempo",
"valence",
"duration_ms",
"explicit",
"key",
"mode",
"year",
]
X, y = data[feature_names], data["popularity"]
# Create a list of the feature names
features = np.array(feature_names)
# Instantite the visualizer
visualizer = FeatureCorrelation(labels=features)
plt.rcParams["figure.figsize"] = (9, 9)
visualizer.fit(X, y)
visualizer.show()
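# Cross-check (a sketch, assuming the same data frame): the feature/target correlations can also be
# computed directly with pandas, without yellowbrick.
print(
    data[feature_names]
    .corrwith(data["popularity"])
    .sort_values(key=np.abs, ascending=False)
)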
#
# # Data understanding by visualization and EDA
# # Music over time
# Using the data grouped by year, we can understand how the overall sound of music changed from 1921 to 2020.
def get_decade(year):
period_start = int(year / 10) * 10
decade = "{}s".format(period_start)
return decade
data["decade"] = data["year"].apply(get_decade)
sns.set(rc={"figure.figsize": (11, 6)})
sns.countplot(data["decade"])
sound_features = [
"acousticness",
"danceability",
"energy",
"instrumentalness",
"liveness",
"valence",
]
fig = px.line(year_data, x="year", y=sound_features)
fig.show()
fig = px.line(year_data, x="year", y="loudness", title="Trend of loudness over decades")
fig.show()
# # Characteristics of different genres
# This dataset contains the audio features for different songs along with the audio features for different genres. We can use this information to compare different genres and understand their unique differences in sound.
top10_genres = genre_data.nlargest(10, "popularity")
fig = px.bar(
top10_genres,
x="genres",
y=["valence", "energy", "danceability", "acousticness"],
barmode="group",
)
fig.show()
# # Wordcloud
from wordcloud import WordCloud, STOPWORDS
stopwords = set(STOPWORDS)
comment_words = " ".join(genre_data["genres"]) + " "
wordcloud = WordCloud(
width=800,
height=800,
background_color="black",
stopwords=stopwords,
max_words=40,
min_font_size=10,
).generate(comment_words)
plt.figure(figsize=(8, 8), facecolor=None)
plt.imshow(wordcloud)
plt.axis("off")
plt.tight_layout(pad=0)
plt.title("Genres Wordcloud")
plt.show()
artist_data = pd.read_csv(r"D:\Ekeeda_Python_notes\MINI PROJECT\data_by_artist.csv")
stopwords = set(STOPWORDS)
comment_words = " ".join(artist_data["artists"]) + " "
wordcloud = WordCloud(
width=800,
height=800,
background_color="black",
stopwords=stopwords,
min_word_length=3,
max_words=40,
min_font_size=10,
).generate(comment_words)
plt.figure(figsize=(8, 8), facecolor=None)
plt.imshow(wordcloud)
plt.axis("off")
plt.title("Artists Wordcloud")
plt.tight_layout(pad=0)
plt.show()
# # Clustering genres with K-Means
# Here, the simple K-Means clustering algorithm is used to divide the genres in this dataset into ten clusters based on the numerical audio features of each genre.
import numpy as np
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans
# try upgrading threadpoolctl
# !pip install -U threadpoolctl
cluster_pipeline = Pipeline(
[("scaler", StandardScaler()), ("kmeans", KMeans(n_clusters=10))]
)
X = genre_data.select_dtypes(np.number)
cluster_pipeline.fit(X)
predicted_clusters = cluster_pipeline.predict(X)
print("X shape:", X.shape)
print("predicted clusters shape:", predicted_clusters.shape)
genre_data["cluster"] = predicted_clusters
# rebuild the numeric feature matrix for the embedding, excluding the newly added cluster labels
X = genre_data.select_dtypes(np.number).drop(columns="cluster")
# Visualizing the Clusters with t-SNE
from sklearn.manifold import TSNE
tsne_pipeline = Pipeline(
[("scaler", StandardScaler()), ("tsne", TSNE(n_components=2, verbose=1))]
)
genre_embedding = tsne_pipeline.fit_transform(X)
projection = pd.DataFrame(columns=["x", "y"], data=genre_embedding)
projection["genres"] = genre_data["genres"]
projection["cluster"] = genre_data["cluster"]
fig = px.scatter(
projection, x="x", y="y", color="cluster", hover_data=["x", "y", "genres"]
)
fig.show()
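# Sketch (not in the original notebook): the choice of ten genre clusters above is arbitrary, so a
# quick inertia ("elbow") curve is one way to sanity-check it; n_init and random_state below are
# arbitrary choices.
genre_features_scaled = StandardScaler().fit_transform(
    genre_data.select_dtypes(np.number).drop(columns="cluster")
)
inertias = [
    KMeans(n_clusters=k, n_init=10, random_state=0).fit(genre_features_scaled).inertia_
    for k in range(2, 16)
]
plt.figure(figsize=(8, 4))
plt.plot(list(range(2, 16)), inertias, marker="o")
plt.xlabel("number of clusters (k)")
plt.ylabel("inertia")
plt.title("Elbow check for genre clusters")
plt.show()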
# # Clustering Songs with K-Means
song_cluster_pipeline = Pipeline(
[("scaler", StandardScaler()), ("kmeans", KMeans(n_clusters=20, verbose=False))],
verbose=False,
)
X = data.select_dtypes(np.number)
number_cols = list(X.columns)
song_cluster_pipeline.fit(X)
song_cluster_labels = song_cluster_pipeline.predict(X)
data["cluster_label"] = song_cluster_labels
# Visualizing the Clusters with PCA
from sklearn.decomposition import PCA
pca_pipeline = Pipeline([("scaler", StandardScaler()), ("PCA", PCA(n_components=2))])
song_embedding = pca_pipeline.fit_transform(X)
projection = pd.DataFrame(columns=["x", "y"], data=song_embedding)
projection["title"] = data["name"]
projection["cluster"] = data["cluster_label"]
fig = px.scatter(
projection, x="x", y="y", color="cluster", hover_data=["x", "y", "title"]
)
fig.show()
# # Build Recommender System
import spotipy
from spotipy.oauth2 import SpotifyClientCredentials
from collections import defaultdict
import spotipy
from spotipy.oauth2 import SpotifyClientCredentials
# Set your Spotify API credentials
client_id = "CLIENT_ID"
client_secret = "CLIENT_SECRET"
# Authenticate with the Spotify API
auth_manager = SpotifyClientCredentials(
client_id=client_id, client_secret=client_secret
)
sp = spotipy.Spotify(auth_manager=auth_manager)
# Use the Spotify API to search for tracks
results = sp.search(q="Come As You Are Nirvana", type="track")
print(results)
"""
Finds song details from spotify dataset. If song is unavailable in dataset, it returns none.
"""
def find_song(name, year):
song_data = defaultdict()
results = sp.search(q="track: {} year: {}".format(name, year), limit=1)
if results["tracks"]["items"] == []:
return None
results = results["tracks"]["items"][0]
track_id = results["id"]
audio_features = sp.audio_features(track_id)[0]
song_data["name"] = [name]
song_data["year"] = [year]
song_data["explicit"] = [int(results["explicit"])]
song_data["duration_ms"] = [results["duration_ms"]]
song_data["popularity"] = [results["popularity"]]
for key, value in audio_features.items():
song_data[key] = value
return pd.DataFrame(song_data)
from collections import defaultdict
from sklearn.metrics import euclidean_distances
from scipy.spatial.distance import cdist
import difflib
number_cols = [
"valence",
"year",
"acousticness",
"danceability",
"duration_ms",
"energy",
"explicit",
"instrumentalness",
"key",
"liveness",
"loudness",
"mode",
"popularity",
"speechiness",
"tempo",
]
"""
Fetches song details from dataset. If info is unavailable in dataset, it will search details from the spotify dataset.
"""
def get_song_data(song, spotify_data):
try:
song_data = spotify_data[
(spotify_data["name"] == song["name"])
& (spotify_data["year"] == song["year"])
].iloc[0]
print("Fetching song information from local dataset")
return song_data
except IndexError:
print("Fetching song information from spotify dataset")
return find_song(song["name"], song["year"])
"""
Fetches song info from dataset and does the mean of all numerical features of the song-data.
"""
def get_mean_vector(song_list, spotify_data):
song_vectors = []
for song in song_list:
song_data = get_song_data(song, spotify_data)
if song_data is None:
print(
"Warning: {} does not exist in Spotify or in database".format(
song["name"]
)
)
continue
song_vector = song_data[number_cols].values
song_vectors.append(song_vector)
    song_matrix = np.array(
        song_vectors
    )  # 2-D array: one row of numerical feature values per song in the list
# print(f'song_matrix {song_matrix}')
return np.mean(song_matrix, axis=0) # mean of each ele in list, returns 1-d array
"""
Flattenning the dictionary by grouping the key and forming a list of values for respective key.
"""
def flatten_dict_list(dict_list):
flattened_dict = defaultdict()
for key in dict_list[0].keys():
flattened_dict[key] = [] # 'name', 'year'
for dic in dict_list:
for key, value in dic.items():
flattened_dict[key].append(value) # creating list of values
return flattened_dict
"""
Gets song list as input.
Get mean vectors of numerical features of the input.
Scale the mean-input as well as dataset numerical features.
calculate eculidean distance b/w mean-input and dataset.
Fetch the top 10 songs with maximum similarity.
"""
def recommend_songs(song_list, spotify_data, n_songs=10):
metadata_cols = ["name", "year", "artists"]
song_dict = flatten_dict_list(song_list)
song_center = get_mean_vector(song_list, spotify_data)
# print(f'song_center {song_center}')
    scaler = song_cluster_pipeline.steps[0][1]  # the fitted StandardScaler from the song pipeline
scaled_data = scaler.transform(spotify_data[number_cols])
scaled_song_center = scaler.transform(song_center.reshape(1, -1))
distances = cdist(scaled_song_center, scaled_data, "cosine")
# print(f'distances {distances}')
index = list(np.argsort(distances)[:, :n_songs][0])
rec_songs = spotify_data.iloc[index]
rec_songs = rec_songs[~rec_songs["name"].isin(song_dict["name"])]
return rec_songs[metadata_cols].to_dict(orient="records")
recommend_songs([{"name": "Blinding Lights", "year": 2019}], data)
recommend_songs(
[
{"name": "Come As You Are", "year": 1991},
{"name": "Smells Like Teen Spirit", "year": 1991},
{"name": "Lithium", "year": 1992},
{"name": "All Apologies", "year": 1993},
{"name": "Stay Away", "year": 1993},
],
data,
)
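# For readability (a sketch): recommend_songs returns a list of dicts, so the output can be shown as
# a small table.
recommendations = recommend_songs([{"name": "Blinding Lights", "year": 2019}], data)
pd.DataFrame(recommendations)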
|
/fsx/loubna/kaggle_data/kaggle-code-data/data/0129/773/129773256.ipynb
| null | null |
[{"Id": 129773256, "ScriptId": 38594779, "ParentScriptVersionId": NaN, "ScriptLanguageId": 9, "AuthorUserId": 12696049, "CreationDate": "05/16/2023 11:20:05", "VersionNumber": 1.0, "Title": "MUSIC RECOMMENDATION SYSTEM USING SPOTIFY", "EvaluationDate": "05/16/2023", "IsChange": true, "TotalLines": 331.0, "LinesInsertedFromPrevious": 331.0, "LinesChangedFromPrevious": 0.0, "LinesUnchangedFromPrevious": 0.0, "LinesInsertedFromFork": NaN, "LinesDeletedFromFork": NaN, "LinesChangedFromFork": NaN, "LinesUnchangedFromFork": NaN, "TotalVotes": 0}]
| null | null | null | null |
| false | 0 | 3,253 | 0 | 3,253 | 3,253 |
||
129773799
|
# # *Loading libraries*
# loading libraries
import os
import tempfile
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
colors = plt.rcParams["axes.prop_cycle"].by_key()["color"]
import seaborn as sns
from tqdm.auto import tqdm
tqdm.pandas()
from sklearn import metrics
from sklearn import model_selection
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import GridSearchCV
from sklearn.ensemble import RandomForestClassifier
from sklearn.ensemble import ExtraTreesClassifier
from sklearn.ensemble import AdaBoostClassifier
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC
from tensorflow import keras
from tensorflow.keras.layers import Input, Dense
from tensorflow.keras.models import Sequential
from tensorflow.keras.callbacks import EarlyStopping
from tensorflow.keras import activations
# loading datasets
path_train = "/kaggle/input/icr-identify-age-related-conditions/train.csv"
path_test = "/kaggle/input/icr-identify-age-related-conditions/test.csv"
path_submis = "/kaggle/input/icr-identify-age-related-conditions/sample_submission.csv"
path_greeks = "/kaggle/input/icr-identify-age-related-conditions/greeks.csv"
train = pd.read_csv(path_train).drop(columns="Id")
test = pd.read_csv(path_test).drop(columns="Id")
greeks = pd.read_csv(path_greeks)
train["EJ"] = train["EJ"].map({"A": 0, "B": 1})
test["EJ"] = test["EJ"].map({"A": 0, "B": 1})
# # Exploratory Data Analysis
# shape for each datasets
print(f"Shape of the train data : {train.shape}")
print(f"Shape of the test data : {test.shape}")
# checking missing values train dataset
train_miss = train.isnull().sum()
print(f"Column Count")
for index, row in train_miss[train_miss > 0].items():
print(f"{index} {row}")
train.describe().transpose()
train.info()
# ***We can use visualization techniques to discover missing values. A heatmap works well here: each highlighted cell marks a missing value in the corresponding row and column.***
plt.figure(figsize=(16, 14))
sns.heatmap(train.isnull(), yticklabels=False, cbar=False, cmap="PuBuGn")
plt.show()
# ***There are some common methods for handling missing values in a Pandas DataFrame: fillna(), interpolate() and SimpleImputer from sklearn.impute***
# fill missing values with the mean of the column
train_mean_filled = train.copy()
train_mean_filled.fillna(train_mean_filled.mean(), inplace=True)
# correlation coefficent columns for target
corr_target = train_mean_filled.corrwith(train_mean_filled["Class"])[:-1].sort_values(
ascending=False
)
plt.figure(figsize=(10, 10))
sns.barplot(y=corr_target.index, x=corr_target.values)
plt.show()
# interpolate missing values using polynomial interpolation (order 5)
train_interpolate = train.copy()
train_interpolate = train_interpolate.interpolate(method="polynomial", order=5)
# correlation coefficent columns for target
corr_target = train_interpolate.corrwith(train_interpolate["Class"])[:-1].sort_values(
ascending=False
)
plt.figure(figsize=(10, 10))
sns.barplot(y=corr_target.index, x=corr_target.values)
plt.show()
from sklearn.impute import SimpleImputer
# create an imputer object and fit it to the data
imputer = SimpleImputer(strategy="mean")
imputer.fit(train)
# transform the data and replace missing values
train_imputed = pd.DataFrame(imputer.transform(train), columns=train.columns)
# correlation coefficent columns for target
corr_target = train_imputed.corrwith(train_imputed["Class"])[:-1].sort_values(
ascending=False
)
plt.figure(figsize=(10, 10))
sns.barplot(y=corr_target.index, x=corr_target.values)
plt.show()
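# Quick sanity check (a sketch): how many missing values remain under each strategy. Note that
# polynomial interpolation may leave leading NaNs it cannot fill.
for name, frame in [
    ("mean fill", train_mean_filled),
    ("interpolation", train_interpolate),
    ("SimpleImputer", train_imputed),
]:
    print(f"{name}: {frame.isnull().sum().sum()} missing values left")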
corr = train.iloc[:, 1:].corr()
mask = np.triu(np.ones_like(corr, dtype=bool))
plt.figure(figsize=(16, 14))
ax = sns.heatmap(
corr,
vmin=-1,
vmax=1,
center=0,
cmap=sns.diverging_palette(20, 220, n=200),
square=True,
mask=mask,
)
ax.set_xticklabels(ax.get_xticklabels(), rotation=45, horizontalalignment="right")
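# Follow-up (a sketch): list the most strongly correlated feature pairs from the matrix above.
upper_triangle = corr.abs().where(np.triu(np.ones(corr.shape, dtype=bool), k=1))
print(upper_triangle.stack().sort_values(ascending=False).head(10))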
labels = ["Class 0", "Class 1"]
sizes = [train["Class"].tolist().count(0), train["Class"].tolist().count(1)]
explode = (0, 0.1)
fig, ax = plt.subplots()
ax.pie(
sizes,
explode=explode,
labels=labels,
autopct="%1.2f%%",
shadow=True,
startangle=180,
)
plt.show()
# multiple plots with seaborn
for x, y in zip(
train_mean_filled.iloc[:, :-29].columns.tolist(),
train_mean_filled.iloc[:, -29:-1].columns.tolist(),
):
fig, axs = plt.subplots(ncols=3, figsize=(15, 5))
sns.scatterplot(data=train_mean_filled, x=x, y=y, hue="Class", ax=axs[0])
sns.rugplot(data=train_mean_filled, x=x, y=y, hue="Class", ax=axs[0])
sns.histplot(
data=train_mean_filled, x=x, hue="Class", color="blue", kde=True, ax=axs[1]
)
sns.histplot(
data=train_mean_filled, x=y, hue="Class", color="green", kde=True, ax=axs[2]
)
plt.show()
# Condition the regression fit on another variable and represent it using color
plt.figure(figsize=(12, 10), edgecolor="blue", frameon=False)
for x, y in zip(
train_mean_filled.iloc[:, :-29].columns.tolist(),
train_mean_filled.iloc[:, -29:-1].columns.tolist(),
):
sns.lmplot(data=train_mean_filled, x=x, y=y, hue="Class")
plt.title("Regression plot with " + x + " and " + y + " columns.")
plt.show()
# # **Building models**
from sklearn.model_selection import KFold, StratifiedKFold, GridSearchCV
from xgboost import XGBClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score
# split data into X and y
X = train_mean_filled.iloc[:, :-1]
Y = train_mean_filled.iloc[:, -1]
# split data into train and test sets
seed = 7
test_size = 0.15
X_train, X_test, y_train, y_test = train_test_split(
X, Y, test_size=test_size, random_state=seed
)
# fit model on training data
model = XGBClassifier(
eval_metric="logloss",
learning_rate=0.05,
max_delta_step=6,
booster="gbtree",
early_stopping_rounds=15, # set early stopping rounds in constructor
n_estimators=500,
)
# XGBClassifier(base_score = 0.5, booster = 'gbtree', colsample_bylevel = 1,
# colsample_bytree=1, gamma = 0, learning_rate = 0.01, max_delta_step = 6,
# max_depth=3, min_child_weight=1, missing=None, n_estimators = 500,
# n_jobs = 1, nthread = None, objective = 'binary:logistic', random_state = 0,
# reg_alpha = 0, reg_lambda = 1, scale_pos_weight = 23.4, seed = None,
# silent = True, subsample = 1)
eval_set = [(X_test, y_test)]
model.fit(X_train, y_train, eval_set=eval_set, verbose=True)
# make predictions for test data
y_pred = model.predict(X_test)
predictions = [round(value) for value in y_pred]
# evaluate predictions
accuracy = accuracy_score(y_test, predictions)
print("Accuracy: %.2f%%" % (accuracy * 100.0))
# ====================================================
# Metric
# ====================================================
def balanced_log_loss(y_true, y_pred):
y_pred = np.clip(y_pred, 1e-15, 1 - 1e-15)
nc = np.bincount(y_true)
w0, w1 = 1 / (nc[0] / y_true.shape[0]), 1 / (nc[1] / y_true.shape[0])
balanced_log_loss_score = (
-w0 / nc[0] * (np.sum(np.where(y_true == 0, 1, 0) * np.log(1 - y_pred)))
- w1 / nc[1] * (np.sum(np.where(y_true != 0, 1, 0) * np.log(y_pred)))
) / (w0 + w1)
return balanced_log_loss_score
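# Quick check (a sketch): apply the competition metric defined above to the XGBoost hold-out
# probabilities (model here is still the fitted XGBClassifier).
holdout_proba = model.predict_proba(X_test)[:, 1]  # predicted probability of Class 1
print("Balanced log loss: %.4f" % balanced_log_loss(y_test.to_numpy().astype(int), holdout_proba))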
# # ***Neural network model***
neg, pos = np.bincount(train_mean_filled["Class"])
total = neg + pos
print(
"Examples:\n Total: {}\n Positive: {} ({:.2f}% of total)\n".format(
total, pos, 100 * pos / total
)
)
# The dataset is in a single pandas DataFrame. Split it into training, validation, and test sets (the two 80:20 splits below give roughly a 64:16:20 ratio):
# Use a utility from sklearn to split and shuffle your dataset.
train_df, test_df = train_test_split(train_mean_filled, test_size=0.2)
train_df, val_df = train_test_split(
    train_df, test_size=0.2
)  # split validation out of the training portion, not the full data, so the sets do not overlap
# Form np arrays of labels and features.
train_labels = np.array(train_df.pop("Class"))
bool_train_labels = train_labels != 0
val_labels = np.array(val_df.pop("Class"))
test_labels = np.array(test_df.pop("Class"))
train_features = np.array(train_df)
val_features = np.array(val_df)
test_features = np.array(test_df)
# Normalize the input features using the sklearn StandardScaler. This will set the mean to 0 and standard deviation to 1.
scaler = StandardScaler()
train_features = scaler.fit_transform(train_features)
val_features = scaler.transform(val_features)
test_features = scaler.transform(test_features)
train_features = np.clip(train_features, -5, 5)
val_features = np.clip(val_features, -5, 5)
test_features = np.clip(test_features, -5, 5)
print("Training labels shape:", train_labels.shape)
print("Validation labels shape:", val_labels.shape)
print("Test labels shape:", test_labels.shape)
print("Training features shape:", train_features.shape)
print("Validation features shape:", val_features.shape)
print("Test features shape:", test_features.shape)
# Look at the data distribution
pos_df = pd.DataFrame(train_features[bool_train_labels], columns=train_df.columns)
neg_df = pd.DataFrame(train_features[~bool_train_labels], columns=train_df.columns)
sns.jointplot(x=pos_df["AB"], y=pos_df["AF"], kind="hex", xlim=(-5, 5), ylim=(-5, 5))
plt.suptitle("Positive distribution")
sns.jointplot(x=neg_df["AB"], y=neg_df["AF"], kind="hex", xlim=(-5, 5), ylim=(-5, 5))
_ = plt.suptitle("Negative distribution")
# **Define the model and metrics**
# *Define a function that creates a simple neural network with a densely connected hidden layer, a dropout layer to reduce overfitting, and an output sigmoid layer that returns the probability of the positive class:*
METRICS = [
keras.metrics.TruePositives(name="tp"),
keras.metrics.FalsePositives(name="fp"),
keras.metrics.TrueNegatives(name="tn"),
keras.metrics.FalseNegatives(name="fn"),
keras.metrics.BinaryAccuracy(name="accuracy"),
keras.metrics.Precision(name="precision"),
keras.metrics.Recall(name="recall"),
keras.metrics.AUC(name="auc"),
keras.metrics.AUC(name="prc", curve="PR"), # precision-recall curve
]
def make_model(metrics=METRICS, output_bias=None):
if output_bias is not None:
output_bias = tf.keras.initializers.Constant(output_bias)
model = keras.Sequential(
[
keras.layers.Dense(
16, activation="relu", input_shape=(train_features.shape[-1],)
),
keras.layers.Dropout(0.2),
keras.layers.Dense(1, activation="sigmoid", bias_initializer=output_bias),
]
)
model.compile(
optimizer=keras.optimizers.Adam(learning_rate=1e-3),
loss=keras.losses.BinaryCrossentropy(),
metrics=metrics,
)
return model
import tensorflow as tf
EPOCHS = 1000
BATCH_SIZE = 2048
early_stopping = keras.callbacks.EarlyStopping(
    monitor="val_prc",  # monitor a metric that is actually logged: validation PR-AUC from METRICS above
    min_delta=0.05,
    verbose=2,
    patience=10,
    mode="max",
    restore_best_weights=True,
)
model = make_model()
model.summary()
model.predict(train_features[:10])
results = model.evaluate(train_features, train_labels, batch_size=BATCH_SIZE, verbose=2)
print("Loss: {:0.4f}".format(results[0]))
initial_bias = np.log([pos / neg])
initial_bias
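# Why log(pos / neg)? With zero initial weights the model's first output is sigmoid(b0), and choosing
# b0 = log(pos / neg) makes sigmoid(b0) = pos / (pos + neg), i.e. the observed base rate.
# Quick numerical check of that identity:
print("sigmoid(initial_bias):", 1 / (1 + np.exp(-initial_bias)))
print("positive base rate:   ", pos / total)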
# Set that as the initial bias, and the model will give much more reasonable initial guesses.
model = make_model(output_bias=initial_bias)
model.predict(train_features[:10])
# With this initialization the initial loss should be approximately:
results = model.evaluate(train_features, train_labels, batch_size=BATCH_SIZE, verbose=0)
print("Loss: {:0.4f}".format(results[0]))
initial_weights = os.path.join(tempfile.mkdtemp(), "initial_weights")
model.save_weights(initial_weights)
model = make_model()
model.load_weights(initial_weights)
model.layers[-1].bias.assign([0.0])
zero_bias_history = model.fit(
train_features,
train_labels,
batch_size=BATCH_SIZE,
epochs=400,
validation_data=(val_features, val_labels),
verbose=0,
)
model = make_model()
model.load_weights(initial_weights)
careful_bias_history = model.fit(
train_features,
train_labels,
batch_size=BATCH_SIZE,
epochs=400,
validation_data=(val_features, val_labels),
verbose=0,
)
def plot_loss(history, label, n):
# Use a log scale on y-axis to show the wide range of values.
plt.semilogy(
history.epoch, history.history["loss"], color=colors[n], label="Train " + label
)
plt.semilogy(
history.epoch,
history.history["val_loss"],
color=colors[n],
label="Val " + label,
linestyle="--",
)
plt.xlabel("Epoch")
plt.ylabel("Loss")
plt.legend()
plot_loss(zero_bias_history, "Zero Bias", 0)
plot_loss(careful_bias_history, "Careful Bias", 1)
# # **Train the model**
model = make_model()
model.load_weights(initial_weights)
baseline_history = model.fit(
train_features,
train_labels,
batch_size=BATCH_SIZE,
epochs=300,
callbacks=[early_stopping],
validation_data=(val_features, val_labels),
)
# **Check training history**
# *In this section, we will produce plots of our model's accuracy and loss on the training and validation set.*
# *These are useful to check for overfitting, which we can learn more about in the Overfit and underfit tutorial.*
# *Additionally, these plots can be produced for any of the metrics defined above.*
def plot_metrics(history):
metrics = ["loss", "prc", "precision", "recall"]
plt.figure(figsize=(10, 8))
for n, metric in enumerate(metrics):
name = metric.replace("_", " ").capitalize()
plt.subplot(2, 2, n + 1)
plt.plot(history.epoch, history.history[metric], color=colors[0], label="Train")
plt.plot(
history.epoch,
history.history["val_" + metric],
color=colors[0],
linestyle="--",
label="Val",
)
plt.xlabel("Epoch")
plt.ylabel(name)
if metric == "loss":
plt.ylim([0, plt.ylim()[1]])
elif metric == "auc":
plt.ylim([0.8, 1])
else:
plt.ylim([0, 1])
plt.legend()
plot_metrics(baseline_history)
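# Final step (a sketch, assuming the splits above): evaluate the trained model on the held-out test
# features and print each logged metric.
test_results = model.evaluate(test_features, test_labels, batch_size=BATCH_SIZE, verbose=0)
for metric_name, value in zip(model.metrics_names, test_results):
    print(f"{metric_name}: {value:.4f}")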
|
/fsx/loubna/kaggle_data/kaggle-code-data/data/0129/773/129773799.ipynb
| null | null |
[{"Id": 129773799, "ScriptId": 38471728, "ParentScriptVersionId": NaN, "ScriptLanguageId": 9, "AuthorUserId": 9401530, "CreationDate": "05/16/2023 11:24:33", "VersionNumber": 5.0, "Title": "Age-Related Conditions EDA and Classification", "EvaluationDate": "05/16/2023", "IsChange": true, "TotalLines": 451.0, "LinesInsertedFromPrevious": 242.0, "LinesChangedFromPrevious": 0.0, "LinesUnchangedFromPrevious": 209.0, "LinesInsertedFromFork": NaN, "LinesDeletedFromFork": NaN, "LinesChangedFromFork": NaN, "LinesUnchangedFromFork": NaN, "TotalVotes": 0}]
| null | null | null | null |
# # *Loading libraries*
# loading libraries
import os
import tempfile
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
colors = plt.rcParams["axes.prop_cycle"].by_key()["color"]
import seaborn as sns
from tqdm.auto import tqdm
tqdm.pandas()
from sklearn import metrics
from sklearn import model_selection
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import GridSearchCV
from sklearn.ensemble import RandomForestClassifier
from sklearn.ensemble import ExtraTreesClassifier
from sklearn.ensemble import AdaBoostClassifier
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC
from tensorflow import keras
from tensorflow.keras.layers import Input, Dense
from tensorflow.keras.models import Sequential
from tensorflow.keras.callbacks import EarlyStopping
from tensorflow.keras import activations
# loading datasets
path_train = "/kaggle/input/icr-identify-age-related-conditions/train.csv"
path_test = "/kaggle/input/icr-identify-age-related-conditions/test.csv"
path_submis = "/kaggle/input/icr-identify-age-related-conditions/sample_submission.csv"
path_greeks = "/kaggle/input/icr-identify-age-related-conditions/greeks.csv"
train = pd.read_csv(path_train).drop(columns="Id")
test = pd.read_csv(path_test).drop(columns="Id")
greeks = pd.read_csv(path_greeks)
train["EJ"] = train["EJ"].map({"A": 0, "B": 1})
test["EJ"] = test["EJ"].map({"A": 0, "B": 1})
# # Exploratory Data Analysis
# shape for each datasets
print(f"Shape of the train data : {train.shape}")
print(f"Shape of the test data : {test.shape}")
# checking missing values train dataset
train_miss = train.isnull().sum()
print(f"Column Count")
for index, row in train_miss[train_miss > 0].items():
print(f"{index} {row}")
train.describe().transpose()
train.info()
# ***We can use visualization techniques to discover missing values. A heatmap works well here: each highlighted cell marks a missing value in the corresponding row and column.***
plt.figure(figsize=(16, 14))
sns.heatmap(train.isnull(), yticklabels=False, cbar=False, cmap="PuBuGn")
plt.show()
# ***There are some common methods for handling missing values in a Pandas DataFrame: fillna(), interpolate() and SimpleImputer from sklearn.impute***
# fill missing values with the mean of the column
train_mean_filled = train.copy()
train_mean_filled.fillna(train_mean_filled.mean(), inplace=True)
# correlation coefficent columns for target
corr_target = train_mean_filled.corrwith(train_mean_filled["Class"])[:-1].sort_values(
ascending=False
)
plt.figure(figsize=(10, 10))
sns.barplot(y=corr_target.index, x=corr_target.values)
plt.show()
# interpolate missing values using polynomial interpolation (order 5)
train_interpolate = train.copy()
train_interpolate = train_interpolate.interpolate(method="polynomial", order=5)
# correlation coefficent columns for target
corr_target = train_interpolate.corrwith(train_interpolate["Class"])[:-1].sort_values(
ascending=False
)
plt.figure(figsize=(10, 10))
sns.barplot(y=corr_target.index, x=corr_target.values)
plt.show()
from sklearn.impute import SimpleImputer
# create an imputer object and fit it to the data
imputer = SimpleImputer(strategy="mean")
imputer.fit(train)
# transform the data and replace missing values
train_imputed = pd.DataFrame(imputer.transform(train), columns=train.columns)
# correlation coefficent columns for target
corr_target = train_imputed.corrwith(train_imputed["Class"])[:-1].sort_values(
ascending=False
)
plt.figure(figsize=(10, 10))
sns.barplot(y=corr_target.index, x=corr_target.values)
plt.show()
corr = train.iloc[:, 1:].corr()
mask = np.triu(np.ones_like(corr, dtype=bool))
plt.figure(figsize=(16, 14))
ax = sns.heatmap(
corr,
vmin=-1,
vmax=1,
center=0,
cmap=sns.diverging_palette(20, 220, n=200),
square=True,
mask=mask,
)
ax.set_xticklabels(ax.get_xticklabels(), rotation=45, horizontalalignment="right")
labels = ["Class 0", "Class 1"]
sizes = [train["Class"].tolist().count(0), train["Class"].tolist().count(1)]
explode = (0, 0.1)
fig, ax = plt.subplots()
ax.pie(
sizes,
explode=explode,
labels=labels,
autopct="%1.2f%%",
shadow=True,
startangle=180,
)
plt.show()
# multiple plots with seaborn
for x, y in zip(
train_mean_filled.iloc[:, :-29].columns.tolist(),
train_mean_filled.iloc[:, -29:-1].columns.tolist(),
):
fig, axs = plt.subplots(ncols=3, figsize=(15, 5))
sns.scatterplot(data=train_mean_filled, x=x, y=y, hue="Class", ax=axs[0])
sns.rugplot(data=train_mean_filled, x=x, y=y, hue="Class", ax=axs[0])
sns.histplot(
data=train_mean_filled, x=x, hue="Class", color="blue", kde=True, ax=axs[1]
)
sns.histplot(
data=train_mean_filled, x=y, hue="Class", color="green", kde=True, ax=axs[2]
)
plt.show()
# Condition the regression fit on another variable and represent it using color
plt.figure(figsize=(12, 10), edgecolor="blue", frameon=False)
for x, y in zip(
train_mean_filled.iloc[:, :-29].columns.tolist(),
train_mean_filled.iloc[:, -29:-1].columns.tolist(),
):
sns.lmplot(data=train_mean_filled, x=x, y=y, hue="Class")
plt.title("Regression plot with " + x + " and " + y + " columns.")
plt.show()
# # **Building models**
from sklearn.model_selection import KFold, StratifiedKFold, GridSearchCV
from xgboost import XGBClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score
# split data into X and y
X = train_mean_filled.iloc[:, :-1]
Y = train_mean_filled.iloc[:, -1]
# split data into train and test sets
seed = 7
test_size = 0.15
X_train, X_test, y_train, y_test = train_test_split(
X, Y, test_size=test_size, random_state=seed
)
# fit model on training data
model = XGBClassifier(
eval_metric="logloss",
learning_rate=0.05,
max_delta_step=6,
booster="gbtree",
early_stopping_rounds=15, # set early stopping rounds in constructor
n_estimators=500,
)
# XGBClassifier(base_score = 0.5, booster = 'gbtree', colsample_bylevel = 1,
# colsample_bytree=1, gamma = 0, learning_rate = 0.01, max_delta_step = 6,
# max_depth=3, min_child_weight=1, missing=None, n_estimators = 500,
# n_jobs = 1, nthread = None, objective = 'binary:logistic', random_state = 0,
# reg_alpha = 0, reg_lambda = 1, scale_pos_weight = 23.4, seed = None,
# silent = True, subsample = 1)
eval_set = [(X_test, y_test)]
model.fit(X_train, y_train, eval_set=eval_set, verbose=True)
# make predictions for test data
y_pred = model.predict(X_test)
predictions = [round(value) for value in y_pred]
# evaluate predictions
accuracy = accuracy_score(y_test, predictions)
print("Accuracy: %.2f%%" % (accuracy * 100.0))
# ====================================================
# Metric
# ====================================================
def balanced_log_loss(y_true, y_pred):
y_pred = np.clip(y_pred, 1e-15, 1 - 1e-15)
nc = np.bincount(y_true)
w0, w1 = 1 / (nc[0] / y_true.shape[0]), 1 / (nc[1] / y_true.shape[0])
balanced_log_loss_score = (
-w0 / nc[0] * (np.sum(np.where(y_true == 0, 1, 0) * np.log(1 - y_pred)))
- w1 / nc[1] * (np.sum(np.where(y_true != 0, 1, 0) * np.log(y_pred)))
) / (w0 + w1)
return balanced_log_loss_score
# # ***Neural network model***
neg, pos = np.bincount(train_mean_filled["Class"])
total = neg + pos
print(
"Examples:\n Total: {}\n Positive: {} ({:.2f}% of total)\n".format(
total, pos, 100 * pos / total
)
)
# The dataset is in a single pandas DataFrame. Split it into training, validation, and test sets (the two 80:20 splits below give roughly a 64:16:20 ratio):
# Use a utility from sklearn to split and shuffle your dataset.
train_df, test_df = train_test_split(train_mean_filled, test_size=0.2)
train_df, val_df = train_test_split(
    train_df, test_size=0.2
)  # split validation out of the training portion, not the full data, so the sets do not overlap
# Form np arrays of labels and features.
train_labels = np.array(train_df.pop("Class"))
bool_train_labels = train_labels != 0
val_labels = np.array(val_df.pop("Class"))
test_labels = np.array(test_df.pop("Class"))
train_features = np.array(train_df)
val_features = np.array(val_df)
test_features = np.array(test_df)
# Normalize the input features using the sklearn StandardScaler. This will set the mean to 0 and standard deviation to 1.
scaler = StandardScaler()
train_features = scaler.fit_transform(train_features)
val_features = scaler.transform(val_features)
test_features = scaler.transform(test_features)
train_features = np.clip(train_features, -5, 5)
val_features = np.clip(val_features, -5, 5)
test_features = np.clip(test_features, -5, 5)
print("Training labels shape:", train_labels.shape)
print("Validation labels shape:", val_labels.shape)
print("Test labels shape:", test_labels.shape)
print("Training features shape:", train_features.shape)
print("Validation features shape:", val_features.shape)
print("Test features shape:", test_features.shape)
# Look at the data distribution
pos_df = pd.DataFrame(train_features[bool_train_labels], columns=train_df.columns)
neg_df = pd.DataFrame(train_features[~bool_train_labels], columns=train_df.columns)
sns.jointplot(x=pos_df["AB"], y=pos_df["AF"], kind="hex", xlim=(-5, 5), ylim=(-5, 5))
plt.suptitle("Positive distribution")
sns.jointplot(x=neg_df["AB"], y=neg_df["AF"], kind="hex", xlim=(-5, 5), ylim=(-5, 5))
_ = plt.suptitle("Negative distribution")
# **Define the model and metrics**
# *Define a function that creates a simple neural network with a densely connected hidden layer, a dropout layer to reduce overfitting, and an output sigmoid layer that returns the probability of a sample belonging to the positive class:*
METRICS = [
keras.metrics.TruePositives(name="tp"),
keras.metrics.FalsePositives(name="fp"),
keras.metrics.TrueNegatives(name="tn"),
keras.metrics.FalseNegatives(name="fn"),
keras.metrics.BinaryAccuracy(name="accuracy"),
keras.metrics.Precision(name="precision"),
keras.metrics.Recall(name="recall"),
keras.metrics.AUC(name="auc"),
keras.metrics.AUC(name="prc", curve="PR"), # precision-recall curve
]
def make_model(metrics=METRICS, output_bias=None):
if output_bias is not None:
output_bias = tf.keras.initializers.Constant(output_bias)
model = keras.Sequential(
[
keras.layers.Dense(
16, activation="relu", input_shape=(train_features.shape[-1],)
),
keras.layers.Dropout(0.2),
keras.layers.Dense(1, activation="sigmoid", bias_initializer=output_bias),
]
)
model.compile(
optimizer=keras.optimizers.Adam(learning_rate=1e-3),
loss=keras.losses.BinaryCrossentropy(),
metrics=metrics,
)
return model
import tensorflow as tf
EPOCHS = 1000
BATCH_SIZE = 2048
early_stopping = keras.callbacks.EarlyStopping(
    monitor="val_loss",  # "log_loss" is not a logged metric name; the validation loss here is the binary cross-entropy (log loss)
    min_delta=0.05,
    verbose=2,
    patience=10,
    mode="min",  # lower loss is better
    restore_best_weights=True,
)
model = make_model()
model.summary()
model.predict(train_features[:10])
results = model.evaluate(train_features, train_labels, batch_size=BATCH_SIZE, verbose=2)
print("Loss: {:0.4f}".format(results[0]))
initial_bias = np.log([pos / neg])
initial_bias
# Set that as the initial bias, and the model will give much more reasonable initial guesses.
model = make_model(output_bias=initial_bias)
model.predict(train_features[:10])
# With this initialization the initial loss should be approximately:
results = model.evaluate(train_features, train_labels, batch_size=BATCH_SIZE, verbose=0)
print("Loss: {:0.4f}".format(results[0]))
initial_weights = os.path.join(tempfile.mkdtemp(), "initial_weights")
model.save_weights(initial_weights)
model = make_model()
model.load_weights(initial_weights)
model.layers[-1].bias.assign([0.0])
zero_bias_history = model.fit(
train_features,
train_labels,
batch_size=BATCH_SIZE,
epochs=400,
validation_data=(val_features, val_labels),
verbose=0,
)
model = make_model()
model.load_weights(initial_weights)
careful_bias_history = model.fit(
train_features,
train_labels,
batch_size=BATCH_SIZE,
epochs=400,
validation_data=(val_features, val_labels),
verbose=0,
)
def plot_loss(history, label, n):
# Use a log scale on y-axis to show the wide range of values.
plt.semilogy(
history.epoch, history.history["loss"], color=colors[n], label="Train " + label
)
plt.semilogy(
history.epoch,
history.history["val_loss"],
color=colors[n],
label="Val " + label,
linestyle="--",
)
plt.xlabel("Epoch")
plt.ylabel("Loss")
plt.legend()
plot_loss(zero_bias_history, "Zero Bias", 0)
plot_loss(careful_bias_history, "Careful Bias", 1)
# # **Train the model**
model = make_model()
model.load_weights(initial_weights)
baseline_history = model.fit(
train_features,
train_labels,
batch_size=BATCH_SIZE,
epochs=300,
callbacks=[early_stopping],
validation_data=(val_features, val_labels),
)
# **Check training history**
# *In this section, we will produce plots of our model's metrics on the training and validation sets.*
# *These are useful to check for overfitting, which we can learn more about in the Overfit and underfit tutorial.*
# *We can produce these plots for any of the metrics created above; loss, PRC, precision, and recall are plotted below.*
def plot_metrics(history):
metrics = ["loss", "prc", "precision", "recall"]
plt.figure(figsize=(10, 8))
for n, metric in enumerate(metrics):
name = metric.replace("_", " ").capitalize()
plt.subplot(2, 2, n + 1)
plt.plot(history.epoch, history.history[metric], color=colors[0], label="Train")
plt.plot(
history.epoch,
history.history["val_" + metric],
color=colors[0],
linestyle="--",
label="Val",
)
plt.xlabel("Epoch")
plt.ylabel(name)
if metric == "loss":
plt.ylim([0, plt.ylim()[1]])
elif metric == "auc":
plt.ylim([0.8, 1])
else:
plt.ylim([0, 1])
plt.legend()
plot_metrics(baseline_history)
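# A quick sketch (not in the original notebook): score the trained network on the held-out test split with the same metrics used during training.
test_results = model.evaluate(test_features, test_labels, batch_size=BATCH_SIZE, verbose=0)
for name, value in zip(model.metrics_names, test_results):
    print(name, ":", round(float(value), 4))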
| false | 0 | 4,525 | 0 | 4,525 | 4,525 |
||
129239562
|
<jupyter_start><jupyter_text>HR Competency Scores for Screening
##### Context
Recruitment and candidate selection play a critical role in determining the success of an organization. An effective initial screening process can significantly improve the quality of the hiring pool and increase the chances of finding the right candidate for any given role. This dataset focuses on both behavioral and functional competency scores, which are essential aspects of a candidate's potential fit and contribution to the organization.
##### Sources
The data in this dataset has been collected from an anonymous company's internal HR department and published in a normalized form. The dataset combines the scores from two key assessments:
1. Functional competency test: Utilized to evaluate a candidate's hard skills and domain knowledge.
2. HR behavior test: An assessment tool focused on evaluating soft or behavior skills, crucial for teamwork and adaptability within an organization.
##### Young Researchers' Contribution
We were approached by a group of young researchers interested in the explainable AI (XAI) problem. They aimed to analyze HR data to understand why specific candidates were called for interviews while others were not. With their valuable input and help in preprocessing the data, we have made this dataset available for the wider research community.
##### Inspiration
The inspiration behind sharing this dataset was the growing need for insights into the hiring process and the importance of selecting candidates who possess a balance of functional and behavioral competencies. With the added value of XAI research, we hope to encourage researchers and data scientists to analyze the initial screening process, build models to optimize candidate selection, explain their decisions, and uncover new insights that can enhance recruitment strategies.
The dataset can be employed for a wide range of applications, including:
1. Identifying the most significant factors in determining a candidate's eligibility for an interview.
2. Developing machine learning models to predict and explain the likelihood of a candidate being called for an interview.
3. Analyzing the balance between functional competencies and behavioral skills required for a good fit in the organization.
4. Investigating the impact of different skill combinations on the overall competency scores.
We hope this dataset inspires researchers to explore new dimensions of the hiring process and contribute to building better and more transparent recruitment strategies.
Kaggle dataset identifier: hr-competency-scores-for-screening
<jupyter_script># HR Competency Scores for Screening
# Context
# Recruitment and candidate selection play a critical role in determining the success of an organization. An effective initial screening process can significantly improve the quality of the hiring pool and increase the chances of finding the right candidate for any given role. This dataset focuses on both behavioral and functional competency scores, which are essential aspects of a candidate's potential fit and contribution to the organization.
# Sources
# The data in this dataset has been collected from an anonymous company's internal HR department and published in a normalized form. The dataset combines the scores from two key assessments:
# Functional competency test: Utilized to evaluate a candidate's hard skills and domain knowledge.
# HR behavior test: An assessment tool focused on evaluating soft or behavior skills, crucial for teamwork and adaptability within an organization.
# Young Researchers' Contribution
# We were approached by a group of young researchers interested in the explainable AI (XAI) problem. They aimed to analyze HR data to understand why specific candidates were called for interviews while others were not. With their valuable input and help in preprocessing the data, we have made this dataset available for the wider research community.
# Inspiration
# The inspiration behind sharing this dataset was the growing need for insights into the hiring process and the importance of selecting candidates who possess a balance of functional and behavioral competencies. With the added value of XAI research, we hope to encourage researchers and data scientists to analyze the initial screening process, build models to optimize candidate selection, explain their decisions, and uncover new insights that can enhance recruitment strategies.
# The dataset can be employed for a wide range of applications, including:
# Identifying the most significant factors in determining a candidate's eligibility for an interview.
# Developing machine learning models to predict and explain the likelihood of a candidate being called for an interview.
# Analyzing the balance between functional competencies and behavioral skills required for a good fit in the organization.
# Investigating the impact of different skill combinations on the overall competency scores.
# We hope this dataset inspires researchers to explore new dimensions of the hiring process and contribute to building better and more transparent recruitment strategies.
import numpy as np # linear algebra
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)
# Input data files are available in the read-only "../input/" directory
# For example, running this (by clicking run or pressing Shift+Enter) will list all files under the input directory
import os
for dirname, _, filenames in os.walk("/kaggle/input"):
for filename in filenames:
print(os.path.join(dirname, filename))
# You can write up to 20GB to the current directory (/kaggle/working/) that gets preserved as output when you create a version using "Save & Run All"
# You can also write temporary files to /kaggle/temp/, but they won't be saved outside of the current session
# Import Libs
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
import warnings
warnings.filterwarnings("ignore")
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import precision_recall_curve, RocCurveDisplay
from sklearn.metrics import (
confusion_matrix,
ConfusionMatrixDisplay,
accuracy_score,
classification_report,
)
# Read Data (the input path is a dataset directory, so pick up the CSV inside it rather than hard-coding its file name)
import glob

df = pd.read_csv(glob.glob("/kaggle/input/hr-competency-scores-for-screening/*.csv")[0])
# View Head
df.head()
# Check if any Null Values
df.isnull().sum()
# Check Datatype
df.info()
# Check Unique Values
for i in df.columns:
print(i, "---->", df[i].unique())
# Check Shape
df.shape
# Columns Name
df.columns
# Plot Histogram
fig, axis = plt.subplots(1, 10, figsize=(20, 10))
df.hist(ax=axis, bins=5, grid=False)
plt.xticks(rotation=90)
plt.show()
# Heatmap
sns.heatmap(df.corr(), annot=True)
# Round years of experience with one decimal
df["years_of_experience"] = round(df["years_of_experience"], 1)
# Bar plot (years_of_experience vs call_for_interview)
sns.barplot(
data=df,
x="years_of_experience",
y="call_for_interview",
)
plt.xticks(rotation=90)
plt.show()
# Line Chart (functional_competency_score - call_for_interview)
sns.lineplot(
data=df,
y="functional_competency_score",
x="call_for_interview",
)
plt.xticks(rotation=90)
plt.show()
# Line chart (top1_skills_score - call_for_interview)
sns.lineplot(
data=df,
y="top1_skills_score",
x="call_for_interview",
)
plt.xticks(rotation=90)
plt.show()
# Line Chart (top2_skills_score - call_for_interview)
sns.lineplot(
data=df,
y="top2_skills_score",
x="call_for_interview",
)
plt.xticks(rotation=90)
plt.show()
# Round Behaviour_competency_score for better graph view
df["behavior_competency_score"] = round(df["behavior_competency_score"], 1)
# Line plot (behaviour_competency_score - call_for_interview)
sns.lineplot(
data=df,
x="behavior_competency_score",
y="call_for_interview",
)
plt.xticks(rotation=90)
plt.show()
# Line plot (top1_behaviour_skill_score - call_for_interview)
sns.lineplot(
data=df,
x="top1_behavior_skill_score",
y="call_for_interview",
)
plt.xticks(rotation=90)
plt.show()
# Line plot (top2_behavior_skill_score - call_for_interview)
sns.lineplot(
data=df,
x="top2_behavior_skill_score",
y="call_for_interview",
)
plt.xticks(rotation=90)
plt.show()
# Line Plot (top3_behaviour_skill_score - call_for_interview)
sns.lineplot(
data=df,
x="top3_behavior_skill_score",
y="call_for_interview",
)
plt.xticks(rotation=90)
plt.show()
# Boxplot
plt.figure(figsize=(10, 15))
ax = df.plot(kind="box", title="boxplot")
plt.xticks(rotation=90)
plt.show()
# Model Building
# We are going for Logistic Regression
X = df.drop("call_for_interview", axis=1)
y = df["call_for_interview"]
X_train, X_test, y_train, y_test = train_test_split(
X, y, test_size=0.2, random_state=42
)
logistic_model = LogisticRegression()
logistic_model.fit(X_train, y_train)
logistic_prediction = logistic_model.predict(X_test)
confusion_matrix(y_test, logistic_prediction)
accuracy_score(y_test, logistic_prediction)
print(classification_report(y_test, logistic_prediction))
ConfusionMatrixDisplay(
    confusion_matrix=confusion_matrix(y_test, logistic_prediction)  # y_true first, then predictions
).plot()
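# A small addition (not in the original notebook): the dataset description highlights identifying the most significant factors, so inspect the fitted coefficients. coef_df is an illustrative name, and because the features were not standardized the magnitudes are only a rough indication of importance.
coef_df = pd.DataFrame({"feature": X.columns, "coefficient": logistic_model.coef_[0]})
print(coef_df.sort_values("coefficient", key=abs, ascending=False))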
from sklearn.metrics import roc_curve, auc

# use predicted probabilities rather than hard class labels so the ROC curve covers all thresholds
logistic_probs = logistic_model.predict_proba(X_test)[:, 1]
fpr, tpr, thresholds = roc_curve(y_test, logistic_probs)
roc_auc = auc(fpr, tpr)
plt.plot(fpr, tpr, label="ROC curve (AUC = %0.2f)" % roc_auc)
plt.xlabel("False Positive Rate")
plt.ylabel("True Positive Rate")
plt.title("Receiver operating characteristic (ROC) curve")
plt.legend(loc="lower right")
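# precision_recall_curve was imported above but never used; as a complementary sketch (not in the original), plot it from the same predicted probabilities used for the ROC curve.
precision, recall, _ = precision_recall_curve(y_test, logistic_probs)
plt.figure()
plt.plot(recall, precision, label="Precision-Recall curve")
plt.xlabel("Recall")
plt.ylabel("Precision")
plt.title("Precision-Recall curve")
plt.legend(loc="lower left")
plt.show()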
##END
|
/fsx/loubna/kaggle_data/kaggle-code-data/data/0129/239/129239562.ipynb
|
hr-competency-scores-for-screening
|
muhammadjawwadismail
|
[{"Id": 129239562, "ScriptId": 38424137, "ParentScriptVersionId": NaN, "ScriptLanguageId": 9, "AuthorUserId": 10864913, "CreationDate": "05/12/2023 05:24:20", "VersionNumber": 1.0, "Title": "HR Competency Scores for Screening (Logistic Reg)", "EvaluationDate": "05/12/2023", "IsChange": true, "TotalLines": 219.0, "LinesInsertedFromPrevious": 219.0, "LinesChangedFromPrevious": 0.0, "LinesUnchangedFromPrevious": 0.0, "LinesInsertedFromFork": NaN, "LinesDeletedFromFork": NaN, "LinesChangedFromFork": NaN, "LinesUnchangedFromFork": NaN, "TotalVotes": 1}]
|
[{"Id": 185111348, "KernelVersionId": 129239562, "SourceDatasetVersionId": 5658852}]
|
[{"Id": 5658852, "DatasetId": 3252282, "DatasourceVersionId": 5734269, "CreatorUserId": 4429155, "LicenseName": "CC BY-SA 4.0", "CreationDate": "05/10/2023 21:38:25", "VersionNumber": 1.0, "Title": "HR Competency Scores for Screening", "Slug": "hr-competency-scores-for-screening", "Subtitle": "Anonymized HR Data for Evaluating Candidate Screening Processes", "Description": "##### Context\n\nRecruitment and candidate selection play a critical role in determining the success of an organization. An effective initial screening process can significantly improve the quality of the hiring pool and increase the chances of finding the right candidate for any given role. This dataset focuses on both behavioral and functional competency scores, which are essential aspects of a candidate's potential fit and contribution to the organization.\n\n##### Sources\n\nThe data in this dataset has been collected from an anonymous company's internal HR department and published in a normalized form. The dataset combines the scores from two key assessments:\n\n1. Functional competency test: Utilized to evaluate a candidate's hard skills and domain knowledge.\n2. HR behavior test: An assessment tool focused on evaluating soft or behavior skills, crucial for teamwork and adaptability within an organization.\n\n##### Young Researchers' Contribution\n\nWe were approached by a group of young researchers interested in the explainable AI (XAI) problem. They aimed to analyze HR data to understand why specific candidates were called for interviews while others were not. With their valuable input and help in preprocessing the data, we have made this dataset available for the wider research community.\n\n##### Inspiration\n\nThe inspiration behind sharing this dataset was the growing need for insights into the hiring process and the importance of selecting candidates who possess a balance of functional and behavioral competencies. With the added value of XAI research, we hope to encourage researchers and data scientists to analyze the initial screening process, build models to optimize candidate selection, explain their decisions, and uncover new insights that can enhance recruitment strategies.\n\nThe dataset can be employed for a wide range of applications, including:\n\n1. Identifying the most significant factors in determining a candidate's eligibility for an interview.\n2. Developing machine learning models to predict and explain the likelihood of a candidate being called for an interview.\n3. Analyzing the balance between functional competencies and behavioral skills required for a good fit in the organization.\n4. Investigating the impact of different skill combinations on the overall competency scores.\n\nWe hope this dataset inspires researchers to explore new dimensions of the hiring process and contribute to building better and more transparent recruitment strategies.", "VersionNotes": "Initial release", "TotalCompressedBytes": 0.0, "TotalUncompressedBytes": 0.0}]
|
[{"Id": 3252282, "CreatorUserId": 4429155, "OwnerUserId": 4429155.0, "OwnerOrganizationId": NaN, "CurrentDatasetVersionId": 5658852.0, "CurrentDatasourceVersionId": 5734269.0, "ForumId": 3317708, "Type": 2, "CreationDate": "05/10/2023 21:38:25", "LastActivityDate": "05/10/2023", "TotalViews": 5601, "TotalDownloads": 897, "TotalVotes": 37, "TotalKernels": 12}]
|
[{"Id": 4429155, "UserName": "muhammadjawwadismail", "DisplayName": "Muhammad Jawad", "RegisterDate": "02/03/2020", "PerformanceTier": 0}]
|
| false | 0 | 1,989 | 1 | 2,516 | 1,989 |
||
129239801
|
# # What is XAI?
# XAI, an abbreviation for Explainable Artificial Intelligence, involves the development and deployment of AI systems and algorithms that can provide clear and easily understandable explanations for their decision-making and predictions. Its main objective is to address the lack of transparency in conventional AI models, particularly those based on machine learning or deep learning. XAI enables users such as researchers, regulators, and end-users to comprehend the reasoning and factors behind the outputs of an AI system. By offering explanations, XAI enhances trust, accountability, and transparency in critical domains like healthcare, finance, and autonomous vehicles. XAI techniques encompass approaches such as post-hoc explanations, interpretable models, and various interpretability methods that facilitate the understanding and validation of AI systems.
# In many applications, understanding the rationale behind a model's predictions is just as crucial as the accuracy of those predictions. However, complex models like ensemble or deep learning models, which are highly accurate, pose challenges for experts trying to interpret them. This creates a trade-off between accuracy and interpretability, as the most accurate models are often the least interpretable.
# # What is the need for XAI?
# Explainable AI (XAI) addresses several important needs in the context of artificial intelligence:
# * Transparency and Trust: XAI instills trust in AI systems by offering understandable explanations for their decisions. Users, including individuals, organizations, and regulators, can have greater confidence when they can comprehend and verify the AI's reasoning.
# * Ethical and Legal Considerations: Various domains, such as healthcare and finance, have legal and ethical requirements for transparency and accountability. XAI ensures compliance by providing explanations for AI outputs, enabling stakeholders to ensure fairness, non-discrimination, and adherence to regulations.
# * Bias and Fairness Mitigation: AI models may inadvertently reflect biases present in the training data, resulting in unfair or discriminatory outcomes. XAI techniques help identify and address such biases, promoting more equitable AI systems.
# * Debugging and Error Detection: XAI assists in identifying errors, limitations, or unexpected behaviors in AI models. Through explanations, it becomes easier to recognize problematic patterns or inputs that may lead to incorrect predictions or undesired results.
# * User Understanding and Collaboration: XAI empowers users to comprehend and engage with AI systems effectively. In complex applications like medical diagnosis or autonomous vehicles, explanations aid users in making informed decisions, collaborating with AI systems, and potentially rectifying or overriding incorrect predictions.
# * Accountability and Regulatory Compliance: XAI supports accountability by enabling AI developers, operators, and users to understand the decision-making process. This is vital for ensuring responsible AI usage and meeting regulatory obligations, particularly in sectors with stringent guidelines.
# Overall, XAI is necessary to address the black box nature of AI systems, providing transparency, interpretability, and explainability to meet societal expectations, legal obligations, and ethical considerations associated with the deployment of AI technology.
# ## Two popular Methods
# LIME (Local Interpretable Model-agnostic Explanations) and SHAP (SHapley Additive exPlanations) are two popular techniques used for interpreting and explaining the predictions of machine learning models. While they both aim to provide interpretability, they differ in their approaches and methodologies.
# LIME: LIME is a model-agnostic method that explains individual predictions by approximating the behavior of a complex model using simpler, interpretable models. It generates local explanations by perturbing the input features and observing the resulting changes in the model's output. LIME assigns importance weights to the features based on their impact on the prediction and builds a local linear model around the instance of interest.
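# To make the mechanics concrete, here is a minimal from-scratch sketch of the LIME idea for a single tabular instance. It is illustrative only (not the lime library's implementation) and assumes standardized numeric features, a 1-D numpy feature vector x, and a classifier exposing predict_proba.
import numpy as np
from sklearn.linear_model import Ridge


def lime_sketch(predict_proba, x, class_idx=0, n_samples=1000, kernel_width=0.75, seed=0):
    rng = np.random.default_rng(seed)
    # 1. perturb the instance of interest with Gaussian noise
    Z = x + rng.normal(scale=1.0, size=(n_samples, x.shape[0]))
    # 2. weight each perturbation by its proximity to the original instance
    distances = np.linalg.norm(Z - x, axis=1)
    weights = np.exp(-(distances**2) / (kernel_width**2))
    # 3. query the black-box model on the perturbations
    y = predict_proba(Z)[:, class_idx]
    # 4. fit a weighted linear surrogate; its coefficients are the local feature attributions
    surrogate = Ridge(alpha=1.0)
    surrogate.fit(Z, y, sample_weight=weights)
    return surrogate.coef_


# Once a classifier is trained further down, something like lime_sketch(model.predict_proba, xTest.values[38]) returns one local weight per feature.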
# SHAP: SHAP is based on the concept of cooperative game theory and provides a unified framework for explaining predictions. It assigns a unique attribution value to each feature by quantifying the contribution of that feature to the prediction. SHAP values are derived by considering all possible feature combinations and computing the average contribution of each feature across different combinations. This ensures fairness and consistency in the attribution.
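# Concretely, SHAP builds on the standard Shapley value from cooperative game theory: for a model f, feature set F, and a feature i, the attribution is the weighted average of i's marginal contributions over all subsets S of the remaining features:
# $$\phi_i = \sum_{S \subseteq F \setminus \{i\}} \frac{|S|!\,(|F|-|S|-1)!}{|F|!}\,\bigl[f_{S \cup \{i\}}(x_{S \cup \{i\}}) - f_S(x_S)\bigr]$$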
# While both LIME and SHAP are valuable tools for interpretability, they have some differences:
# * Model Agnosticism: LIME is explicitly designed to work with any machine learning model, regardless of its underlying algorithm. In contrast, SHAP is also model-agnostic, but it has extensions tailored for specific model types, such as SHAP for tree-based models (TreeSHAP) or SHAP for deep learning models (DeepSHAP).
# * Explanatory Power: SHAP provides a more comprehensive and theoretically grounded explanation by leveraging concepts from cooperative game theory. It offers global explanations that take into account the contribution of each feature across all possible combinations, while LIME primarily focuses on local explanations around individual instances.
# * Computational Complexity: Due to its reliance on computing all possible feature combinations, SHAP can be computationally expensive, especially for models with a large number of features. LIME, on the other hand, typically has lower computational complexity since it approximates the behavior of the model locally using simpler models.
# In summary, LIME and SHAP are both powerful tools for interpretability, with LIME focusing on local explanations using simplified models and SHAP providing comprehensive, theoretically grounded explanations. The choice between them depends on the specific requirements of the use case, the complexity of the model, and the desired level of interpretability.
# An example is given below
# ## Importing Libraries
# importing libraries
import pandas as pd
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report
import shap
import lime
shap.initjs()
# # Importing Dataset
df = sns.load_dataset("iris")
df
# # Exploratory Data Analysis
df.info()
df.isna().sum()
df.describe()
df.plot(
kind="kde",
subplots=True,
layout=(6, 3),
figsize=(17, 22),
sharex=False,
sharey=False,
)
df.plot(
kind="box",
vert=False, # makes horizontal plots instead of vertical
subplots=True,
layout=(6, 3),
figsize=(17, 22),
sharex=False,
sharey=False,
)
# # Preprocessing
df["species"].value_counts()
sp = {"setosa": 0, "versicolor": 1, "virginica": 2}
df["species"] = df["species"].map(sp)
xTrain, xTest, yTrain, yTest = train_test_split(
df.drop("species", axis=1), df["species"], test_size=0.33, random_state=14
)
# # Baseline Modelling
model = RandomForestClassifier()
model.fit(xTrain, yTrain)
pred = model.predict(xTest)
print(classification_report(yTest, pred))
# # Explainable AI
# creating an instance of the lime tabular explainer
lime_explainer = lime.lime_tabular.LimeTabularExplainer(
training_data=np.array(xTrain),
feature_names=xTrain.columns,
class_names=["0", "1", "2"],
mode="classification",
)
# ### Plots Using Lime
# A LIME tabular explainer plot is a visual representation that helps explain the predictions made by a machine learning model for tabular data using the LIME (Local Interpretable Model-agnostic Explanations) technique. It provides insights into how individual features contribute to the model's prediction for a specific instance.
# The plot typically consists of a horizontal bar chart where each bar represents a feature. The length or height of the bar indicates the magnitude of the feature's contribution to the prediction. Positive values indicate that the feature positively influenced the prediction, while negative values suggest a negative influence.
# The LIME tabular explainer plot allows users to understand which features are the most important in driving the model's decision for a particular data point. By visually examining the bar lengths, users can easily identify the significant factors that influenced the prediction and gain valuable insights into the model's decision-making process for that specific instance.
# obtaining the explanation for one test instance (iris has 3 classes and 4 features)
explanation = lime_explainer.explain_instance(
    data_row=xTest.iloc[38],
    predict_fn=model.predict_proba,
    top_labels=3,
    num_features=4,
)
# printing out the explanation
explanation.show_in_notebook()
explanation.as_pyplot_figure()
# ### Plots using SHAP
# build a SHAP explainer for the random forest and compute an Explanation object for the test set
shap_explaner = shap.Explainer(model)(xTest)
# calculating per-class SHAP values for the test set (used by the plotting calls below)
shap_value = shap.Explainer(model).shap_values(xTest)
# ### Summary Plot
# The summary plot integrates feature importance and feature effects into a unified visualization. Each data point on the summary plot represents a Shapley value associated with a specific feature and instance. The vertical position of each point corresponds to the feature, while the horizontal position represents the Shapley value. By combining these two dimensions, the summary plot offers a comprehensive view of both the relative importance of features and their corresponding effects, allowing users to understand the contribution of each feature to the overall model output.
# It provides feature importance
shap.summary_plot(shap_value, xTest)
# ### Waterfall Plot
# The waterfall plot is a visual representation specifically created to illustrate the influence of SHAP values (evidence) for each feature on shifting the model output from our initial expectation based on the background data distribution to the ultimate model prediction considering the evidence from all features. In other words, the waterfall plot visually demonstrates the cumulative impact of each feature's SHAP values in driving the model's output, providing insights into how the evidence from different features contributes to the final prediction.
# Waterfall plot for a single observation (index 38), explained for class 0
idx = 38
exp = shap.Explanation(
    shap_explaner.values[idx, :, 0],
    shap_explaner.base_values[idx][0],
    data=xTest.values[idx],
    feature_names=xTest.columns.values,
)
shap.plots.waterfall(exp)
# ### Decision Plot
# Decision plots are useful for visualizing SHAP interaction values, which represent the first-order interactions derived from tree-based models. While SHAP dependence plots provide a great way to visualize individual interactions, decision plots offer a way to showcase the combined impact of main effects and interactions for one or more observations. In essence, decision plots provide a comprehensive view of how various factors interact and contribute to the overall outcome, allowing for a deeper understanding of the relationships within the data.
shap.decision_plot(
shap.TreeExplainer(model).expected_value[0], shap_value[0], xTrain.columns
)
# ### Force Plot
# The SHAP force plot provides a clear depiction of the specific features that exerted the greatest influence on the model's prediction for a particular observation. It highlights the relative importance of each feature, allowing users to identify the key factors that contributed significantly to the model's decision-making process for that specific instance. By visually presenting the influential features, the SHAP force plot helps users gain a precise understanding of the factors that drove the model's prediction for a single observation.
shap.plots.force(
    shap.TreeExplainer(model).expected_value[0],
    shap_explaner.values[:, :, 0],
    xTest.values,  # all rows, so the displayed feature values match the SHAP values above
)
# Force plot for a single observation (index 42)
shap.plots.force(
    shap.TreeExplainer(model).expected_value[0],
    shap_explaner.values[42, :, 0],
    xTest.values[42],  # feature values of the same observation whose SHAP values are plotted
)
# ### Dependence Plot
# SHAP dependence plots serve as an alternative to partial dependence plots (PDP) and accumulated local effects (ALE) plots. While PDP and ALE plots depict average effects, SHAP dependence plots go a step further by showcasing both the average effects and the variance on the y-axis. This is particularly useful when examining interactions between features. In such cases, the SHAP dependence plot will exhibit greater dispersion along the y-axis, providing a more comprehensive understanding of the varying effects and their ranges. Ultimately, the SHAP dependence plot offers a valuable visualization tool that captures both the average and varying impacts of features on the model's predictions.
shap.dependence_plot("petal_width", shap_value[0], xTest)
|
/fsx/loubna/kaggle_data/kaggle-code-data/data/0129/239/129239801.ipynb
| null | null |
[{"Id": 129239801, "ScriptId": 38424211, "ParentScriptVersionId": NaN, "ScriptLanguageId": 9, "AuthorUserId": 10298735, "CreationDate": "05/12/2023 05:27:06", "VersionNumber": 1.0, "Title": "notebook2fec57b7bb", "EvaluationDate": "05/12/2023", "IsChange": true, "TotalLines": 203.0, "LinesInsertedFromPrevious": 203.0, "LinesChangedFromPrevious": 0.0, "LinesUnchangedFromPrevious": 0.0, "LinesInsertedFromFork": NaN, "LinesDeletedFromFork": NaN, "LinesChangedFromFork": NaN, "LinesUnchangedFromFork": NaN, "TotalVotes": 3}]
| null | null | null | null |
| false | 0 | 2,977 | 3 | 2,977 | 2,977 |
||
129239252
|
# # Module Two Discussion: The Central Limit Theorem
# This notebook contains the step-by-step directions for your Module Two discussion. It is very important to run through the steps in order. Some steps depend on the outputs of earlier steps. Once you have completed the steps in this notebook, be sure to answer the questions about this activity in the Discussion for this module.
# Reminder: If you have not already reviewed the discussion prompt, please do so before beginning this activity. That will give you an idea of the questions you will need to answer with the outputs of this script.
# ## Initial post (due Thursday)
# _____________________________________________________________________________________________________________________________________________________
# ### Step 1: Generating population data
# This block of Python code will generate unique TPCP population data of size 500 observations. You will use this data set in this week's discussion. The numpy module in Python can be used to create datasets with a skewed distribution by randomly generating data from a gamma distribution. You do not need to know what a gamma distribution is or how a dataset is drawn from it. The dataset will be saved in a Python dataframe that you will use in later calculations.
# Click the block of code below and hit the **Run** button above.
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
import scipy.stats as st
# use gamma distribution to randomly generate 500 observations.
shape, scale = 1.95, 2.5
tpcp = 100 * np.random.gamma(shape, scale, 500)
# pandas library can be used to convert the array into a dataframe of rounded figures with the column name TPCP.
tpcp_df = pd.DataFrame(tpcp, columns=["TPCP"])
tpcp_df = tpcp_df.round(0)
# print the dataframe to see the first 5 and last 5 observations (note that the index of dataframe starts at 0).
print("TPCP data frame\n")
print(tpcp_df)
#
# ### Step 2: Creating a histogram plot of population data
# You will use the matplotlib module in Python to create a histogram plot of the population data from Step 1. This plot allows you to visualize the population data distribution and confirm that it is skewed. You will use 50 bins in the histogram to display the distribution.
# Click the block of code below and hit the **Run** button above.
# NOTE: If the graph is not created, click the code section and hit the **Run** button again.
# create a figure for the plot.
fig, ax = plt.subplots()
# create a histogram plot with 50 bins of TPCP population data.
plt.hist(tpcp_df["TPCP"], bins=50)
# set a title for the plot, x-axis, and y-axis.
plt.title("TPCP population distribution", fontsize=20)
ax.set_xlabel("TPCP")
ax.set_ylabel("Frequency")
# show the plot.
plt.show()
#
# ### Step 3: Calculating the population mean
# This step will calculate the mean for the population data.
# Click the block of code below and hit the **Run** button above.
# You can use the "mean" method of a pandas dataframe.
pop_mean = tpcp_df["TPCP"].mean()
print("Population mean =", round(pop_mean, 2))
#
# ### Step 4: Drawing one random sample from the population data and calculating the sample mean
# This block of code randomly selects one sample (with replacement) of 50 data points from the population data. Then it calculates the sample mean. You will use the "sample" method of the dataframe to select the sample.
# Click the block of code below and hit the **Run** button above.
# use sample method of the dataframe to select a random sample, with replacement, of size 50.
tpcp_sample_df = tpcp_df.sample(50, replace=True)
# print the sample mean.
sample_mean = tpcp_sample_df["TPCP"].mean()
print("Sample mean =", round(sample_mean, 2))
#
# ### Step 5: Repeatedly drawing samples and saving the sample mean for each sample
# You will now essentially repeat Step 4 one thousand times to select 1,000 random samples, with replacement, of size 50 from the population data. The code below contains a loop so that you can do this selection with just one click! You will save the sample mean for each sample in a Python dataframe.
# Click the block of code below and hit the **Run** button above.
# run a for loop to repeat the process Step 4 one thousand times to select one thousand samples.
# save the mean of each sample that was drawn in a Python list called means_list.
means_list = []
for i in range(1000):
tpcp_sample_df = tpcp_df.sample(50, replace=True)
sample_mean = tpcp_sample_df["TPCP"].mean()
means_list.append(sample_mean)
# create a Python dataframe of means.
means_df = pd.DataFrame(means_list, columns=["means"])
print("Dataframe of 1000 sample means\n")
print(means_df)
#
# ### Step 6: Creating a histogram plot of the sample means from Step 5
# Now you will plot the data distribution of the 1,000 means from Step 5. View the plot to confirm that it approximates a Normal distribution (bell-shaped curve). Note that the original data distribution in Step 2 was skewed. However, the distribution of sample means, calculated by repeatedly drawing large samples, is approximately Normally distributed.
# Click the block of code below and hit the **Run** button above.
# NOTE: If the graph is not created, click the code section and hit the **Run** button again.
# create a figure for the plot.
fig, ax = plt.subplots()
# create a histogram plot with 50 bins of 1,000 means.
plt.hist(means_df["means"], bins=50)
# set a title for the plot, x-axis and y-axis.
plt.title("Distribution of 1000 sample means", fontsize=20) # title
ax.set_xlabel("Means")
ax.set_ylabel("Frequency")
# show the plot.
plt.show()
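# As an optional visual check of the claim above (an illustrative addition, not part of the
# assignment steps), overlay a Normal curve, using the mean and standard deviation of the
# 1,000 sample means, on a density-scaled histogram of those means.
fig, ax = plt.subplots()
plt.hist(means_df["means"], bins=50, density=True, label="sample means")
x_grid = np.linspace(means_df["means"].min(), means_df["means"].max(), 200)
plt.plot(
    x_grid,
    st.norm.pdf(x_grid, means_df["means"].mean(), means_df["means"].std()),
    label="Normal fit",
)
plt.title("Sample means with fitted Normal curve", fontsize=14)
ax.set_xlabel("Means")
ax.set_ylabel("Density")
plt.legend()
plt.show()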
#
# ### Step 7: Mean and the standard deviation of the sample mean distribution
# Now you will calculate the "grand" mean ("grand" because it is the mean of the 1,000 means) and the standard deviation of 1,000 sample means. Note that the distribution of sample means was approximately Normal (bell-shaped) in Step 6. Therefore, calculating the mean and the standard deviation of this distribution will allow us to calculate probabilities and critical values.
# Click the block of code below and hit the **Run** button above.
# calculate mean of the 1,000 sample means (this is called the grand mean or mean of the means).
mean1000 = means_df["means"].mean()
print("Grand Mean (Mean of 1000 sample means) =", round(mean1000, 2))
# calculate standard deviation of the 1,000 sample means.
std1000 = means_df["means"].std()
print("Std Deviation of 1000 sample means =", round(std1000, 2))
# print the probability that a specific mean is 450 or less for a Normal distribution with mean and standard deviation of 1,000 sample means.
prob_450_less_or_equal = st.norm.cdf(450, mean1000, std1000)
print(
"Probability that a specific mean is 450 or less =",
round(prob_450_less_or_equal, 4),
)
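# The text above notes that this distribution also supports critical values. A minimal sketch
# (an illustrative addition, not part of the assignment steps): the 95th percentile of the
# fitted Normal via st.norm.ppf, plus a comparison of the observed standard deviation of the
# sample means with the value the Central Limit Theorem predicts, i.e. the population
# standard deviation divided by the square root of the sample size (50).
crit_95 = st.norm.ppf(0.95, mean1000, std1000)
print("95th percentile (critical value) of the sample-mean distribution =", round(crit_95, 2))
theoretical_se = tpcp_df["TPCP"].std() / np.sqrt(50)
print("Theoretical standard error (pop std / sqrt(50)) =", round(theoretical_se, 2))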
|
/fsx/loubna/kaggle_data/kaggle-code-data/data/0129/239/129239252.ipynb
| null | null |
[{"Id": 129239252, "ScriptId": 38424030, "ParentScriptVersionId": NaN, "ScriptLanguageId": 9, "AuthorUserId": 9743747, "CreationDate": "05/12/2023 05:20:33", "VersionNumber": 1.0, "Title": "Module Two Discussion: The Central Limit Theorem", "EvaluationDate": "05/12/2023", "IsChange": true, "TotalLines": 149.0, "LinesInsertedFromPrevious": 149.0, "LinesChangedFromPrevious": 0.0, "LinesUnchangedFromPrevious": 0.0, "LinesInsertedFromFork": NaN, "LinesDeletedFromFork": NaN, "LinesChangedFromFork": NaN, "LinesUnchangedFromFork": NaN, "TotalVotes": 0}]
| null | null | null | null |
# # Module Two Discussion: The Central Limit Theorem
# This notebook contains the step-by-step directions for your Module Two discussion. It is very important to run through the steps in order. Some steps depend on the outputs of earlier steps. Once you have completed the steps in this notebook, be sure to answer the questions about this activity in the Discussion for this module.
# Reminder: If you have not already reviewed the discussion prompt, please do so before beginning this activity. That will give you an idea of the questions you will need to answer with the outputs of this script.
# ## Initial post (due Thursday)
# _____________________________________________________________________________________________________________________________________________________
# ### Step 1: Generating population data
| false | 0 | 1,843 | 0 | 1,843 | 1,843 |
||
129334394
|
<jupyter_start><jupyter_text>Keratoconus detection
Train/Validation set (each has 7 corneal maps)
1. 150 NOR
2. 150 KCN
3. 123 Suspect
Test set (each has 7 corneal maps)
1. 50 NOR
2. 50 KCN
3. 50 Suspect
Datasets and pre-processing
The protocol of the study (0094/2020) was approved by the Institutional Review Board of Federal University of São Paulo - UNIFESP/EPM as coordinator center and Hospital de Olhos-CRO, Guarulhos, as side center. Corresponding data use agreements were signed among parties to use the data. The study was conducted in accordance with ethical standards in the declaration of Helsinki and its later amendments. If required, respective informed consent was obtained from participants and the data was de-identified in Brazil before any further processing.
Three corneal specialists (including RMH) conducted vision tests and ophthalmic examinations under standard conditions and collected corneal images using Scheimpflug imaging systems (Pentacam, Oculus Optikgeräte GmbH). Three corneal-trained specialists performed the eye classification, and disagreements were resolved by majority vote (two versus one). The clinicians were instructed to grade each eye as normal, suspected KCN, or KCN. Eyes were labeled as KCN suspects based on standard criteria from earlier studies. More specifically, eyes were labeled as suspected KCN if corneal topography included atypical, localized steepening or an asymmetrical bowtie pattern, if the keratometric curvature was greater than 47.00 D, if the oblique cylinder was more than 1.50 D, or if central corneal thickness was below 500 microns. Each eye of the patient was evaluated independently. Furthermore, raw data from the elevation maps were examined, including Belin-Ambrosio Ectasia Display (BAD-D) indices and Progression Thickness Increase (PTI), represented by the corneal thickness spatial profile (CTSP) and the percentage of PTI. The Belin ABCD progression display was also examined. Eyes were labeled as suspected KCN if there was abnormal front elevation, high PTI, or abnormal BAD-D.
The development (training) dataset included corneal images collected using different Pentacam instruments with different settings (different color scale steps of the maps compared to the previous subset). All color scales were based on decimal scale grading, using microns for the corneal thickness and elevation maps and diopters for the axial/sagittal curvature maps. An additional independent dataset, collected from a different clinic in Brazil, was also used to validate the proposed hybrid DL approach.
A total of 204 eyes of 104 patients were normal (the group was represented as NOR), 215 eyes of 113 patients had KCN, and 123 eyes of 63 patients were suspected KCN (SUSPECT). The mean age (± SD) of the subjects in the normal, KCN, and suspected KCN were 33.4 (±10.1), 29.0 (±9.3), and 28.6 (±9.4) years, respectively. Images from 56 normal eyes and 58 eyes with KCN were collected from a Pentacam instrument with settings different from others.
The independent validation subset included 150 eyes of 85 patients collected from de Olhos-CRO private hospital (Guarulhos, SP, Brazil). This dataset included 50 normal eyes from 29 subjects, 50 KCN eyes from 31 patients, and 50 suspect KCN eyes from 25 patients. The mean age (± SD) of the subjects in the normal, KCN, and suspected KCN were 29.5 (±4.7), 26.3 (±6.8), and 29.1 (±5.3) years, respectively.
Kaggle dataset identifier: keratoconus-detection
<jupyter_script># - As each class of input has mutiple images of different views, Here I am training a model for each view with pre-trained resnet18 model
# - The training process is done using K-fold cross validation
# - The final output class value has made using majority voting method meaning the class which will get most vote from differnet view we get the priority.
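# A minimal, illustrative sketch of the majority-voting idea described above, using a
# hypothetical list of per-view predictions for a single case. collections.Counter is used
# only for this toy example; the actual per-view training and vote counting follow later
# in this notebook.
from collections import Counter

view_predictions = [0, 1, 0, 0, 2]  # hypothetical class votes from five views
vote_counts = Counter(view_predictions)
final_class = vote_counts.most_common(1)[0][0]  # class with the most votes wins
print("Votes:", dict(vote_counts), "-> final class:", final_class)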
import os
import re
import cv2
import json
import time
import shutil
import random
import pandas as pd
import numpy as np
from PIL import Image
from tqdm import tqdm
from sklearn import metrics
from datetime import datetime
import matplotlib.pyplot as plt
import matplotlib.image as img
from sklearn.model_selection import StratifiedKFold
# torch libraries
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, Dataset
from torchvision import transforms
from torch.optim import Adam
from torchvision import models
from torch.autograd import Variable
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
# version number used to track the data from different experiments
version = "v1"
for col_name in ["CT_A", "EC_A", "EC_P", "Elv_A", "Elv_P", "Sag_A", "Sag_P"]:
# to hold experiment data
os.makedirs(
f"/kaggle/working/experiments/run_{version}/acc_figure/{col_name}",
exist_ok=True,
)
os.makedirs(
f"/kaggle/working/experiments/run_{version}/loss_figure/{col_name}",
exist_ok=True,
)
os.makedirs(
f"/kaggle/working/experiments/run_{version}/saved_model/{col_name}",
exist_ok=True,
)
for type_ in ["kcn", "nor", "susp"]:
os.makedirs(f"/kaggle/working/hist/{type_}", exist_ok=True)
def set_seed(seed=42):
np.random.seed(seed)
random.seed(seed)
torch.manual_seed(seed)
torch.cuda.manual_seed(seed)
# When running on the CuDNN backend, two further options must be set
torch.backends.cudnn.deterministic = True
torch.backends.cudnn.benchmark = False
# Set a fixed value for the hash seed
os.environ["PYTHONHASHSEED"] = str(seed)
print(f"Random seed set as {seed}")
set_seed()
cls_dict = {"Keratoconus": 0, "Normal": 1, "Suspect": 2}
class_list = list(cls_dict.keys())
# ## Pre-processing for easier data loading
def metadata(parnt_dir):
data_dict = {
"CT_A": [],
"EC_A": [],
"EC_P": [],
"Elv_A": [],
"Elv_P": [],
"Sag_A": [],
"Sag_P": [],
"label": [],
}
columns = list(data_dict.keys())
for cls_ in class_list:
cases_in_pth = os.listdir(os.path.join(parnt_dir, cls_))
for case in cases_in_pth:
case_list = os.listdir(os.path.join(parnt_dir, cls_, case))
for col in columns[:-1]:
r = re.compile(f".*{col}")
filename = list(filter(r.match, case_list))[0]
data_dict[col].append(os.path.join(parnt_dir, cls_, case, filename))
data_dict["label"].append(cls_dict[cls_])
df = pd.DataFrame.from_dict(data_dict)
return df
df_train = metadata(
"/kaggle/input/keratoconus-detection/Train_Validation sets/Train_Validation sets"
)
df_test = metadata(
"/kaggle/input/keratoconus-detection/Independent Test Set/Independent Test Set"
)
df_train.head()
df_test.head()
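# Optional sanity check (an illustrative addition): confirm how many cases of each class
# ended up in the training and test dataframes built above. Label 0 is Keratoconus,
# 1 is Normal, and 2 is Suspect, following cls_dict.
print("Train label counts:\n", df_train["label"].value_counts(), sep="")
print("Test label counts:\n", df_test["label"].value_counts(), sep="")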
kcn_paths = list(df_train.iloc[1])[:-1]
fig = plt.figure(figsize=(15, 25))
print("Sample images of Keratoconus class...")
for i in range(7):
ax = fig.add_subplot(1, 7, i + 1)
pil_img = Image.open(kcn_paths[i])
ax.imshow(pil_img)
nor_paths = list(df_train.iloc[180])[:-1]
fig = plt.figure(figsize=(15, 25))
print("Sample images of Normal class...")
for i in range(7):
ax = fig.add_subplot(1, 7, i + 1)
pil_img = Image.open(nor_paths[i])
ax.imshow(pil_img)
susp_paths = list(df_train.iloc[310])[:-1]
fig = plt.figure(figsize=(15, 25))
print("Sample images of Suspected class...")
for i in range(7):
ax = fig.add_subplot(1, 7, i + 1)
pil_img = Image.open(susp_paths[i])
ax.imshow(pil_img)
# ## Histogram plotting
def plot_hist_and_save(paths, type_):
for pth in paths:
# Load the image into an array: image
image = plt.imread(pth)
file_name = pth.split("/")[-1]
# Extract 2-D arrays of the RGB channels: red, green, blue
red, green, blue = image[:, :, 0], image[:, :, 1], image[:, :, 2]
# Flatten the 2-D arrays of the RGB channels into 1-D
red_pixels = red.flatten()
green_pixels = green.flatten()
blue_pixels = blue.flatten()
        # Overlay histograms of the pixel intensities of each color channel
plt.figure(figsize=(4, 4))
plt.hist(red_pixels, bins=256, density=False, color="red", alpha=0.5)
plt.hist(green_pixels, bins=256, density=False, color="green", alpha=0.4)
plt.hist(blue_pixels, bins=256, density=False, color="blue", alpha=0.3)
# set labels and ticks
plt.xticks(ticks=np.linspace(0, 1, 17), labels=range(0, 257, 16))
plt.title(f"{file_name}")
plt.ylabel("Counts")
plt.xlabel("Intensity")
plt.savefig(f"hist/{type_}/{file_name}")
plt.close()
plot_hist_and_save(kcn_paths, "kcn")
plot_hist_and_save(nor_paths, "nor")
plot_hist_and_save(susp_paths, "susp")
fig = plt.figure(figsize=(15, 25))
kcn_his = ["hist/kcn/" + str(i) for i in os.listdir("hist/kcn/")]
kcn_his.sort()
nor_his = ["hist/nor/" + i for i in os.listdir("hist/nor/")]
nor_his.sort()
susp_his = ["hist/susp/" + i for i in os.listdir("hist/susp/")]
susp_his.sort()
print("Plotting histogram of different classes...")
p = 0
for i in range(21):
if (i + 1) % 3 == 1:
ax = fig.add_subplot(7, 3, i + 1)
pil_img = Image.open(kcn_his[p])
ax.imshow(pil_img)
elif (i + 1) % 3 == 2:
ax = fig.add_subplot(7, 3, i + 1)
pil_img = Image.open(nor_his[p])
ax.imshow(pil_img)
elif (i + 1) % 3 == 0:
ax = fig.add_subplot(7, 3, i + 1)
pil_img = Image.open(susp_his[p])
ax.imshow(pil_img)
if (i + 1) % 3 == 0:
p = p + 1
# ## Pytorch data loader
class PrepareDataset(Dataset):
def __init__(self, df, col, transform=None):
self.df = df
self.col = col
self.transform = transform
def __len__(self):
return len(self.df)
def __getitem__(self, indx):
pth = self.df.loc[indx, self.col]
        # convert to RGB so the 3-channel Normalize and ResNet input stay consistent
        # (assumes any alpha channel or grayscale map should simply be collapsed to RGB)
        image = Image.open(pth).convert("RGB")
label = torch.tensor(self.df.label.iloc[indx])
if self.transform:
image = self.transform(image)
return image, label
def train_loader(df_train, col):
train_transform = transforms.Compose(
[
transforms.RandomHorizontalFlip(),
transforms.ToTensor(),
transforms.Normalize(mean=[0.5, 0.5, 0.5], std=[0.5, 0.5, 0.5]),
]
)
# train dataloader
training_loader = DataLoader(
PrepareDataset(df_train, col, transform=train_transform),
batch_size=BATCH_SIZE,
shuffle=True,
)
return training_loader
def val_test_loader(df_test, col):
test_transform = transforms.Compose(
[
transforms.ToTensor(),
transforms.Normalize(mean=[0.5, 0.5, 0.5], std=[0.5, 0.5, 0.5]),
]
)
# validation/test dataloader
test_loader = DataLoader(
PrepareDataset(df_test, col, transform=test_transform), batch_size=BATCH_SIZE
)
return test_loader
# define training hyperparameters and number of training epoch
INIT_LR = 0.0001
BATCH_SIZE = 16
EPOCHS = 2
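# Optional sanity check (an illustrative addition, assuming all corneal map images are RGB
# and share a common size so the default collation works): pull one batch from a training
# loader for a single view to confirm the tensor shapes the model will receive.
_check_loader = train_loader(df_train, "CT_A")
_images, _targets = next(iter(_check_loader))
print("Batch image tensor shape:", _images.shape)  # expected: [BATCH_SIZE, 3, H, W]
print("Batch label tensor shape:", _targets.shape)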
def get_model():
model = models.resnet18(weights="IMAGENET1K_V1")
for param in model.parameters():
param.requires_grad = False
fc_features = model.fc.in_features
model.fc = nn.Linear(fc_features, 3)
return model
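# A quick, optional check that the transfer-learning setup above behaves as intended:
# the backbone is frozen, so only the parameters of the newly added fully connected head
# should require gradients.
_m = get_model()
trainable = sum(p.numel() for p in _m.parameters() if p.requires_grad)
total = sum(p.numel() for p in _m.parameters())
print(f"Trainable parameters: {trainable} of {total}")
del _m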
# ## Function for early stopping
# define the early stopping function
class EarlyStopper:
def __init__(self, patience=1, min_delta=0):
self.patience = patience
self.min_delta = min_delta
self.counter = 0
self.min_validation_loss = np.inf
def early_stop(self, validation_loss):
if validation_loss < self.min_validation_loss:
self.min_validation_loss = validation_loss
self.counter = 0
elif validation_loss > (self.min_validation_loss + self.min_delta):
self.counter += 1
if self.counter >= self.patience:
return True
return False
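# A small, illustrative example of how the EarlyStopper class above behaves on a made-up
# sequence of validation losses: with patience=2, the second consecutive non-improving
# epoch triggers a stop.
_stopper = EarlyStopper(patience=2)
for _epoch, _val_loss in enumerate([0.9, 0.7, 0.8, 0.85, 0.9]):
    if _stopper.early_stop(_val_loss):
        print(f"Toy example: stop at epoch {_epoch}")
        break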
def plot_and_save_accuracy(H, fold, col_name):
# plot the training and validation accuracy
plt.style.use("ggplot")
plt.figure()
plt.plot(H["train_acc"], label="train_acc")
plt.plot(H["val_acc"], label="val_acc")
plt.title(f"Accuracy on Dataset at fold {fold}")
plt.xlabel("Epoch #")
plt.ylabel("Accuracy")
plt.legend(loc="lower left")
plt.savefig(
f"/kaggle/working/experiments/run_{version}/acc_figure/{col_name}/accuracy at fold {fold}.png"
)
plt.show()
def plot_and_save_loss(H, fold, col_name):
# plot the training and validation loss
plt.style.use("ggplot")
plt.figure()
plt.plot(H["train_loss"], label="train_loss")
plt.plot(H["val_loss"], label="val_loss")
plt.title(f"Loss on Dataset at fold {fold}")
plt.xlabel("Epoch #")
plt.ylabel("Loss")
plt.legend(loc="lower left")
plt.savefig(
f"/kaggle/working/experiments/run_{version}/loss_figure/{col_name}/loss at fold {fold}.png"
)
plt.show()
# ## Training loop
def manage_training(fold, col_name, df_train_fold, df_val_fold):
model = get_model()
model = model.to(device)
# initialize our optimizer with l2 regularization and loss function
opt = Adam(model.parameters(), lr=INIT_LR, weight_decay=0.001)
lossFn = nn.CrossEntropyLoss()
# measure how long training is going to take
print(f"[INFO] Training the network with fold {fold} with column {col_name}")
print("[INFO] Start Time =", datetime.now().strftime("%H:%M:%S"))
best_accuracy = 0.0
early_stopper = EarlyStopper(patience=50)
# initialize a dictionary to store training history
H = {"train_loss": [], "train_acc": [], "val_loss": [], "val_acc": []}
tr_lo = train_loader(df_train_fold, col_name)
te_lo = val_test_loader(df_val_fold, col_name)
# getting number of steps
trainSteps = len(tr_lo.dataset) // BATCH_SIZE
valSteps = len(te_lo.dataset) // BATCH_SIZE
for e in tqdm(range(0, EPOCHS)):
# set the model in training mode
model.train()
# initialize the total training and validation loss
totalTrainLoss = 0
totalValLoss = 0
# initialize the number of correct predictions in the training
# and validation step
trainCorrect = 0
valCorrect = 0
for tr_data in tr_lo:
_batch, label = tr_data
_batch = Variable(_batch.to(device))
label = Variable(label.to(device))
pred = model(_batch)
# print(label,pred)
loss = lossFn(pred, label)
# zero out the gradients, perform the backpropagation step,
# and update the weights
opt.zero_grad()
loss.backward()
opt.step()
# add the loss to the total training loss so far and
# calculate the number of correct predictions
totalTrainLoss += loss
trainCorrect += (pred.argmax(1) == label).type(torch.float).sum().item()
with torch.no_grad():
# set the model in evaluation mode
model.eval()
# loop over the validation set
for te_data in te_lo:
_batch, label = te_data
_batch = Variable(_batch.to(device))
label = Variable(label.to(device))
pred = model(_batch)
totalValLoss += lossFn(pred, label)
# calculate the number of correct predictions
valCorrect += (pred.argmax(1) == label).type(torch.float).sum().item()
# calculate the average training and validation loss
avgTrainLoss = totalTrainLoss / trainSteps
avgValLoss = totalValLoss / valSteps
# calculate the training and validation accuracy
trainCorrect = trainCorrect / len(tr_lo.dataset)
valCorrect = valCorrect / len(te_lo.dataset)
# update our training history
H["train_loss"].append(avgTrainLoss.cpu().detach().numpy())
H["train_acc"].append(trainCorrect)
H["val_loss"].append(avgValLoss.cpu().detach().numpy())
H["val_acc"].append(valCorrect)
# print the model training and validation information
print(
"Train loss: {:.6f}, Train accuracy: {:.4f}".format(
avgTrainLoss, trainCorrect
)
)
print("Val loss: {:.6f}, Val accuracy: {:.4f}\n".format(avgValLoss, valCorrect))
# saving best model
if valCorrect > best_accuracy:
torch.save(
model.state_dict(),
f"/kaggle/working/experiments/run_{version}/saved_model/{col_name}/best_model_fold_{fold}.pth",
)
best_accuracy = valCorrect
# early stopping
if early_stopper.early_stop(avgValLoss):
print("Early stopping")
break
plot_and_save_accuracy(H, fold, col_name)
plot_and_save_loss(H, fold, col_name)
return best_accuracy
skf = StratifiedKFold(n_splits=5)
target = df_train.loc[:, "label"]
# df_train_fold = {}
fold_info = {}
for col_name in ["CT_A", "EC_A", "EC_P", "Elv_A", "Elv_P", "Sag_A", "Sag_P"]:
fold_info[col_name] = -1
fold = 0
for train_index, test_index in skf.split(df_train, target):
df_train_fold = df_train.loc[train_index, :].reset_index()
df_val_fold = df_train.loc[test_index, :].reset_index()
best_acc = manage_training(fold + 1, col_name, df_train_fold, df_val_fold)
fold_info[f"{col_name}-fold_{fold + 1}"] = best_acc
if best_acc > fold_info[col_name]:
fold_info[col_name] = best_acc
fold = fold + 1
with open(f"/kaggle/working/experiments/run_{version}/fold_score_info.json", "w") as f:
json.dump(fold_info, f)
# ## Inference
# load each fold's training information
with open(f"/kaggle/working/experiments/run_{version}/fold_score_info.json", "r") as f:
data = json.load(f)
data
# get the mean accuracy
values = [
data[col_name]
for col_name in ["CT_A", "EC_A", "EC_P", "Elv_A", "Elv_P", "Sag_A", "Sag_P"]
]
print(sum(values) / len(values))
result_keys = list(data.keys())
# helper function to find which fold achieved the best accuracy for a given view
def find_score_on_fold(data, col_name):
best_score = data[col_name]
for i in range(1, 6):
if data[f"{col_name}-fold_{i}"] == best_score:
return i
labels = df_test["label"].tolist()
def inference_by_column(col_name, fold):
dict_ = {}
te_lo = val_test_loader(df_test, col_name)
model = get_model()
model = model.to(device)
model.load_state_dict(
torch.load(
f"/kaggle/working/experiments/run_{version}/saved_model/{col_name}/best_model_fold_{fold}.pth"
)
)
model.eval()
predictions = []
with torch.no_grad():
for te_data in te_lo:
_batch, label = te_data
_batch = Variable(_batch.to(device))
label = Variable(label.to(device))
pred = model(_batch)
max_values = pred.max(1)
list_pred = pred.argmax(1).tolist()
predictions += list_pred
return predictions
# ## Vote counting for finalizing output
CT_A_pred = inference_by_column("CT_A", find_score_on_fold(data, "CT_A"))
EC_A_pred = inference_by_column("EC_A", find_score_on_fold(data, "EC_A"))
EC_P_pred = inference_by_column("EC_P", find_score_on_fold(data, "EC_P"))
Elv_A_pred = inference_by_column("Elv_A", find_score_on_fold(data, "Elv_A"))
Elv_P_pred = inference_by_column("Elv_P", find_score_on_fold(data, "Elv_P"))
Sag_A_pred = inference_by_column("Sag_A", find_score_on_fold(data, "Sag_A"))
Sag_P_pred = inference_by_column("Sag_P", find_score_on_fold(data, "Sag_P"))
predictions = []
for i in range(len(labels)):
counter = {"0": 0, "1": 0, "2": 0}
    # merge the per-view predictions and count the votes for each class
    # the final output is the class with the maximum vote count
    # if two counts are equal, the first class value is chosen
    # comment or uncomment individual view types below to exclude or include them in the vote
counter[str(CT_A_pred[i])] = counter[str(CT_A_pred[i])] + 1
counter[str(EC_A_pred[i])] = counter[str(EC_A_pred[i])] + 1
counter[str(EC_P_pred[i])] = counter[str(EC_P_pred[i])] + 1
counter[str(Elv_P_pred[i])] = counter[str(Elv_P_pred[i])] + 1
counter[str(Elv_A_pred[i])] = counter[str(Elv_A_pred[i])] + 1
# counter[str(Sag_A_pred[i])] = counter[str(Sag_A_pred[i])] + 1
# counter[str(Sag_P_pred[i])] = counter[str(Sag_P_pred[i])] + 1
max_ = max(counter, key=counter.get)
predictions.append(int(max_))
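# Optional, illustrative comparison (not part of the original pipeline): the accuracy of each
# individual view's predictions against the test labels, to see how much the majority vote
# above helps over single views. Only variables already defined in this notebook are used.
per_view = {
    "CT_A": CT_A_pred,
    "EC_A": EC_A_pred,
    "EC_P": EC_P_pred,
    "Elv_A": Elv_A_pred,
    "Elv_P": Elv_P_pred,
    "Sag_A": Sag_A_pred,
    "Sag_P": Sag_P_pred,
}
for view_name, view_pred in per_view.items():
    print(view_name, "accuracy:", round(metrics.accuracy_score(labels, view_pred), 4))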
# ## Final accuracy
confusion_matrix = metrics.confusion_matrix(labels, predictions)
cm_display = metrics.ConfusionMatrixDisplay(
confusion_matrix=confusion_matrix, display_labels=["KCN", "Normal", "Suspect"]
)
accuracy = metrics.accuracy_score(labels, predictions)
print("Accuracy:", accuracy * 100, "%")
precision = metrics.precision_score(labels, predictions, average=None)
print("Precision for class Keratoconus:", precision[0])
print("Precision for class Normal:", precision[1])
print("Precision for class Suspected Keratoconus:", precision[2])
recall = metrics.recall_score(labels, predictions, average=None)
print("Recall for class Keratoconus:", recall[0])
print("Recall for class Normal:", recall[1])
print("Recall for class Suspected Keratoconus:", recall[2])
f1_score = metrics.f1_score(labels, predictions, average=None)
print("F1-score for class Keratoconus:", f1_score[0])
print("F1-score for class Normal:", f1_score[1])
print("F1-score for class Suspected Keratoconus:", f1_score[2])
cm_display.plot()
plt.show()
# zip all the output for downloading
shutil.make_archive(f"/kaggle/working/experiment_{version}", "zip", "/kaggle/working/")
|
/fsx/loubna/kaggle_data/kaggle-code-data/data/0129/334/129334394.ipynb
|
keratoconus-detection
|
elmehdi12
|
[{"Id": 129334394, "ScriptId": 38452590, "ParentScriptVersionId": NaN, "ScriptLanguageId": 9, "AuthorUserId": 1962965, "CreationDate": "05/12/2023 22:11:57", "VersionNumber": 1.0, "Title": "Pytorch majority voting based classification", "EvaluationDate": "05/12/2023", "IsChange": true, "TotalLines": 537.0, "LinesInsertedFromPrevious": 537.0, "LinesChangedFromPrevious": 0.0, "LinesUnchangedFromPrevious": 0.0, "LinesInsertedFromFork": NaN, "LinesDeletedFromFork": NaN, "LinesChangedFromFork": NaN, "LinesUnchangedFromFork": NaN, "TotalVotes": 0}]
|
[{"Id": 185282397, "KernelVersionId": 129334394, "SourceDatasetVersionId": 4910184}]
|
[{"Id": 4910184, "DatasetId": 2847719, "DatasourceVersionId": 4977804, "CreatorUserId": 7286325, "LicenseName": "Other (specified in description)", "CreationDate": "01/28/2023 13:57:32", "VersionNumber": 1.0, "Title": "Keratoconus detection", "Slug": "keratoconus-detection", "Subtitle": "Keratoconus detection which each corneal have 7 maps", "Description": "Train Validation set ( each has 7 corneal maps)\n1.\t150 NOR\n2.\t150 KCN\n3.\t123 Suspect\n\nTest set ( each has 7 corneal maps)\n1.\t50 NOR\n2.\t50 KCN\n3.\t50 Suspect\n\nDatasets and pre-processing\nThe protocol of the study (0094/2020) was approved by the Institutional Review Board of Federal University of S\u00e3o Paulo - UNIFESP/EPM as coordinator center and Hospital de Olhos-CRO, Guarulhos, as side center. Corresponding data use agreements were signed among parties to use the data. The study was conducted in accordance with ethical standards in the declaration of Helsinki and its later amendments. If required, respective informed consent was obtained from participants and the data was de-identified in Brazil before any further processing. \nThree corneal specialists (including RMH) conducted vision tests and ophthalmic examinations under standard conditions and collected corneal images using Scheimpflug imaging systems (Pentacam, Oculus Optikgera\u00a8te GmbH). There were three corneal trained specialists who performed eye classification. We dealt with disagreements favoring two versus one vote. The clinicians were instructed to grade each eye as normal, suspected KCN, or KCN. Eyes were labeled as a KCN suspect based on standard criteria in earlier studies. More specifically, eyes were labeled as suspected KCN if corneal topography included atypical, localized steepening or an asymmetrical bowtie pattern. Eyes were labeled as suspected KCN if the keratometric curvature was greater than 47.00 D, oblique cylinder more than 1.50 D or central corneal thickness below 500 microns. Each eye of the patient was evaluated independently. Furthermore, raw data on the elevation maps, including Belin \u2013Ambrosio Ectasia Display (BAD-D) indices, Progression Thickness Increase (PTI) represented by corneal thickness spatial profile (CTSP) and percentage of PTI. The Belin ABCD progression display was also examined. Eyes were labeled as suspected KCN if there was abnormal front elevation, high PTI, or abnormal BAD-D. \nThe development (training) dataset included corneal images collected using different Pentacam instruments with different settings (different color scale steps of the maps compared to the previous subset). All color scales were based on decimal scale grading using microns for corneal thickness, and elevation maps and diopters for axial/sagittal curvature maps. Additional independent dataset, collected from a different clinic in Brazil, was also used to validate the proposed hybrid DL approach.\nA total of 204 eyes of 104 patients were normal (the group was represented as NOR), 215 eyes of 113 patients had KCN, and 123 eyes of 63 patients were suspected KCN (SUSPECT). The mean age (\u00b1 SD) of the subjects in the normal, KCN, and suspected KCN were 33.4 (\u00b110.1), 29.0 (\u00b19.3), and 28.6 (\u00b19.4) years, respectively. Images from 56 normal eyes and 58 eyes with KCN were collected from a Pentacam instrument with settings different from others.\nThe independent validation subset included 150 eyes of 85 patients collected from de Olhos-CRO private hospital (Guarulhos, SP, Brazil). 
This dataset included 50 normal eyes from 29 subjects, 50 KCN eyes from 31 patients, and 50 suspect KCN eyes from 25 patients. The mean age (\u00b1 SD) of the subjects in the normal, KCN, and suspected KCN were 29.5 (\u00b14.7), 26.3 (\u00b16.8), and 29.1 (\u00b15.3) years, respectively.", "VersionNotes": "Initial release", "TotalCompressedBytes": 0.0, "TotalUncompressedBytes": 0.0}]
|
[{"Id": 2847719, "CreatorUserId": 7286325, "OwnerUserId": 7286325.0, "OwnerOrganizationId": NaN, "CurrentDatasetVersionId": 4910184.0, "CurrentDatasourceVersionId": 4977804.0, "ForumId": 2883372, "Type": 2, "CreationDate": "01/28/2023 13:57:32", "LastActivityDate": "01/28/2023", "TotalViews": 2101, "TotalDownloads": 221, "TotalVotes": 7, "TotalKernels": 5}]
|
[{"Id": 7286325, "UserName": "elmehdi12", "DisplayName": "ELMEHDI HAMMOUCH", "RegisterDate": "04/27/2021", "PerformanceTier": 0}]
|
| false | 0 | 5,615 | 0 | 6,593 | 5,615 |
||
129334351
|
<jupyter_start><jupyter_text>Credit Card Approval Prediction
# A Credit Card Dataset for Machine Learning!
**Don't ask me where this data comes from, the answer is I don't know!**
### Context
Credit score cards are a common risk control method in the financial industry. They use personal information and data submitted by credit card applicants to predict the probability of future defaults and credit card borrowings, so the bank can decide whether to issue a credit card to the applicant. Credit scores can objectively quantify the magnitude of risk.
Generally speaking, credit score cards are based on historical data, so once large economic fluctuations occur, past models may lose their original predictive power. The logistic model is a common method for credit scoring because it is suitable for binary classification tasks and can calculate a coefficient for each feature. To make the result easy to understand and operate, the score card multiplies each logistic regression coefficient by a certain value (such as 100) and rounds it.
At present, with the development of machine learning algorithms, more predictive methods such as Boosting, Random Forest, and Support Vector Machines have been introduced into credit card scoring. However, these methods often lack transparency, so it may be difficult to provide customers and regulators with a reason for rejection or acceptance.
### Task
Build a machine learning model to predict whether an applicant is a 'good' or 'bad' client. Unlike other tasks, the definition of 'good' or 'bad' is not given, so you should use a technique such as [vintage analysis](https://www.kaggle.com/rikdifos/eda-vintage-analysis) to construct your label. Also, the imbalanced-data problem is a big issue in this task.
### Content & Explanation
There are two tables, which can be merged by `ID`:
**application_record.csv**

| Feature name | Explanation | Remarks |
|--------------|-------------|---------|
| `ID` | Client number | |
| `CODE_GENDER` | Gender | |
| `FLAG_OWN_CAR` | Is there a car | |
| `FLAG_OWN_REALTY` | Is there a property | |
| `CNT_CHILDREN` | Number of children | |
| `AMT_INCOME_TOTAL` | Annual income | |
| `NAME_INCOME_TYPE` | Income category | |
| `NAME_EDUCATION_TYPE` | Education level | |
| `NAME_FAMILY_STATUS` | Marital status | |
| `NAME_HOUSING_TYPE` | Way of living | |
| `DAYS_BIRTH` | Birthday | Count backwards from the current day (0); -1 means yesterday |
| `DAYS_EMPLOYED` | Start date of employment | Count backwards from the current day (0); a positive value means the person is currently unemployed |
| `FLAG_MOBIL` | Is there a mobile phone | |
| `FLAG_WORK_PHONE` | Is there a work phone | |
| `FLAG_PHONE` | Is there a phone | |
| `FLAG_EMAIL` | Is there an email | |
| `OCCUPATION_TYPE` | Occupation | |
| `CNT_FAM_MEMBERS` | Family size | |
-------------------
**credit_record.csv**

| Feature name | Explanation | Remarks |
|--------------|-------------|---------|
| `ID` | Client number | |
| `MONTHS_BALANCE` | Record month | The month of the extracted data is the starting point, counting backwards: 0 is the current month, -1 is the previous month, and so on |
| `STATUS` | Status | 0: 1-29 days past due; 1: 30-59 days past due; 2: 60-89 days overdue; 3: 90-119 days overdue; 4: 120-149 days overdue; 5: overdue or bad debts, write-offs for more than 150 days; C: paid off that month; X: no loan for the month |
Related data : [Credit Card Fraud Detection](https://www.kaggle.com/mlg-ulb/creditcardfraud)
Related competition: [Home Credit Default Risk](https://www.kaggle.com/c/home-credit-default-risk)
Kaggle dataset identifier: credit-card-approval-prediction
<jupyter_script># # Importing
import pandas as pd
import numpy as np
import warnings
warnings.filterwarnings("ignore")
import matplotlib.pyplot as plt
import seaborn as sns
from sklearn.preprocessing import StandardScaler
from sklearn.preprocessing import LabelEncoder
from sklearn.model_selection import train_test_split
from imblearn.over_sampling import SMOTE
from sklearn.feature_selection import f_classif
from sklearn.feature_selection import SelectKBest
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import (
accuracy_score,
recall_score,
precision_score,
f1_score,
confusion_matrix,
)
app = pd.read_csv(r"application_record.csv")
rec = pd.read_csv(r"credit_record.csv")
def data_info(data):
cols = []
unique = []
n_uniques = []
dtypes = []
nulls = []
for col in data.columns:
cols.append(col)
dtypes.append(data[col].dtype)
n_uniques.append(data[col].nunique())
unique.append(data[col].unique())
nulls.append(data[col].isna().sum())
return pd.DataFrame(
{
"Col": cols,
"n_uniques": n_uniques,
"unique": unique,
"dtypes": dtypes,
"NULLS": nulls,
}
)
data_info(app)
data_info(rec)
app.info()
app.head()
rec.head()
# # 2.Data Preprocessing
# #### Check how many months each customer's status is 1 (good) or 0 (bad)
rec["STATUS"].unique()
rec["STATUS"].replace(
["C", "X", "0", "1", "2", "3", "4", "5"], [1, 1, 1, 0, 0, 0, 0, 0], inplace=True
)
final_result = rec.value_counts(subset=["ID", "STATUS"]).unstack(fill_value=0)
final_result
final_result["target"] = None
final_result["target"][final_result[0] > 0] = 0
final_result["target"].unique()
final_result["target"].unique()
final_result["target"].fillna(1, inplace=True)
final_result
new_target = pd.DataFrame(final_result["target"].astype(int))
data = app.merge(new_target, how="inner", on="ID")
data
# ### Check Missing Values
#
data.isna().sum()
data.fillna("other_type", inplace=True)
data.set_index("ID", inplace=True)
data.duplicated().sum()
data.drop_duplicates(inplace=True)
data.info()
data.reset_index("ID", inplace=True)
# ### 2.1Feature Generation
# ###### 2.1.1.1 Total income per person
# income per family member: divide each customer's total income by their family size, row by row
income_person = (data["AMT_INCOME_TOTAL"] / data["CNT_FAM_MEMBERS"]).tolist()
len(income_person)
income_per = pd.DataFrame(income_person, columns=["Person_income"])
income_per.set_index(data["ID"], inplace=True)
income_per
# ###### 2.1.1.2 How many years and months has the customer worked?
#
month_employe = []
year_employe = []
for i in data["DAYS_EMPLOYED"]:
z = i / 30
month_employe.append(z)
y = round((z / 12), 2)
year_employe.append(y)
len(year_employe)
employee_month = pd.DataFrame(month_employe, columns=["employee_Month"]).abs()
employee_year = pd.DataFrame(year_employe, columns=["employee_year"]).abs()
employee_year.set_index(data["ID"], inplace=True)
employee_month.set_index(data["ID"], inplace=True)
employee_month, employee_year
# ###### 2.1.1.3 Age of the customer
#
age = []
for i in data["DAYS_BIRTH"]:
z = i / 30
y = round(z / 12, 3)
age.append(y)
len(age)
age_ = pd.DataFrame(age, columns=["Age"])
age_.set_index(data["ID"], inplace=True)
age_ = age_.agg(abs)
age_
# ###### Merging new Features in application csv file
#
data = data.merge(income_per, how="inner", on="ID")
data = data.merge(employee_month, how="inner", on="ID")
data = data.merge(employee_year, how="inner", on="ID")
data = data.merge(age_, how="inner", on="ID")
data.head()
# I found employment lengths greater than 60 years, so we will drop those rows
sel = data[data["employee_year"] > 60].index
data.drop(sel, axis=0, inplace=True)
data.drop(3, axis=0, inplace=True)
data.head()
# ### 2.1.2 Credit Record Csv file
# ###### 2.1.2.1 Account Length
account_len = pd.DataFrame(rec.groupby("ID")["MONTHS_BALANCE"].agg(["max"]))
account_len = account_len.agg(abs)
account_len
# ###### 2.1.2.2 Starting Month
account_start = pd.DataFrame(rec.groupby("ID")["MONTHS_BALANCE"].agg(["min"]))
account_start = account_start.agg(abs)
account_start
# ###### 2.1.2.5 How many months did the customer pay the loan, and how many months not?
#
rec.value_counts(subset=["ID", "MONTHS_BALANCE"]).unstack(fill_value=0)
# No. of months paying the loan and no. of months not paying
fea_new = rec.groupby("ID").agg(sum)
pay = pd.DataFrame(fea_new["STATUS"])
pay
all_months = pd.DataFrame(rec.groupby("ID")["MONTHS_BALANCE"].count())
all_months.reset_index("ID", inplace=True)
# months not paying = total recorded months minus months paid, paired element-wise by ID order
not_pay = []
for i, j in zip(all_months["MONTHS_BALANCE"], pay["STATUS"]):
    z = i - j
    not_pay.append(z)
not_pay_ = pd.DataFrame(not_pay, columns=["Notpaying_loan"])
not_pay_.set_index(all_months["ID"], inplace=True)
not_pay_
data = data.merge(not_pay_, how="inner", on="ID")
data = data.merge(pay, how="inner", on="ID")
data = data.merge(account_len, how="inner", on="ID")
data = data.merge(account_start, how="inner", on="ID")
data.rename(
columns={"STATUS": "pay_loan", "max": "account_len", "min": "account_start"},
inplace=True,
)
data.head()
# data.rename(columns = {0:'year_employee'}, inplace = True)
# ### Check Outliers
sns.histplot(data["AMT_INCOME_TOTAL"], kde=True)
sns.boxplot(data["AMT_INCOME_TOTAL"])
# Apply the IQR rule because the distribution is skewed
q1 = data["AMT_INCOME_TOTAL"].quantile(0.25)
q3 = data["AMT_INCOME_TOTAL"].quantile(0.75)
iqr = q3 - q1
upper_whisker = q3 + 1.5 * iqr
lower_whisker = q1 - 1.5 * iqr
if lower_whisker < 0:
lower_whisker = 0
upper_whisker, lower_whisker
filt2 = data["AMT_INCOME_TOTAL"] > upper_whisker
filt3 = data["AMT_INCOME_TOTAL"] < lower_whisker
out2 = data[filt2].index
out3 = data[filt3].index
data.drop(out3, axis=0, inplace=True)
data.info()
data.set_index("ID", inplace=True)
data["target"].value_counts()
# ## Feature Scaling
scl = StandardScaler()
data["AMT_INCOME_TOTAL"] = scl.fit_transform(
np.array(data["AMT_INCOME_TOTAL"]).reshape(-1, 1)
)
data["CNT_CHILDREN"] = scl.fit_transform(np.array(data["CNT_CHILDREN"]).reshape(-1, 1))
data["DAYS_BIRTH"] = scl.fit_transform(np.array(data["DAYS_BIRTH"]).reshape(-1, 1))
data["DAYS_EMPLOYED"] = scl.fit_transform(
np.array(data["DAYS_EMPLOYED"]).reshape(-1, 1)
)
data["CNT_FAM_MEMBERS"] = scl.fit_transform(
np.array(data["CNT_FAM_MEMBERS"]).reshape(-1, 1)
)
data["Person_income"] = scl.fit_transform(
np.array(data["Person_income"]).reshape(-1, 1)
)
data["employee_Month"] = scl.fit_transform(
np.array(data["employee_Month"]).reshape(-1, 1)
)
data["employee_year"] = scl.fit_transform(
np.array(data["employee_year"]).reshape(-1, 1)
)
data["Age"] = scl.fit_transform(np.array(data["Age"]).reshape(-1, 1))
data["Notpaying_loan"] = scl.fit_transform(
np.array(data["Notpaying_loan"]).reshape(-1, 1)
)
data["pay_loan"] = scl.fit_transform(np.array(data["pay_loan"]).reshape(-1, 1))
data["account_len"] = scl.fit_transform(np.array(data["account_len"]).reshape(-1, 1))
# ## Encoding
# #### One Hot Encoding
data = pd.get_dummies(data, columns=["CODE_GENDER", "FLAG_OWN_CAR", "FLAG_OWN_REALTY"])
# #### Label Encoding
lb = LabelEncoder()
col = [
"NAME_INCOME_TYPE",
"NAME_EDUCATION_TYPE",
"NAME_FAMILY_STATUS",
"NAME_HOUSING_TYPE",
"OCCUPATION_TYPE",
]
for i in col:
data[i] = lb.fit_transform(data[i])
# # Splitting Data
x = data.drop("target", axis=1)
y = data["target"]
x.info()
# ## Feature Selection
# For numerical data we will use the ANOVA F-test (f_classif) for feature selection
## from sklearn.feature_selection import SelectKBest
select = SelectKBest(f_classif, k=15)
select_up = select.fit_transform(x, y)
select_feat = select.get_support()
p_value = np.round(select.pvalues_, 4)
f_value = np.round(select.scores_, 4)
select_inde = select.get_support(indices=True)
select_inde
x = data.iloc[:, select_inde]
y = data["target"]
print("Selected Features : \n\n", x.columns)
# x = data.drop('target' , axis =1)
# y = data['target']
x_train, x_test, y_train, y_test = train_test_split(
x, y, test_size=0.3, random_state=42, shuffle=True, stratify=data["target"]
)
# from imblearn.combine import SMOTETomek
# smt = SMOTETomek(random_state=42)
# X_res, y_res = smt.fit_resample(x_train, y_train)
tl = SMOTE()
X_res, y_res = tl.fit_resample(x_train, y_train)
# from imblearn.under_sampling import TomekLinks
# tl = TomekLinks()
# X_res_2, y_res_2 = tl.fit_resample(X_res, y_res)
model = []
pre_train = []
rec_train = []
f1_train = []
spe_train = []
pre_test = []
rec_test = []
f1_test = []
spe_test = []
# # KNN
knn5 = KNeighborsClassifier(n_neighbors=6)
knn5.fit(X_res, y_res)
y_pred = knn5.predict(X_res)
print(confusion_matrix(y_res, y_pred))
tn, fp, fn, tp = confusion_matrix(y_res, y_pred).ravel()
specificity_tra = round(tn / (tn + fp), 4)
acc_tra = round(accuracy_score(y_res, y_pred), 4)
rec_tra = round(recall_score(y_res, y_pred), 4)
pre_tra = round(precision_score(y_res, y_pred), 4)
f1_tra = round(f1_score(y_res, y_pred), 4)
print("accuracy_score : ", acc_tra)
print("recall_score : ", rec_tra)
print("precision_score : ", pre_tra)
print("f1_score : ", f1_tra)
print("specificity :", specificity_tra)
model.append("KNN")
pre_train.append(pre_tra)
rec_train.append(rec_tra)
f1_train.append(f1_tra)
spe_train.append(specificity_tra)
y_pred = knn5.predict(x_test)
print(confusion_matrix(y_test, y_pred))
tn, fp, fn, tp = confusion_matrix(y_test, y_pred).ravel()
specificity_tes = round(tn / (tn + fp), 4)
acc_tes = round(accuracy_score(y_test, y_pred), 4)
rec_tes = round(recall_score(y_test, y_pred), 4)
pre_tes = round(precision_score(y_test, y_pred), 4)
f1_tes = round(f1_score(y_test, y_pred), 4)
print("accuracy_score : ", acc_tes)
print("recall_score : ", rec_tes)
print("precision_score : ", pre_tes)
print("f1_score : ", f1_tes)
print("specificity :", specificity_tes)
pre_test.append(pre_tes)
rec_test.append(rec_tes)
f1_test.append(f1_tes)
spe_test.append(specificity_tes)
history = {
"precision_score": [pre_train, pre_test],
"recall_score": [rec_train, rec_test],
"f1_score": [f1_train, f1_test],
"specificity": [spe_train, spe_test],
}
classification_report = pd.DataFrame(history, index=["Train", "Test"])
classification_report
|
/fsx/loubna/kaggle_data/kaggle-code-data/data/0129/334/129334351.ipynb
|
credit-card-approval-prediction
|
rikdifos
|
[{"Id": 129334351, "ScriptId": 38454283, "ParentScriptVersionId": NaN, "ScriptLanguageId": 9, "AuthorUserId": 11235793, "CreationDate": "05/12/2023 22:11:01", "VersionNumber": 2.0, "Title": "notebook952dcca237", "EvaluationDate": "05/12/2023", "IsChange": false, "TotalLines": 430.0, "LinesInsertedFromPrevious": 0.0, "LinesChangedFromPrevious": 0.0, "LinesUnchangedFromPrevious": 430.0, "LinesInsertedFromFork": NaN, "LinesDeletedFromFork": NaN, "LinesChangedFromFork": NaN, "LinesUnchangedFromFork": NaN, "TotalVotes": 0}]
|
[{"Id": 185282354, "KernelVersionId": 129334351, "SourceDatasetVersionId": 1031720}]
|
[{"Id": 1031720, "DatasetId": 426827, "DatasourceVersionId": 1060735, "CreatorUserId": 3396171, "LicenseName": "CC0: Public Domain", "CreationDate": "03/24/2020 10:04:48", "VersionNumber": 3.0, "Title": "Credit Card Approval Prediction", "Slug": "credit-card-approval-prediction", "Subtitle": "A Credit Card Dataset for Machine Learning", "Description": "# A Credit Card Dataset for Machine Learning!\n\n**Don't ask me where this data come from, the answer is I don't know!**\n\n### Context\n\nCredit score cards are a common risk control method in the financial industry. It uses personal information and data submitted by credit card applicants to predict the probability of future defaults and credit card borrowings. The bank is able to decide whether to issue a credit card to the applicant. Credit scores can objectively quantify the magnitude of risk.\n \nGenerally speaking, credit score cards are based on historical data. Once encountering large economic fluctuations. Past models may lose their original predictive power. Logistic model is a common method for credit scoring. Because Logistic is suitable for binary classification tasks and can calculate the coefficients of each feature. In order to facilitate understanding and operation, the score card will multiply the logistic regression coefficient by a certain value (such as 100) and round it.\n \nAt present, with the development of machine learning algorithms. More predictive methods such as Boosting, Random Forest, and Support Vector Machines have been introduced into credit card scoring. However, these methods often do not have good transparency. It may be difficult to provide customers and regulators with a reason for rejection or acceptance.\n\n### Task\n\nBuild a machine learning model to predict if an applicant is 'good' or 'bad' client, different from other tasks, the definition of 'good' or 'bad' is not given. You should use some techique, such as [vintage analysis](https://www.kaggle.com/rikdifos/eda-vintage-analysis) to construct you label. Also, unbalance data problem is a big problem in this task. \n\n### Content & Explanation\n\nThere're two tables could be merged by `ID`:\n\n| application_record.csv | \u3000 | \u3000 |\n|:-----------------------:|---------------------------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|\n| Feature name | Explanation | Remarks |\n| `ID` | Client number | \u3000 |\n| `CODE_GENDER` | Gender | \u3000 |\n| `FLAG_OWN_CAR` | Is there a car | \u3000 |\n| `FLAG_OWN_REALTY` | Is there a property | \u3000 |\n| `CNT_CHILDREN` | Number of children | \u3000 |\n| `AMT_INCOME_TOTAL` | Annual income | \u3000 |\n| `NAME_INCOME_TYPE` | Income category | \u3000 |\n| `NAME_EDUCATION_TYPE` | Education level | \u3000 |\n| `NAME_FAMILY_STATUS` | Marital status | \u3000 |\n| `NAME_HOUSING_TYPE` | Way of living | \u3000 |\n| `DAYS_BIRTH` | Birthday | \u3000 Count backwards from current day (0), -1 means yesterday |\n| `DAYS_EMPLOYED` | Start date of employment | Count backwards from current day(0). If positive, it means the person currently unemployed. 
|\n| `FLAG_MOBIL` | Is there a mobile phone | \u3000 |\n| `FLAG_WORK_PHONE` | Is there a work phone | \u3000 |\n| `FLAG_PHONE` | Is there a phone | \u3000 |\n| `FLAG_EMAIL` | Is there an email | \u3000 |\n| `OCCUPATION_TYPE` | Occupation | \u3000 |\n| `CNT_FAM_MEMBERS` | Family size | \u3000 |\n\n-------------------\n\n| credit_record.csv | \u3000 | \u3000 |\n|:-----------------------:|---------------------------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|\n| Feature name | Explanation | Remarks |\n| `ID` | Client number | \u3000 |\n| `MONTHS_BALANCE` | Record month | The month of the extracted data is the starting point, backwards, 0 is the current month, -1 is the previous month, and so on |\n| `STATUS` | Status | 0: 1-29 days past due 1: 30-59 days past due 2: 60-89 days overdue 3: 90-119 days overdue 4: 120-149 days overdue 5: Overdue or bad debts, write-offs for more than 150 days C: paid off that month X: No loan for the month |\n\nRelated data : [Credit Card Fraud Detection](https://www.kaggle.com/mlg-ulb/creditcardfraud)\nRelated competition: [Home Credit Default Risk](https://www.kaggle.com/c/home-credit-default-risk)", "VersionNotes": "delete dictionary version", "TotalCompressedBytes": 0.0, "TotalUncompressedBytes": 0.0}]
|
[{"Id": 426827, "CreatorUserId": 3396171, "OwnerUserId": 3396171.0, "OwnerOrganizationId": NaN, "CurrentDatasetVersionId": 1031720.0, "CurrentDatasourceVersionId": 1060735.0, "ForumId": 439379, "Type": 2, "CreationDate": "11/26/2019 09:23:20", "LastActivityDate": "11/26/2019", "TotalViews": 490941, "TotalDownloads": 54630, "TotalVotes": 691, "TotalKernels": 149}]
|
[{"Id": 3396171, "UserName": "rikdifos", "DisplayName": "Seanny", "RegisterDate": "06/26/2019", "PerformanceTier": 2}]
|
| false | 0 | 3,779 | 0 | 4,972 | 3,779 |
||
129343827
|
<jupyter_start><jupyter_text>Predictive Maintenance Dataset (AI4I 2020)
Please note that **this is the original dataset** with **additional information and proper attribution**. There is at least one other version of this dataset on Kaggle that was uploaded without permission. Please be fair and attribute the original author.
This synthetic dataset is modeled after an existing milling machine and consists of 10,000 data points stored as rows with 14 features in columns.
1. UID: unique identifier ranging from 1 to 10000
2. product ID: consisting of a letter L, M, or H for low (50% of all products), medium (30%) and high (20%) as product quality variants and a variant-specific serial number
3. type: just the product type L, M or H from column 2
4. air temperature [K]: generated using a random walk process later normalized to a standard deviation of 2 K around 300 K
5. process temperature [K]: generated using a random walk process normalized to a standard deviation of 1 K, added to the air temperature plus 10 K.
6. rotational speed [rpm]: calculated from a power of 2860 W, overlaid with a normally distributed noise
7. torque [Nm]: torque values are normally distributed around 40 Nm with a SD = 10 Nm and no negative values.
8. tool wear [min]: The quality variants H/M/L add 5/3/2 minutes of tool wear to the used tool in the process.
9. a 'machine failure' label that indicates whether the machine has failed at this particular data point because any of the following failure modes is true.
The machine failure consists of five independent failure modes
10. tool wear failure (TWF): the tool will be replaced or fails at a randomly selected tool wear time between 200 and 240 mins (120 times in our dataset). At this point in time, the tool is replaced 69 times, and fails 51 times (randomly assigned).
11. heat dissipation failure (HDF): heat dissipation causes a process failure if the difference between air and process temperature is below 8.6 K and the tool's rotational speed is below 1380 rpm. This is the case for 115 data points.
12. power failure (PWF): the product of torque and rotational speed (in rad/s) equals the power required for the process. If this power is below 3500 W or above 9000 W, the process fails, which is the case 95 times in our dataset.
13. overstrain failure (OSF): if the product of tool wear and torque exceeds 11,000 minNm for the L product variant (12,000 M, 13,000 H), the process fails due to overstrain. This is true for 98 datapoints.
14. random failures (RNF): each process has a 0.1% chance of failing regardless of its process parameters. This is the case for only 5 datapoints, less than could be expected for 10,000 datapoints in our dataset.
If at least one of the above failure modes is true, the process fails and the 'machine failure' label is set to 1. It is therefore not transparent to the machine learning method, which of the failure modes has caused the process to fail.
This dataset is part of the following publication, please cite when using this dataset:
S. Matzka, "Explainable Artificial Intelligence for Predictive Maintenance Applications," 2020 Third International Conference on Artificial Intelligence for Industries (AI4I), 2020, pp. 69-74, doi: 10.1109/AI4I49448.2020.00023.
The image of the milling process is the work of Daniel Smyth @ Pexels: https://www.pexels.com/de-de/foto/industrie-herstellung-maschine-werkzeug-10406128/
Kaggle dataset identifier: predictive-maintenance-dataset-ai4i-2020
<jupyter_script># # 'Machine Failure' Notebook 2 - Classification vs Regression
# * Dataset from https://www.kaggle.com/datasets/stephanmatzka/predictive-maintenance-dataset-ai4i-2020
# * Previous explorations using this dataset
# * [Notebook 1 - Machine Failure Classification n Global Explainability](https://www.kaggle.com/code/kaiquanmah/machine-failure-classification-n-global-explainab) where we explored the concept of 'explainability' using [BCG GAMMA's Facet library](https://github.com/BCG-Gamma/facet)
# * This notebook aims to **explore the performance of a classifier vs a regressor for predicting 'machine failure'**, while still being **aware that the whole dataset is imbalanced (96.6% machinesOk : 3.4% machinesFailing ratio)**
# ### Reason for this exploration
# * **Binary classification gives us binary predictions**
# * 0 (machine is ok)
# * 1 (machine is failing)
# * **Regression gives us a continuous range of prediction values** between and including 0 and 1
# * 0 (machine is in its best shape)
# * values between 0 and 1, eg
# * 0.25 (**low risk** of machine failing)
# * 0.5 (**moderate risk** of machine failing)
# * 0.75 (**high risk** of machine failing)
# * 1 (**very high risk** of machine failing. Machine is showing signs that it is probably going to fail)
# * **Modeling/predicting 'machine failure' before it happens seems to be more of a regression problem than a classification problem**
# * **Whether we use human intuition or a machine's prediction (and a 'tuned' threshold) to decide it is time to repair a machine before it fails, there is a level of uncertainty involved. This is probably best modelled as a 'regression' problem instead of a classification problem (see the threshold sketch after this list)**
# * The classification framing is probably less suitable for prediction and better suited to recording the 'machine failure' class/status after machine failure has happened, if it actually happens
# * **Hypothesis: I expect a regressor to give better performance than a classifier for our imbalanced machine failure prediction dataset (even though the actual 'Machine failure' column contains binary 0/1 values)**
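# A minimal sketch of the threshold idea above: map a continuous failure-risk score to a repair decision. The `maintenance_decision` helper, the example scores and the 0.7 threshold are made-up illustration values, not outputs of any fitted model.
import numpy as np


def maintenance_decision(risk_scores, threshold=0.5):
    """Map continuous failure-risk scores in [0, 1] to a binary 'schedule repair' flag."""
    risk_scores = np.asarray(risk_scores)
    return (risk_scores >= threshold).astype(int)


# example risk scores a regressor might output for five machines (made-up values)
example_scores = [0.05, 0.25, 0.5, 0.75, 0.95]
print(maintenance_decision(example_scores, threshold=0.7))  # -> [0 0 0 1 1]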
# ### Therefore, this notebook explores the following topics
# * **Stratified Train-test split**
# * Data is split keeping the same 'machinesOk:machinesFailing' ratio in both the training and test set
# * **RepeatedStratifiedKFold on training set**
# * Each 'fold' should have approximately the same 'machinesOk:machinesFailing' ratio
# * **Train a classifier** on the whole imbalanced dataset (96.6% machinesOk : 3.4% machinesFailing ratio)
# * **Train a regressor** on the whole imbalanced dataset
# * **Compare performance of a classifier vs a regressor for predicting 'machine failure'**
# import packages outside of FACET
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.model_selection import (
train_test_split,
RepeatedStratifiedKFold,
GridSearchCV,
)
# from sklearn.preprocessing import LabelEncoder, OneHotEncoder
from xgboost import XGBClassifier, XGBRegressor
seed = 8
# # Predictive Maintenance Dataset
# load the prepared dataframe
df = pd.read_csv(
"/kaggle/input/predictive-maintenance-dataset-ai4i-2020/ai4i2020.csv",
encoding="utf-8",
)
# quick look
df.head()
# each observation has a value in each col
df.info()
df.describe()
df.columns
# # Variable Analysis (From Notebook 1)
# In terms of correlated variables
# * product ID, type: product ID 1st character = type (L/M/H)
# * type, tool wear: type L/M/H leads to addition of '2/3/5' mins to 'tool wear'
# In terms of feature contribution to targets
# * tool wear time -> tool wear time between 200 - 240 mins -> replace tool, so AFFECTS -> tool wear failure (TWF)
# * Rule is True for 69/120, False for 51/120 cases
# * air temp, process temp, rotational speed -> difference between air and process temperature is below 8.6 K AND tool rotational speed is below 1380 rpm AFFECTS -> heat dissipation failure (HDF)
# * True for 115 heat dissipation failures => BUT WHAT % of heat dissipation failures did this rule work/failed? ASSUME 100%?
# * torque, rotational speed -> product of torque and rotational speed (in rad/s) equals the power required for the process -> If this power is below 3500 W or above 9000 W, the process fails -> power failure (PWF)
# * True for 95 power failures
# * tool wear, torque, type -> product of tool wear and torque exceeds 11,000 minNm for the L product variant (12,000 M, 13,000 H), process fails due to overstrain -> overstrain failure (OSF)
# * True for 98 overstrain failures
# * 0.1% chance fail -> regardless of process parameters -> random failures (RNF)
# * True for only 5 datapoints
# * Probably not important to model this 0.1% chance as it seems to be 'random noise' which can happen due to unknown reasons
# In terms of target to target relationship
# * tool wear failure (TWF), heat dissipation failure (HDF), power failure (PWF), overstrain failure (OSF), random failures (RNF) LEADS TO -> machine failure
# Outcome we want
# * We want to predict when a machine would have 'machine failure' before it happens, if possible
# * So we have a binary classification problem - machine failure or not (the documented failure-mode rules are sketched in code below as a sanity check)
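# A quick sanity-check sketch of the documented failure-mode rules, assuming the raw columns ('Type', 'HDF', 'PWF', 'OSF') are present as described in the dataset description; the `rules` dataframe name and the temperature-difference direction (process minus air) are assumptions for illustration only.
import numpy as np

rules = pd.DataFrame(index=df.index)
# heat dissipation failure: temperature difference below 8.6 K and rotational speed below 1380 rpm
rules["HDF_rule"] = (
    ((df["Process temperature [K]"] - df["Air temperature [K]"]) < 8.6)
    & (df["Rotational speed [rpm]"] < 1380)
).astype(int)
# power failure: power = torque * rotational speed in rad/s, fails below 3500 W or above 9000 W
power = df["Torque [Nm]"] * df["Rotational speed [rpm]"] * 2 * np.pi / 60
rules["PWF_rule"] = ((power < 3500) | (power > 9000)).astype(int)
# overstrain failure: tool wear * torque above 11,000/12,000/13,000 minNm for L/M/H variants
osf_limit = df["Type"].map({"L": 11000, "M": 12000, "H": 13000})
rules["OSF_rule"] = ((df["Tool wear [min]"] * df["Torque [Nm]"]) > osf_limit).astype(int)
# compare rule hits against the recorded failure-mode labels
for mode in ["HDF", "PWF", "OSF"]:
    print(mode, "rule hits:", rules[f"{mode}_rule"].sum(), "| labelled:", df[mode].sum())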
# # Get the relevant features and target (From Notebook 1)
# # get 'type' column's values
# label_encoder = LabelEncoder()
# df['Type'] = label_encoder.fit_transform(df['Type'])
# # one-hot encode the 'type' column
# one_hot_encoder = OneHotEncoder(sparse=False)
# # data[['junior','senior']] = one_hot_encoder.fit_transform(df['Type'].values.reshape(-1,1))
# https://stackoverflow.com/questions/52430798/onehotencoder-encoding-only-some-of-categorical-variable-columns
df = pd.concat((df, pd.get_dummies(df.Type)), axis=1)
df.head()
df.info()
df.columns
# features
# feature_names = ['Product ID', 'Type', 'Air temperature [K]', 'Process temperature [K]', 'Rotational speed [rpm]', 'Torque [Nm]', 'Tool wear [min]']
# drop 'product id' column as it is a categorical column
# drop 'Type' -> use 'L/M/H' instead
feature_names = [
"Air temperature [K]",
"Process temperature [K]",
"Rotational speed [rpm]",
"Torque [Nm]",
"Tool wear [min]",
"L",
"M",
"H",
]
print(f"num features: {len(feature_names)}")
X = df[feature_names]
X.head()
# target
target_name = "Machine failure"
y = df[target_name]
y.head()
df_relevantcols = pd.concat([X, pd.DataFrame(y)], axis=1)
df_relevantcols.head()
# because of an issue with feature names later, we fix feature/col names now
# ValueError: XGBClassifierDF.fit: feature_names must be string, and may not contain [, ] or <
# https://stackoverflow.com/questions/19758364/rename-specific-columns-in-pandas
dict_renamed_cols = {
"Air temperature [K]": "air_temp",
"Process temperature [K]": "process_temp",
"Rotational speed [rpm]": "rotation_spd",
"Torque [Nm]": "torque",
"Tool wear [min]": "tool_wear",
}
df_relevantcols.rename(columns=dict_renamed_cols, inplace=True)
df_relevantcols
df_relevantcols.columns
# split back into X and y
X, y = (
df_relevantcols[
[
"air_temp",
"process_temp",
"rotation_spd",
"torque",
"tool_wear",
"L",
"M",
"H",
]
],
df_relevantcols[["Machine failure"]],
)
X.head()
y = y.squeeze()
y.head()
# # Stratified Train-test split
# Data is split keeping the same 'machinesOk:machinesFailing' ratio in both the training and test set
# https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.train_test_split.html
# https://stackoverflow.com/questions/34842405/parameter-stratify-from-method-train-test-split-scikit-learn
# full dataset 10k records
X_train, X_test, y_train, y_test = train_test_split(
X, y, test_size=0.2, random_state=seed, shuffle=True, stratify=y
)
# sanity check row count and machinesOk:machinesFailing ratio
print(f"num rows in X_train: {len(X_train)}")
print(f"num rows in X_test: {len(X_test)}")
print(f"num rows in y_train: {len(y_train)}")
print(f"% rows in y_train with machine failure: {sum(y_train)/len(y_train)}")
print(f"num rows in y_test: {len(y_test)}")
print(f"% rows in y_test with machine failure: {sum(y_test)/len(y_test)}")
# # RepeatedStratifiedKFold on training set
# Each 'fold' should have approximately the same 'machinesOk:machinesFailing' ratio
# https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.RepeatedStratifiedKFold.html#sklearn.model_selection.RepeatedStratifiedKFold
# https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.StratifiedKFold.html#sklearn.model_selection.StratifiedKFold
rskf = RepeatedStratifiedKFold(n_splits=5, n_repeats=10, random_state=seed)
# 5 splits x 10 times (repeated)
rskf.get_n_splits(X_train, y_train)
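# Quick illustrative check that each stratified validation fold keeps roughly the same machinesOk:machinesFailing ratio as the full training set; `fold_ratios` is just a throwaway name for this sketch.
fold_ratios = []
for tr_idx, va_idx in rskf.split(X_train, y_train):
    fold_ratios.append(y_train.iloc[va_idx].mean())
print(f"overall failure rate in y_train: {y_train.mean():.4f}")
print(f"validation-fold failure rate: min={min(fold_ratios):.4f}, max={max(fold_ratios):.4f}")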
# # Train a classifier on the whole imbalanced dataset
for i in range(50, 1050, 50):
print(i)
np.linspace(0.1, 0.3, 12)
# https://xgboost.readthedocs.io/en/stable/python/python_api.html#xgboost.XGBClassifier
clf_model = XGBClassifier(random_state=seed)
# https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.GridSearchCV.html
# https://machinelearningmastery.com/tune-number-size-decision-trees-xgboost-python/#:~:text=Quickly%2C%20the%20model%20reaches%20a,the%20XGBoost%20library%20is%20100.
# https://mljar.com/blog/xgboost-early-stopping/
# clf_paramgrid = {'n_estimators': range(50, 1050, 50),
# 'max_depth': [2, 3, 4, 5],
# 'learning_rate': np.linspace(0.1, 0.3, 12),
# 'early_stopping_rounds': range(5, 105, 5), # early_stopping_rounds 10% of n_estimators
# 'reg_lambda ': [0.001, 0.01, 0.1, 1, 3, 5, 10]}
clf_paramgrid = {
"n_estimators": [500, 750, 1000],
"max_depth": [2, 3, 4, 5],
"learning_rate": [0.001, 0.01],
# 'early_stopping_rounds': [50, 100], # must have validation set for early stopping, so we explore this param
"reg_lambda": [0.001],
}
temp_clf = XGBClassifier(
random_state=seed,
n_estimators=750,
max_depth=2,
learning_rate=0.01,
reg_lambda=0.001,
)
temp_clf.fit(X_train, y_train)
# count number of fits
# 5 n_splits x 10 n_repeats = 50 CV fits per parameter combination
# 3 n_estimators
# 4 max_depth
# 2 learning_rate
# 1 reg_lambda
5 * 10 * 3 * 4 * 2 * 1
clf_gridsearch = GridSearchCV(
clf_model,
clf_paramgrid,
# scoring="neg_log_loss",
cv=rskf,
verbose=2,
)
clf_gridsearch_result = clf_gridsearch.fit(X_train, y_train)
# summarize results
print(
    f"Best score: {clf_gridsearch_result.best_score_}, using: {clf_gridsearch_result.best_params_}"
)
means = clf_gridsearch_result.cv_results_["mean_test_score"]
stds = clf_gridsearch_result.cv_results_["std_test_score"]
params = clf_gridsearch_result.cv_results_["params"]
for mean, stdev, param in zip(means, stds, params):
print("%f (%f) with: %r" % (mean, stdev, param))
# # Train a regressor on the whole imbalanced dataset
#
# https://xgboost.readthedocs.io/en/stable/python/python_api.html#xgboost.XGBRegressor
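# A minimal sketch for this section: mirror the classifier set-up with an XGBRegressor fitted on the same imbalanced training data. The `reg_model` name and the hyperparameter values below are illustrative assumptions, not tuned results.
reg_model = XGBRegressor(
    random_state=seed,
    n_estimators=750,
    max_depth=2,
    learning_rate=0.01,
    reg_lambda=0.001,
    objective="reg:squarederror",
)
reg_model.fit(X_train, y_train)
# continuous 'failure risk' predictions on the held-out test set, clipped into [0, 1]
reg_risk = np.clip(reg_model.predict(X_test), 0, 1)
print(reg_risk[:10])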
# # Compare performance of a classifier vs a regressor
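# A minimal comparison sketch: score the tuned classifier and the thresholded regressor from the sketch above on the same held-out test set. The 0.5 threshold is an arbitrary assumption and would normally be tuned on a validation set.
from sklearn.metrics import f1_score, precision_score, recall_score

clf_pred = clf_gridsearch_result.best_estimator_.predict(X_test)
reg_pred = (reg_risk >= 0.5).astype(int)

for name, pred in [("classifier", clf_pred), ("regressor @ 0.5", reg_pred)]:
    print(
        f"{name}: precision={precision_score(y_test, pred):.3f}, "
        f"recall={recall_score(y_test, pred):.3f}, f1={f1_score(y_test, pred):.3f}"
    )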
# # set-up and run a simulation
# list_sim_feature = ['air_temp', 'process_temp', 'rotation_spd', 'torque', 'tool_wear', 'L', 'M', 'H']
# def reusable_simulator(SIM_FEATURE):
# temp_bins = ContinuousRangePartitioner()
# temp_simulator = UnivariateProbabilitySimulator(crossfit=boot_crossfit, n_jobs=3)
# temp_simulation = temp_simulator.simulate_feature(feature_name=SIM_FEATURE, partitioner=temp_bins)
# # plot how different values of the selected feature affects the target
# return SimulationDrawer().draw(data=temp_simulation, title=SIM_FEATURE)
# for SIM_FEATURE in list_sim_feature:
# reusable_simulator(SIM_FEATURE)
|
/fsx/loubna/kaggle_data/kaggle-code-data/data/0129/343/129343827.ipynb
|
predictive-maintenance-dataset-ai4i-2020
|
stephanmatzka
|
[{"Id": 129343827, "ScriptId": 38418218, "ParentScriptVersionId": NaN, "ScriptLanguageId": 9, "AuthorUserId": 3485410, "CreationDate": "05/13/2023 01:35:26", "VersionNumber": 6.0, "Title": "Machine Failure Classification vs Regression", "EvaluationDate": "05/13/2023", "IsChange": true, "TotalLines": 262.0, "LinesInsertedFromPrevious": 2.0, "LinesChangedFromPrevious": 0.0, "LinesUnchangedFromPrevious": 260.0, "LinesInsertedFromFork": 163.0, "LinesDeletedFromFork": 376.0, "LinesChangedFromFork": 0.0, "LinesUnchangedFromFork": 99.0, "TotalVotes": 0}]
|
[{"Id": 185300420, "KernelVersionId": 129343827, "SourceDatasetVersionId": 4458097}]
|
[{"Id": 4458097, "DatasetId": 2609801, "DatasourceVersionId": 4517974, "CreatorUserId": 10008882, "LicenseName": "CC BY-NC-SA 4.0", "CreationDate": "11/06/2022 11:06:47", "VersionNumber": 2.0, "Title": "Predictive Maintenance Dataset (AI4I 2020)", "Slug": "predictive-maintenance-dataset-ai4i-2020", "Subtitle": "The original dataset of a synthetic milling process for classification and XAI.", "Description": "Please note that **this is the original dataset** with **additional information and proper attribution**. There is at least one other version of this dataset on Kaggle that was uploaded without permission. Please be fair and attribute the original author.\nThis synthetic dataset is modeled after an existing milling machine and consists of 10 000 data points from a stored as rows with 14 features in columns\n\n1.\tUID: unique identifier ranging from 1 to 10000\n2.\tproduct ID: consisting of a letter L, M, or H for low (50% of all products), medium (30%) and high (20%) as product quality variants and a variant-specific serial number\n3.\ttype: just the product type L, M or H from column 2\n4.\tair temperature [K]: generated using a random walk process later normalized to a standard deviation of 2 K around 300 K\n5.\tprocess temperature [K]: generated using a random walk process normalized to a standard deviation of 1 K, added to the air temperature plus 10 K.\n6.\trotational speed [rpm]: calculated from a power of 2860 W, overlaid with a normally distributed noise\n7.\ttorque [Nm]: torque values are normally distributed around 40 Nm with a SD = 10 Nm and no negative values.\n8.\ttool wear [min]: The quality variants H/M/L add 5/3/2 minutes of tool wear to the used tool in the process. \n9.\ta 'machine failure' label that indicates, whether the machine has failed in this particular datapoint for any of the following failure modes are true.\n\nThe machine failure consists of five independent failure modes\n10.\ttool wear failure (TWF): the tool will be replaced of fail at a randomly selected tool wear time between 200 - 240 mins (120 times in our dataset). At this point in time, the tool is replaced 69 times, and fails 51 times (randomly assigned).\n11.\theat dissipation failure (HDF): heat dissipation causes a process failure, if the difference between air- and process temperature is below 8.6 K and the tools rotational speed is below 1380 rpm. This is the case for 115 data points.\n12.\tpower failure (PWF): the product of torque and rotational speed (in rad/s) equals the power required for the process. If this power is below 3500 W or above 9000 W, the process fails, which is the case 95 times in our dataset.\n13.\toverstrain failure (OSF): if the product of tool wear and torque exceeds 11,000 minNm for the L product variant (12,000 M, 13,000 H), the process fails due to overstrain. This is true for 98 datapoints.\n14.\trandom failures (RNF): each process has a chance of 0,1 % to fail regardless of its process parameters. This is the case for only 5 datapoints, less than could be expected for 10,000 datapoints in our dataset.\nIf at least one of the above failure modes is true, the process fails and the 'machine failure' label is set to 1. It is therefore not transparent to the machine learning method, which of the failure modes has caused the process to fail.\n\nThis dataset is part of the following publication, please cite when using this dataset:\nS. 
Matzka, \"Explainable Artificial Intelligence for Predictive Maintenance Applications,\" 2020 Third International Conference on Artificial Intelligence for Industries (AI4I), 2020, pp. 69-74, doi: 10.1109/AI4I49448.2020.00023.\n\nThe image of the milling process is the work of Daniel Smyth @ Pexels: https://www.pexels.com/de-de/foto/industrie-herstellung-maschine-werkzeug-10406128/", "VersionNotes": "Data Update 2022/11/06", "TotalCompressedBytes": 0.0, "TotalUncompressedBytes": 0.0}]
|
[{"Id": 2609801, "CreatorUserId": 10008882, "OwnerUserId": 10008882.0, "OwnerOrganizationId": NaN, "CurrentDatasetVersionId": 4458097.0, "CurrentDatasourceVersionId": 4517974.0, "ForumId": 2640480, "Type": 2, "CreationDate": "11/06/2022 11:00:11", "LastActivityDate": "11/06/2022", "TotalViews": 17147, "TotalDownloads": 1355, "TotalVotes": 35, "TotalKernels": 16}]
|
[{"Id": 10008882, "UserName": "stephanmatzka", "DisplayName": "Stephan Matzka", "RegisterDate": "03/22/2022", "PerformanceTier": 0}]
|
| false | 1 | 3,702 | 0 | 4,693 | 3,702 |
||
129343031
|
import sys
sys.path.append("/kaggle/input/amp-pd")
import pandas as pd
import numpy as np
import pickle
import amp_pd_peptide
from sklearn.linear_model import LinearRegression, Ridge
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import StratifiedKFold
import warnings
warnings.filterwarnings("ignore")
def preprocess_train_df(train_clin_df, train_prot_df, train_pep_df):
"""
Takes in the train_clinical_data.csv, train_peptides.csv, train_proteins.csv as pandas dataframes
Combines the protein and peptide data names and the joins with the train clinical data
The dataframes are stratified kfold based on the target
The function creates one dataframe for each target (updrs_1, updrs_2, updrs_3, updrs_4) stored in the final_df dictionary
Returns a dictionary of the dataframes for each updrs target
"""
# drop the medication column
train_clin_df = train_clin_df.drop(columns=["upd23b_clinical_state_on_medication"])
# create a column with the UniProt and Peptide name combined
train_pep_df["peptide_uniprot"] = (
train_pep_df["Peptide"] + "_" + train_pep_df["UniProt"]
)
# create a table with the visit_id as the index and the proteins or peptides as the feature and the abundance as the values
train_prot_pivot = train_prot_df.pivot(
index="visit_id", values="NPX", columns="UniProt"
)
train_pep_pivot = train_pep_df.pivot(
index="visit_id", values="PeptideAbundance", columns="peptide_uniprot"
)
# combine the two tables on the visit_id
full_prot_train_df = train_prot_pivot.join(train_pep_pivot)
# fill nan with 0 for this first round
full_prot_train_df = full_prot_train_df.fillna(0)
full_train_df = train_clin_df.merge(
full_prot_train_df, how="inner", left_on="visit_id", right_on="visit_id"
)
full_train_df = full_train_df.sample(frac=1).reset_index(drop=True)
updrs = ["updrs_1", "updrs_2", "updrs_3", "updrs_4"]
final_dfs = dict()
for target in updrs:
to_remove = [updr for updr in updrs if updr != target]
temp_train_df = full_train_df.drop(to_remove, axis=1)
temp_train_df = temp_train_df.dropna()
# calculate the number of bins by Sturge's rule
num_bins = int(np.floor(1 + np.log2(len(full_train_df))))
temp_train_df.loc[:, "bins"] = pd.cut(
temp_train_df[target], bins=num_bins, labels=False
)
temp_train_df = temp_train_df.dropna().reset_index(drop=True)
# initiate the kfold class from sklearn
kf = StratifiedKFold(n_splits=5)
# create a kfold column
temp_train_df["kfold"] = -1
# fill the kfold column
for f, (t_, v_) in enumerate(
kf.split(X=temp_train_df, y=temp_train_df["bins"].values)
):
temp_train_df.loc[v_, "kfold"] = f
# drop the bins column
temp_train_df = temp_train_df.drop("bins", axis=1)
final_dfs[target] = temp_train_df
return final_dfs
# read the training data with folds
train_df = pd.read_csv("/kaggle/input/amp-pd/train_clinical_data.csv")
train_prot_df = pd.read_csv("/kaggle/input/amp-pd/train_proteins.csv")
train_pep_df = pd.read_csv("/kaggle/input/amp-pd/train_peptides.csv")
train_df_dict = preprocess_train_df(train_df, train_prot_df, train_pep_df)
train_df_dict["updrs_1"].head()
train_df = train_df_dict["updrs_1"].drop(
columns=["patient_id", "visit_month", "updrs_1", "kfold"]
)
model = {}
target = ["updrs_1", "updrs_2", "updrs_3", "updrs_4"]
train = pd.read_csv("/kaggle/input/amp-pd/train_clinical_data.csv")
train = train.merge(train_df, how="left", on="visit_id")
train.head()
for u in target:
# Drop NAs
temp = train.dropna(subset=[u])
# Train data
X = temp["visit_month"]
y = temp[u]
if u == "updrs_1":
trained = Ridge(alpha=1.0)
trained.fit(X.values.reshape(-1, 1), y.values)
# Save model
model[u] = trained
if u == "updrs_2":
trained = Ridge(alpha=10)
trained.fit(X.values.reshape(-1, 1), y.values)
# Save model
model[u] = trained
if u == "updrs_3":
trained = Ridge(alpha=10)
trained.fit(X.values.reshape(-1, 1), y.values)
# Save model
model[u] = trained
if u == "updrs_4":
trained = Ridge(alpha=0.1)
trained.fit(X.values.reshape(-1, 1), y.values)
# Save model
model[u] = trained
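# Minimal added check (not in the original notebook): report the in-sample MAE of each
# visit_month-only Ridge model fitted above, using the mean_absolute_error import.
for u in target:
    temp = train.dropna(subset=[u])
    preds = model[u].predict(temp["visit_month"].values.reshape(-1, 1))
    print(u, "in-sample MAE:", round(mean_absolute_error(temp[u], preds), 3))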
def get_predictions(my_train, pro, model):
# Forecast
my_train = my_train.fillna(0)
for u in target:
# Here is where we will save the final results
my_train["result_" + str(u)] = 0
# Predict
X = my_train["visit_month"]
if u == "updrs_4":
my_train["result_" + str(u)] = 0
else:
my_train["result_" + str(u)] = np.ceil(
model[u].predict(X.values.reshape(-1, 1))
)
# Format for final submission
result = pd.DataFrame()
for m in [0, 6, 12, 24]:
for u in [1, 2, 3, 4]:
temp = my_train[["visit_id", "result_updrs_" + str(u)]]
temp["prediction_id"] = (
temp["visit_id"] + "_updrs_" + str(u) + "_plus_" + str(m) + "_months"
)
temp["rating"] = temp["result_updrs_" + str(u)]
temp = temp[["prediction_id", "rating"]]
result = result.append(temp)
result = result.drop_duplicates(subset=["prediction_id", "rating"])
return result
# Run once to check results
get_predictions(train, None, model)
env = amp_pd_peptide.make_env() # initialize the environment
iter_test = env.iter_test()
for test, test_peptides, test_proteins, sample_submission in iter_test:
result = get_predictions(test, test_proteins, model)
env.predict(result) # register your predictions
|
/fsx/loubna/kaggle_data/kaggle-code-data/data/0129/343/129343031.ipynb
| null | null |
[{"Id": 129343031, "ScriptId": 37605297, "ParentScriptVersionId": NaN, "ScriptLanguageId": 9, "AuthorUserId": 5543978, "CreationDate": "05/13/2023 01:20:34", "VersionNumber": 45.0, "Title": "Parkinsons Submission V1", "EvaluationDate": "05/13/2023", "IsChange": true, "TotalLines": 184.0, "LinesInsertedFromPrevious": 81.0, "LinesChangedFromPrevious": 0.0, "LinesUnchangedFromPrevious": 103.0, "LinesInsertedFromFork": 178.0, "LinesDeletedFromFork": 12.0, "LinesChangedFromFork": 0.0, "LinesUnchangedFromFork": 6.0, "TotalVotes": 0}]
| null | null | null | null |
import sys
sys.path.append("/kaggle/input/amp-pd")
import pandas as pd
import numpy as np
import pickle
import amp_pd_peptide
from sklearn.linear_model import LinearRegression, Ridge
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import StratifiedKFold
import warnings
warnings.filterwarnings("ignore")
def preprocess_train_df(train_clin_df, train_prot_df, train_pep_df):
"""
Takes in the train_clinical_data.csv, train_peptides.csv, train_proteins.csv as pandas dataframes
Combines the protein and peptide data names and the joins with the train clinical data
The dataframes are stratified kfold based on the target
The function creates one dataframe for each target (updrs_1, updrs_2, updrs_3, updrs_4) stored in the final_df dictionary
Returns a dictionary of the dataframes for each updrs target
"""
# drop the medication column
train_clin_df = train_clin_df.drop(columns=["upd23b_clinical_state_on_medication"])
# create a column with the UniProt and Peptide name combined
train_pep_df["peptide_uniprot"] = (
train_pep_df["Peptide"] + "_" + train_pep_df["UniProt"]
)
# create a table with the visit_id as the index and the proteins or peptides as the feature and the abundance as the values
train_prot_pivot = train_prot_df.pivot(
index="visit_id", values="NPX", columns="UniProt"
)
train_pep_pivot = train_pep_df.pivot(
index="visit_id", values="PeptideAbundance", columns="peptide_uniprot"
)
# combine the two tables on the visit_id
full_prot_train_df = train_prot_pivot.join(train_pep_pivot)
# fill nan with 0 for this first round
full_prot_train_df = full_prot_train_df.fillna(0)
full_train_df = train_clin_df.merge(
full_prot_train_df, how="inner", left_on="visit_id", right_on="visit_id"
)
full_train_df = full_train_df.sample(frac=1).reset_index(drop=True)
updrs = ["updrs_1", "updrs_2", "updrs_3", "updrs_4"]
final_dfs = dict()
for target in updrs:
to_remove = [updr for updr in updrs if updr != target]
temp_train_df = full_train_df.drop(to_remove, axis=1)
temp_train_df = temp_train_df.dropna()
# calculate the number of bins by Sturge's rule
num_bins = int(np.floor(1 + np.log2(len(full_train_df))))
temp_train_df.loc[:, "bins"] = pd.cut(
temp_train_df[target], bins=num_bins, labels=False
)
temp_train_df = temp_train_df.dropna().reset_index(drop=True)
# initiate the kfold class from sklearn
kf = StratifiedKFold(n_splits=5)
# create a kfold column
temp_train_df["kfold"] = -1
# fill the kfold column
for f, (t_, v_) in enumerate(
kf.split(X=temp_train_df, y=temp_train_df["bins"].values)
):
temp_train_df.loc[v_, "kfold"] = f
# drop the bins column
temp_train_df = temp_train_df.drop("bins", axis=1)
final_dfs[target] = temp_train_df
return final_dfs
# read the training data with folds
train_df = pd.read_csv("/kaggle/input/amp-pd/train_clinical_data.csv")
train_prot_df = pd.read_csv("/kaggle/input/amp-pd/train_proteins.csv")
train_pep_df = pd.read_csv("/kaggle/input/amp-pd/train_peptides.csv")
train_df_dict = preprocess_train_df(train_df, train_prot_df, train_pep_df)
train_df_dict["updrs_1"].head()
train_df = train_df_dict["updrs_1"].drop(
columns=["patient_id", "visit_month", "updrs_1", "kfold"]
)
model = {}
target = ["updrs_1", "updrs_2", "updrs_3", "updrs_4"]
train = pd.read_csv("/kaggle/input/amp-pd/train_clinical_data.csv")
train = train.merge(train_df, how="left", on="visit_id")
train.head()
for u in target:
# Drop NAs
temp = train.dropna(subset=[u])
# Train data
X = temp["visit_month"]
y = temp[u]
if u == "updrs_1":
trained = Ridge(alpha=1.0)
trained.fit(X.values.reshape(-1, 1), y.values)
# Save model
model[u] = trained
if u == "updrs_2":
trained = Ridge(alpha=10)
trained.fit(X.values.reshape(-1, 1), y.values)
# Save model
model[u] = trained
if u == "updrs_3":
trained = Ridge(alpha=10)
trained.fit(X.values.reshape(-1, 1), y.values)
# Save model
model[u] = trained
if u == "updrs_4":
trained = Ridge(alpha=0.1)
trained.fit(X.values.reshape(-1, 1), y.values)
# Save model
model[u] = trained
def get_predictions(my_train, pro, model):
# Forecast
my_train = my_train.fillna(0)
for u in target:
# Here is where we will save the final results
my_train["result_" + str(u)] = 0
# Predict
X = my_train["visit_month"]
if u == "updrs_4":
my_train["result_" + str(u)] = 0
else:
my_train["result_" + str(u)] = np.ceil(
model[u].predict(X.values.reshape(-1, 1))
)
# Format for final submission
result = pd.DataFrame()
for m in [0, 6, 12, 24]:
for u in [1, 2, 3, 4]:
temp = my_train[["visit_id", "result_updrs_" + str(u)]]
temp["prediction_id"] = (
temp["visit_id"] + "_updrs_" + str(u) + "_plus_" + str(m) + "_months"
)
temp["rating"] = temp["result_updrs_" + str(u)]
temp = temp[["prediction_id", "rating"]]
result = result.append(temp)
result = result.drop_duplicates(subset=["prediction_id", "rating"])
return result
# Run once to check results
get_predictions(train, None, model)
env = amp_pd_peptide.make_env() # initialize the environment
iter_test = env.iter_test()
for test, test_peptides, test_proteins, sample_submission in iter_test:
result = get_predictions(test, test_proteins, model)
env.predict(result) # register your predictions
| false | 0 | 1,895 | 0 | 1,895 | 1,895 |
||
129343317
|
<jupyter_start><jupyter_text>Blood Cell Images
### Context
The diagnosis of blood-based diseases often involves identifying and characterizing patient blood samples. Automated methods to detect and classify blood cell subtypes have important medical applications.
### Content
This dataset contains 12,500 augmented images of blood cells (JPEG) with accompanying cell type labels (CSV). There are approximately 3,000 images for each of 4 different cell types grouped into 4 different folders (according to cell type). The cell types are Eosinophil, Lymphocyte, Monocyte, and Neutrophil. This dataset is accompanied by an additional dataset containing the original 410 images (pre-augmentation) as well as two additional subtype labels (WBC vs WBC) and also bounding boxes for each cell in each of these 410 images (JPEG + XML metadata). More specifically, the folder 'dataset-master' contains 410 images of blood cells with subtype labels and bounding boxes (JPEG + XML), while the folder 'dataset2-master' contains 2,500 augmented images as well as 4 additional subtype labels (JPEG + CSV). There are approximately 3,000 augmented images for each class of the 4 classes as compared to 88, 33, 21, and 207 images of each in folder 'dataset-master'.
Kaggle dataset identifier: blood-cells
<jupyter_script>import numpy as np # linear algebra
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)
# Input data files are available in the read-only "../input/" directory
# For example, running this (by clicking run or pressing Shift+Enter) will list all files under the input directory
import os
for dirname, _, filenames in os.walk("/kaggle/input"):
for filename in filenames:
print(os.path.join(dirname, filename))
# You can write up to 20GB to the current directory (/kaggle/working/) that gets preserved as output when you create a version using "Save & Run All"
# You can also write temporary files to /kaggle/temp/, but they won't be saved outside of the current session
import tensorflow as tf
from tensorflow import keras
import os
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import cv2
train_path = "/kaggle/input/blood-cells/dataset2-master/dataset2-master/images/TRAIN/"
test_path = "/kaggle/input/blood-cells/dataset2-master/dataset2-master/images/TEST/"
valid_path = (
"/kaggle/input/blood-cells/dataset2-master/dataset2-master/images/TEST_SIMPLE/"
)
from keras.preprocessing.image import ImageDataGenerator
train_batches = ImageDataGenerator(
preprocessing_function=tf.keras.applications.mobilenet.preprocess_input
).flow_from_directory(
directory=train_path, target_size=(224, 224), batch_size=32, shuffle=True
)
test_batches = ImageDataGenerator(
preprocessing_function=tf.keras.applications.mobilenet.preprocess_input
).flow_from_directory(
directory=test_path, target_size=(224, 224), batch_size=32, shuffle=False
)
valid_batches = ImageDataGenerator(
preprocessing_function=tf.keras.applications.mobilenet.preprocess_input
).flow_from_directory(
directory=valid_path, target_size=(224, 224), batch_size=32, shuffle=True
)
mobile = tf.keras.applications.MobileNet()
mobile.summary()
model = keras.Sequential()
for layer in mobile.layers[:-1]:
model.add(layer)
for layer in model.layers:
layer.trainable = False
model.add(keras.layers.Dense(4, activation="softmax"))
model.summary()
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
model.fit(
train_batches,
validation_data=valid_batches,
epochs=15,
)
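# Hedged follow-up sketch (not in the original notebook): evaluate the frozen-MobileNet
# classifier on the held-out TEST generator defined above; metrics=["accuracy"] means
# evaluate() returns (loss, accuracy).
test_loss, test_acc = model.evaluate(test_batches)
print(f"test loss: {test_loss:.4f} | test accuracy: {test_acc:.4f}")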
|
/fsx/loubna/kaggle_data/kaggle-code-data/data/0129/343/129343317.ipynb
|
blood-cells
|
paultimothymooney
|
[{"Id": 129343317, "ScriptId": 38450938, "ParentScriptVersionId": NaN, "ScriptLanguageId": 9, "AuthorUserId": 14857973, "CreationDate": "05/13/2023 01:25:33", "VersionNumber": 1.0, "Title": "notebook6f85f41e51", "EvaluationDate": "05/13/2023", "IsChange": true, "TotalLines": 76.0, "LinesInsertedFromPrevious": 76.0, "LinesChangedFromPrevious": 0.0, "LinesUnchangedFromPrevious": 0.0, "LinesInsertedFromFork": NaN, "LinesDeletedFromFork": NaN, "LinesChangedFromFork": NaN, "LinesUnchangedFromFork": NaN, "TotalVotes": 0}]
|
[{"Id": 185299130, "KernelVersionId": 129343317, "SourceDatasetVersionId": 29380}]
|
[{"Id": 29380, "DatasetId": 9232, "DatasourceVersionId": 29448, "CreatorUserId": 1314380, "LicenseName": "Other (specified in description)", "CreationDate": "04/21/2018 21:06:13", "VersionNumber": 6.0, "Title": "Blood Cell Images", "Slug": "blood-cells", "Subtitle": "12,500 images: 4 different cell types", "Description": "### Context\n\nThe diagnosis of blood-based diseases often involves identifying and characterizing patient blood samples. Automated methods to detect and classify blood cell subtypes have important medical applications.\n\n### Content\n\nThis dataset contains 12,500 augmented images of blood cells (JPEG) with accompanying cell type labels (CSV). There are approximately 3,000 images for each of 4 different cell types grouped into 4 different folders (according to cell type). The cell types are Eosinophil, Lymphocyte, Monocyte, and Neutrophil. This dataset is accompanied by an additional dataset containing the original 410 images (pre-augmentation) as well as two additional subtype labels (WBC vs WBC) and also bounding boxes for each cell in each of these 410 images (JPEG + XML metadata). More specifically, the folder 'dataset-master' contains 410 images of blood cells with subtype labels and bounding boxes (JPEG + XML), while the folder 'dataset2-master' contains 2,500 augmented images as well as 4 additional subtype labels (JPEG + CSV). There are approximately 3,000 augmented images for each class of the 4 classes as compared to 88, 33, 21, and 207 images of each in folder 'dataset-master'.\n\n### Acknowledgements\n\nhttps://github.com/Shenggan/BCCD_Dataset\nMIT License\n\n### Inspiration\n\nThe diagnosis of blood-based diseases often involves identifying and characterizing patient blood samples.\nAutomated methods to detect and classify blood cell subtypes have important medical applications.", "VersionNotes": "moved labels to a more obvious location", "TotalCompressedBytes": 118062345.0, "TotalUncompressedBytes": 118062345.0}]
|
[{"Id": 9232, "CreatorUserId": 1314380, "OwnerUserId": 1314380.0, "OwnerOrganizationId": NaN, "CurrentDatasetVersionId": 29380.0, "CurrentDatasourceVersionId": 29448.0, "ForumId": 16499, "Type": 2, "CreationDate": "01/10/2018 16:35:45", "LastActivityDate": "02/05/2018", "TotalViews": 296194, "TotalDownloads": 35574, "TotalVotes": 793, "TotalKernels": 137}]
|
[{"Id": 1314380, "UserName": "paultimothymooney", "DisplayName": "Paul Mooney", "RegisterDate": "10/05/2017", "PerformanceTier": 5}]
|
import numpy as np # linear algebra
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)
# Input data files are available in the read-only "../input/" directory
# For example, running this (by clicking run or pressing Shift+Enter) will list all files under the input directory
import os
for dirname, _, filenames in os.walk("/kaggle/input"):
for filename in filenames:
print(os.path.join(dirname, filename))
# You can write up to 20GB to the current directory (/kaggle/working/) that gets preserved as output when you create a version using "Save & Run All"
# You can also write temporary files to /kaggle/temp/, but they won't be saved outside of the current session
import tensorflow as tf
from tensorflow import keras
import os
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import cv2
train_path = "/kaggle/input/blood-cells/dataset2-master/dataset2-master/images/TRAIN/"
test_path = "/kaggle/input/blood-cells/dataset2-master/dataset2-master/images/TEST/"
valid_path = (
"/kaggle/input/blood-cells/dataset2-master/dataset2-master/images/TEST_SIMPLE/"
)
from keras.preprocessing.image import ImageDataGenerator
train_batches = ImageDataGenerator(
preprocessing_function=tf.keras.applications.mobilenet.preprocess_input
).flow_from_directory(
directory=train_path, target_size=(224, 224), batch_size=32, shuffle=True
)
test_batches = ImageDataGenerator(
preprocessing_function=tf.keras.applications.mobilenet.preprocess_input
).flow_from_directory(
directory=test_path, target_size=(224, 224), batch_size=32, shuffle=False
)
valid_batches = ImageDataGenerator(
preprocessing_function=tf.keras.applications.mobilenet.preprocess_input
).flow_from_directory(
directory=valid_path, target_size=(224, 224), batch_size=32, shuffle=True
)
mobile = tf.keras.applications.MobileNet()
mobile.summary()
model = keras.Sequential()
for layer in mobile.layers[:-1]:
model.add(layer)
for layer in model.layers:
layer.trainable = False
model.add(keras.layers.Dense(4, activation="softmax"))
model.summary()
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
model.fit(
train_batches,
validation_data=valid_batches,
epochs=15,
)
| false | 0 | 646 | 0 | 974 | 646 |
||
129233870
|
# # Wild Blueberry Yield Prediction by comparing multiple regression models
# * **Problem Statement** : Wild Blueberry Yield Prediction
# * **Solution/aim** : First comparing different SK-Learn regression models and then making a prediction on the test-data
# standard libraries
import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
import warnings
warnings.simplefilter("ignore")
import gc
from tabulate import tabulate
import pickle
# sklearn libraries
from sklearn.preprocessing import (
StandardScaler,
MinMaxScaler,
MaxAbsScaler,
RobustScaler,
Normalizer,
)
from sklearn.model_selection import train_test_split, GridSearchCV
from sklearn.metrics import (
mean_squared_error,
mean_absolute_error,
r2_score,
make_scorer,
)
from sklearn.pipeline import Pipeline
# sklearn Regression Models
from sklearn.neighbors import KNeighborsRegressor
from sklearn.svm import LinearSVR, NuSVR, SVR
from sklearn.tree import DecisionTreeRegressor, ExtraTreeRegressor
from sklearn.ensemble import (
RandomForestRegressor,
AdaBoostRegressor,
GradientBoostingRegressor,
)
from sklearn.linear_model import LinearRegression, Lasso, Ridge, ElasticNet
# ## Data Load & Preparation
#
# load
TRAIN_PATH = "/kaggle/input/playground-series-s3e14/train.csv"
TEST_PATH = "/kaggle/input/playground-series-s3e14/test.csv"
df_train = pd.read_csv(TRAIN_PATH)
df_test = pd.read_csv(TEST_PATH)
# view
print(
f"Training sample size: {df_train.shape[0]} | Testing sample size: {df_test.shape[0]}"
)
# feature engineering
x, y = df_train.drop("yield", axis=1), df_train[["yield"]]
# split the data
X_train, X_val, y_train, y_val = train_test_split(
x, y, test_size=0.2, random_state=42, stratify=y
)
# view
print(f"training shape: {X_train.shape} | validation shape: {X_val.shape}")
# list of all the regression models
regressors = [
KNeighborsRegressor(3),
LinearSVR(C=0.5, random_state=42),
NuSVR(C=0.5, kernel="linear", verbose=False),
SVR(kernel="rbf", C=0.5, verbose=False),
DecisionTreeRegressor(criterion="absolute_error"),
ExtraTreeRegressor(max_features=None),
RandomForestRegressor(),
AdaBoostRegressor(),
GradientBoostingRegressor(),
LinearRegression(),
Lasso(),
Ridge(),
ElasticNet(),
]
# logs for visualisation
log_cols = ["Regressor", "Score", "MSE", "MAE", "Scaler"]
log = pd.DataFrame(columns=log_cols)
# helper function for iterating through models with different scaler functions
def model_compare(scaler):
"""
purpose: iterating through various regression
models with specific scaler
"""
global log
scaler_name = scaler.__class__.__name__
table = []
print("#" * 100)
print(f"Scaler Function: {scaler_name}\n")
for reg in regressors:
reg_name = reg.__class__.__name__
# create pipeline
pipe = Pipeline([("scaler", scaler), (reg_name, reg)])
pipe.fit(X_train, y_train)
score = pipe.score(X_val, y_val)
# print('****Results****')
train_predictions = pipe.predict(X_val)
mse = mean_squared_error(y_val, train_predictions)
mae = mean_absolute_error(y_val, train_predictions)
mse = round(mse, 3)
mae = round(mae, 3)
score = round(score, 3)
# print("{}: Score: {:.3f} | MSE: {:.3f} | MAE: {:.3f}".format(reg_name, score, mse, mae))
log_row = [reg_name, score, mse, mae, scaler_name]
log_entry = pd.DataFrame([log_row], columns=log_cols)
log = log.append(log_entry)
log_row.pop(4)
table.append(log_row)
# print(f"{reg_name} done...")
print(
tabulate(
table,
headers=["Regressor", "Score", "MSE", "MAE"],
tablefmt="fancy_grid",
numalign="center",
floatfmt=".3f",
)
)
print("#" * 100)
# helper function for plotting comparison chart
def comparison_chart(scaler):
"""
purpose: draw a nice bar plot for visualising the
results of different regression models
"""
# define subplot
fig, ax = plt.subplots(nrows=1, ncols=2, sharey=True, figsize=(20, 8))
# filter on the scaler
scaler_name = scaler.__class__.__name__
temp = log[log["Scaler"] == scaler_name]
# plot-1 (mse)
plt.subplot(1, 2, 1)
sns.set_color_codes("muted")
se = sns.barplot(
x="MSE",
y="Regressor",
data=temp,
color="mistyrose",
order=temp.sort_values("MSE").Regressor,
)
se.set(ylabel=None)
se.bar_label(se.containers[0], padding=-60, fmt="%.2f")
plt.xlabel("MSE")
plt.title("Regressor MSE")
# plot-2 (mae)
plt.subplot(1, 2, 2)
sns.set_color_codes("muted")
ae = sns.barplot(
x="MAE",
y="Regressor",
data=temp,
color="salmon",
order=temp.sort_values("MAE").Regressor,
)
ae.set(ylabel=None)
ae.bar_label(ae.containers[0], padding=-35, fmt="%.2f")
plt.xlabel("MAE")
plt.title("Regressor MAE")
fig.supylabel("Regressors")
plt.show()
# ## Model Comparison with Standard Scaler
# instantiate scaler
ss = StandardScaler()
# regressor model comparison
model_compare(ss)
# plot the chart
comparison_chart(ss)
# ## Model Comparison with Min Max Scaler
# instantiate scaler
mms = MinMaxScaler()
# regressor model comparison
model_compare(mms)
# plot the chart
comparison_chart(mms)
# ## Model Comparison with Max Abs Scaler
# instantiate scaler
mas = MaxAbsScaler()
# regressor model comparison
model_compare(mas)
# plot the chart
comparison_chart(mas)
# ## Model Comparison with Robust Scaler
# instantiate scaler
rs = RobustScaler()
# regressor model comparison
model_compare(rs)
# plot the chart
comparison_chart(rs)
# ## Model Comparison with Normalizer
# instantiate scaler
norm = Normalizer()
# regressor model comparison
model_compare(norm)
# plot the chart
comparison_chart(norm)
# **Looking at the values and the charts, the following are the observations:**
# 1. **The different scaler functions don't make much of a difference for these models, so any one can be chosen**
# 2. **The top 3 regressor algorithms/models for this dataset are: Gradient Boosting, Random Forest, and Linear (in that order)**
# **Based on these observations, my choices are listed below (a small pipeline sketch follows the list):**
# 1. **Scaler = Standard Scaler**
# 2. **Model/Algorithm = Gradient Boosting Regressor**
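# A minimal sketch of the chosen combination (added for illustration, not part of the
# original flow): StandardScaler + GradientBoostingRegressor in a single Pipeline, so
# the scaler is fitted only on the training features.
chosen_pipe = Pipeline(
    [("scaler", StandardScaler()), ("gbr", GradientBoostingRegressor())]
)
chosen_pipe.fit(X_train, y_train.values.ravel())
print(
    "Pipeline validation MAE:",
    round(mean_absolute_error(y_val, chosen_pipe.predict(X_val)), 3),
)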
# ## Build & Tune Model
# define scaler functions (one for the features, a separate one for the target)
ss = StandardScaler()
ss_y = StandardScaler()
# scaling the data
X_train_ss = ss.fit_transform(X_train)
y_train_ss = ss_y.fit_transform(y_train)
# define the model
gbr = GradientBoostingRegressor()
# define regressor params
params = [{"n_estimators": [100, 500], "max_depth": [3, 6]}]
# define the scoring for the Grid Search (greater_is_better=False so GridSearchCV minimises MAE)
scoring = make_scorer(mean_absolute_error, greater_is_better=False)
# search for the best params
gsc_reg = GridSearchCV(gbr, params, cv=10, scoring=scoring)
gsc_reg.fit(X_train_ss, y_train_ss)
print("#" * 50)
print("**** Best Params ****")
print(gsc_reg.best_params_)
print("#" * 50)
# build model based on the best params
gbr_tune = GradientBoostingRegressor(max_depth=6, n_estimators=500)
# train the model
gbr_tune.fit(X_train_ss, y_train_ss)
# file name to save the trained model
filename = "gradientboost_regressor_tuned.pickle"
# save model
pickle.dump(gbr_tune, open(filename, "wb"))
# ## Prediction & Submission
# scale the test-data with the scaler fitted on the training features
test_ss = ss.transform(df_test)
# prediction on TEST data, inverse-transformed back to the original yield scale
pred = ss_y.inverse_transform(gbr_tune.predict(test_ss).reshape(-1, 1)).ravel()
# submission
submission = pd.DataFrame()
submission["yield"] = pred
submission.index += 15289
# save the submission
submission.to_csv("submission.csv", index=True, header=True, index_label="id")
|
/fsx/loubna/kaggle_data/kaggle-code-data/data/0129/233/129233870.ipynb
| null | null |
[{"Id": 129233870, "ScriptId": 38379659, "ParentScriptVersionId": NaN, "ScriptLanguageId": 9, "AuthorUserId": 3828601, "CreationDate": "05/12/2023 04:06:29", "VersionNumber": 1.0, "Title": "Yield Prediction - Compare Multiple Models", "EvaluationDate": "05/12/2023", "IsChange": true, "TotalLines": 274.0, "LinesInsertedFromPrevious": 274.0, "LinesChangedFromPrevious": 0.0, "LinesUnchangedFromPrevious": 0.0, "LinesInsertedFromFork": NaN, "LinesDeletedFromFork": NaN, "LinesChangedFromFork": NaN, "LinesUnchangedFromFork": NaN, "TotalVotes": 7}]
| null | null | null | null |
# # Wild Blueberry Yield Prediction by comparing multiple regression models
# * **Problem Statement** : Wild Blueberry Yield Prediction
# * **Solution/aim** : First comparing different SK-Learn regression models and then making a prediction on the test-data
# standard libraries
import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
import warnings
warnings.simplefilter("ignore")
import gc
from tabulate import tabulate
import pickle
# sklearn libraries
from sklearn.preprocessing import (
StandardScaler,
MinMaxScaler,
MaxAbsScaler,
RobustScaler,
Normalizer,
)
from sklearn.model_selection import train_test_split, GridSearchCV
from sklearn.metrics import (
mean_squared_error,
mean_absolute_error,
r2_score,
make_scorer,
)
from sklearn.pipeline import Pipeline
# sklearn Regression Models
from sklearn.neighbors import KNeighborsRegressor
from sklearn.svm import LinearSVR, NuSVR, SVR
from sklearn.tree import DecisionTreeRegressor, ExtraTreeRegressor
from sklearn.ensemble import (
RandomForestRegressor,
AdaBoostRegressor,
GradientBoostingRegressor,
)
from sklearn.linear_model import LinearRegression, Lasso, Ridge, ElasticNet
# ## Data Load & Preparation
#
# load
TRAIN_PATH = "/kaggle/input/playground-series-s3e14/train.csv"
TEST_PATH = "/kaggle/input/playground-series-s3e14/test.csv"
df_train = pd.read_csv(TRAIN_PATH)
df_test = pd.read_csv(TEST_PATH)
# view
print(
f"Training sample size: {df_train.shape[0]} | Testing sample size: {df_test.shape[0]}"
)
# feature engineering
x, y = df_train.drop("yield", axis=1), df_train[["yield"]]
# split the data
X_train, X_val, y_train, y_val = train_test_split(
x, y, test_size=0.2, random_state=42, stratify=y
)
# view
print(f"training shape: {X_train.shape} | validation shape: {X_val.shape}")
# list of all the regression models
regressors = [
KNeighborsRegressor(3),
LinearSVR(C=0.5, random_state=42),
NuSVR(C=0.5, kernel="linear", verbose=False),
SVR(kernel="rbf", C=0.5, verbose=False),
DecisionTreeRegressor(criterion="absolute_error"),
ExtraTreeRegressor(max_features=None),
RandomForestRegressor(),
AdaBoostRegressor(),
GradientBoostingRegressor(),
LinearRegression(),
Lasso(),
Ridge(),
ElasticNet(),
]
# logs for visualisation
log_cols = ["Regressor", "Score", "MSE", "MAE", "Scaler"]
log = pd.DataFrame(columns=log_cols)
# helper function for iterating through models with different scaler functions
def model_compare(scaler):
"""
purpose: iterating through various regression
models with specific scaler
"""
global log
scaler_name = scaler.__class__.__name__
table = []
print("#" * 100)
print(f"Scaler Function: {scaler_name}\n")
for reg in regressors:
reg_name = reg.__class__.__name__
# create pipeline
pipe = Pipeline([("scaler", scaler), (reg_name, reg)])
pipe.fit(X_train, y_train)
score = pipe.score(X_val, y_val)
# print('****Results****')
train_predictions = pipe.predict(X_val)
mse = mean_squared_error(y_val, train_predictions)
mae = mean_absolute_error(y_val, train_predictions)
mse = round(mse, 3)
mae = round(mae, 3)
score = round(score, 3)
# print("{}: Score: {:.3f} | MSE: {:.3f} | MAE: {:.3f}".format(reg_name, score, mse, mae))
log_row = [reg_name, score, mse, mae, scaler_name]
log_entry = pd.DataFrame([log_row], columns=log_cols)
log = log.append(log_entry)
log_row.pop(4)
table.append(log_row)
# print(f"{reg_name} done...")
print(
tabulate(
table,
headers=["Regressor", "Score", "MSE", "MAE"],
tablefmt="fancy_grid",
numalign="center",
floatfmt=".3f",
)
)
print("#" * 100)
# helper function for plotting comparison chart
def comparison_chart(scaler):
"""
purpose: draw a nice bar plot for visualising the
results of different regression models
"""
# define subplot
fig, ax = plt.subplots(nrows=1, ncols=2, sharey=True, figsize=(20, 8))
# filter on the scaler
scaler_name = scaler.__class__.__name__
temp = log[log["Scaler"] == scaler_name]
# plot-1 (mse)
plt.subplot(1, 2, 1)
sns.set_color_codes("muted")
se = sns.barplot(
x="MSE",
y="Regressor",
data=temp,
color="mistyrose",
order=temp.sort_values("MSE").Regressor,
)
se.set(ylabel=None)
se.bar_label(se.containers[0], padding=-60, fmt="%.2f")
plt.xlabel("MSE")
plt.title("Regressor MSE")
# plot-2 (mae)
plt.subplot(1, 2, 2)
sns.set_color_codes("muted")
ae = sns.barplot(
x="MAE",
y="Regressor",
data=temp,
color="salmon",
order=temp.sort_values("MAE").Regressor,
)
ae.set(ylabel=None)
ae.bar_label(ae.containers[0], padding=-35, fmt="%.2f")
plt.xlabel("MAE")
plt.title("Regressor MAE")
fig.supylabel("Regressors")
plt.show()
# ## Model Comparison with Standard Scaler
# instantiate scaler
ss = StandardScaler()
# regressor model comparison
model_compare(ss)
# plot the chart
comparison_chart(ss)
# ## Model Comparison with Min Max Scaler
# instantiate scaler
mms = MinMaxScaler()
# regressor model comparison
model_compare(mms)
# plot the chart
comparison_chart(mms)
# ## Model Comparison with Max Abs Scaler
# instantiate scaler
mas = MaxAbsScaler()
# regressor model comparison
model_compare(mas)
# plot the chart
comparison_chart(mas)
# ## Model Comparison with Robust Scaler
# instantiate scaler
rs = RobustScaler()
# regressor model comparison
model_compare(rs)
# plot the chart
comparison_chart(rs)
# ## Model Comparison with Normalizer
# instantiate scaler
norm = Normalizer()
# regressor model comparison
model_compare(norm)
# plot the chart
comparison_chart(norm)
# **Looking at the values and the charts, the following are the observations:**
# 1. **The different scaler functions don't make much of a difference for these models, so any one can be chosen**
# 2. **The top 3 regressor algorithms/models for this dataset are: Gradient Boosting, Random Forest, and Linear (in that order)**
# **Based on these observations, my choices are:**
# 1. **Scaler = Standard Scaler**
# 2. **Model/Algorithm = Gradient Boosting Regressor**
# ## Build & Tune Model
# define scaler functions (one for the features, a separate one for the target)
ss = StandardScaler()
ss_y = StandardScaler()
# scaling the data
X_train_ss = ss.fit_transform(X_train)
y_train_ss = ss_y.fit_transform(y_train)
# define the model
gbr = GradientBoostingRegressor()
# define regressor params
params = [{"n_estimators": [100, 500], "max_depth": [3, 6]}]
# define the scoring for the Grid Search (greater_is_better=False so GridSearchCV minimises MAE)
scoring = make_scorer(mean_absolute_error, greater_is_better=False)
# search for the best params
gsc_reg = GridSearchCV(gbr, params, cv=10, scoring=scoring)
gsc_reg.fit(X_train_ss, y_train_ss)
print("#" * 50)
print("**** Best Params ****")
print(gsc_reg.best_params_)
print("#" * 50)
# build model based on the best params
gbr_tune = GradientBoostingRegressor(max_depth=6, n_estimators=500)
# train the model
gbr_tune.fit(X_train_ss, y_train_ss)
# file name to save the trained model
filename = "gradientboost_regressor_tuned.pickle"
# save model
pickle.dump(gbr_tune, open(filename, "wb"))
# ## Prediction & Submission
# scale the test-data with the scaler fitted on the training features
test_ss = ss.transform(df_test)
# prediction on TEST data, inverse-transformed back to the original yield scale
pred = ss_y.inverse_transform(gbr_tune.predict(test_ss).reshape(-1, 1)).ravel()
# submission
submission = pd.DataFrame()
submission["yield"] = pred
submission.index += 15289
# save the submission
submission.to_csv("submission.csv", index=True, header=True, index_label="id")
| false | 0 | 2,314 | 7 | 2,314 | 2,314 |
||
129233619
|
import numpy as np # linear algebra
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)
# Input data files are available in the read-only "../input/" directory
# For example, running this (by clicking run or pressing Shift+Enter) will list all files under the input directory
import os
for dirname, _, filenames in os.walk("/kaggle/input"):
for filename in filenames:
print(os.path.join(dirname, filename))
# You can write up to 20GB to the current directory (/kaggle/working/) that gets preserved as output when you create a version using "Save & Run All"
# You can also write temporary files to /kaggle/temp/, but they won't be saved outside of the current session
import os
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
import zipfile
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.preprocessing import LabelEncoder
from sklearn.linear_model import LinearRegression
from sklearn.linear_model import Lasso, Ridge, ElasticNet
from sklearn.metrics import mean_absolute_error
from sklearn.metrics import mean_absolute_percentage_error
from sklearn.metrics import mean_squared_error
from sklearn.metrics import r2_score
from sklearn.metrics import mean_squared_log_error
from sklearn.ensemble import RandomForestRegressor
z = zipfile.ZipFile("/kaggle/input/sberbank-russian-housing-market/train.csv.zip")
z.extractall()
treino = pd.read_csv("/kaggle/working/train.csv")
treino.info()
treino.head()
# Separate the columns into numeric and object (categorical) types
num_cols = treino.select_dtypes(include=["number"]).columns
cat_cols = treino.select_dtypes(include=["object"]).columns
print(num_cols)
print(cat_cols)
treino[num_cols].describe()
treino[cat_cols].describe()
# check which columns have null values
for x in num_cols:
if treino[x].isna().mean() > 0:
print(x, " \t \t", treino[x].isna().mean() * 100)
print("\n\n")
for x in cat_cols:
if treino[x].isna().mean() > 0:
print(x, " \t \t", treino[x].isna().mean() * 100)
# check which columns are more than 30% null (and drop them)
treino2 = treino
for x in num_cols:
if treino2[x].isna().mean() > 0.3:
print(x, " \t \t", treino2[x].isna().mean() * 100)
treino2 = treino2.drop(x, axis=1)
treino2.head()
num_cols2 = treino2.select_dtypes(include=["number"]).columns
for x in range(len(num_cols2)):
treino2[num_cols2[x]].fillna(treino2[num_cols2[x]].mean(), inplace=True)
for x in num_cols2:
if treino2[x].isna().mean() > 0:
print(x, " \t \t", treino2[x].isna().mean() * 100)
# Fill the null values with the column mean so that no null values remain
treino2.info()
for x in cat_cols:
treino2[x] = LabelEncoder().fit_transform(treino2[x].astype(str))
treino2[x] = treino2[x] * 1
treino2[cat_cols].head()
# Encodes the categorical values as indexed numbers
treino2[(treino2["floor"]) == 33]
# inspect the outlier
treino2.drop(treino2.index[7457], inplace=True)
# remove the outlier
treino2.info()
treino2.head()
treino3 = treino2[cat_cols]
x = treino3
y = np.log(treino2.price_doc)
# After some tests I concluded it was safer to run the regression using the categorical columns
# The target is log-transformed to normalize it before fitting the regression
treino3.info()
x_train, x_test, y_train, y_test = train_test_split(
x, y, test_size=0.5, random_state=42
)
scaler = StandardScaler()
scaler.fit(x_train)
x_train = scaler.transform(x_train)
x_test = scaler.transform(x_test)
# Make the prediction with a regression model
modelo = ElasticNet(alpha=100)
modelo.fit(x_train, y_train)
y_pred = modelo.predict(x_test)
rmsle = mean_squared_log_error(y_test, y_pred) ** 0.5
print("RMSLE:", rmsle)
# Check the RMSLE
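# Added for illustration (not in the original notebook): since the target was
# log-transformed above, also report the MAE back on the original price scale via np.exp.
print("MAE (original price scale):", mean_absolute_error(np.exp(y_test), np.exp(y_pred)))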
# Clean up the test dataset
z = zipfile.ZipFile("/kaggle/input/sberbank-russian-housing-market/test.csv.zip")
z.extractall()
teste = pd.read_csv("/kaggle/working/test.csv")
num_cals2 = teste.select_dtypes(include=["number"]).columns
cat_cols2 = teste.select_dtypes(include=["object"]).columns
num_cals2 = teste.select_dtypes(include=["number"]).columns
for x in range(len(num_cals2)):
teste[num_cals2[x]].fillna(teste[num_cals2[x]].mean(), inplace=True)
for x in cat_cols:
teste[x] = LabelEncoder().fit_transform(teste[x].astype(str))
teste[x] = teste[x] * 1
y_pred = modelo.predict(scaler.transform(teste[cat_cols2]))
y_pred = np.exp(y_pred)
# With the data ready, convert back to the real values since the predictions are in log scale.
z = zipfile.ZipFile(
"/kaggle/input/sberbank-russian-housing-market/sample_submission.csv.zip"
)
z.extractall()
enviar = pd.read_csv("/kaggle/working/sample_submission.csv")
enviar["price_doc"] = y_pred
enviar.to_csv("/kaggle/working/submission.csv", index=False)
enviar.head()
# create the submission file
|
/fsx/loubna/kaggle_data/kaggle-code-data/data/0129/233/129233619.ipynb
| null | null |
[{"Id": 129233619, "ScriptId": 38281692, "ParentScriptVersionId": NaN, "ScriptLanguageId": 9, "AuthorUserId": 14612488, "CreationDate": "05/12/2023 04:02:22", "VersionNumber": 5.0, "Title": "ac2 bora", "EvaluationDate": "05/12/2023", "IsChange": true, "TotalLines": 159.0, "LinesInsertedFromPrevious": 97.0, "LinesChangedFromPrevious": 0.0, "LinesUnchangedFromPrevious": 62.0, "LinesInsertedFromFork": 144.0, "LinesDeletedFromFork": 158.0, "LinesChangedFromFork": 0.0, "LinesUnchangedFromFork": 15.0, "TotalVotes": 0}]
| null | null | null | null |
import numpy as np # linear algebra
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)
# Input data files are available in the read-only "../input/" directory
# For example, running this (by clicking run or pressing Shift+Enter) will list all files under the input directory
import os
for dirname, _, filenames in os.walk("/kaggle/input"):
for filename in filenames:
print(os.path.join(dirname, filename))
# You can write up to 20GB to the current directory (/kaggle/working/) that gets preserved as output when you create a version using "Save & Run All"
# You can also write temporary files to /kaggle/temp/, but they won't be saved outside of the current session
import os
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
import zipfile
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.preprocessing import LabelEncoder
from sklearn.linear_model import LinearRegression
from sklearn.linear_model import Lasso, Ridge, ElasticNet
from sklearn.metrics import mean_absolute_error
from sklearn.metrics import mean_absolute_percentage_error
from sklearn.metrics import mean_squared_error
from sklearn.metrics import r2_score
from sklearn.metrics import mean_squared_log_error
from sklearn.ensemble import RandomForestRegressor
z = zipfile.ZipFile("/kaggle/input/sberbank-russian-housing-market/train.csv.zip")
z.extractall()
treino = pd.read_csv("/kaggle/working/train.csv")
treino.info()
treino.head()
# Separate the columns into numeric and object (categorical) types
num_cols = treino.select_dtypes(include=["number"]).columns
cat_cols = treino.select_dtypes(include=["object"]).columns
print(num_cols)
print(cat_cols)
treino[num_cols].describe()
treino[cat_cols].describe()
# check which columns have null values
for x in num_cols:
if treino[x].isna().mean() > 0:
print(x, " \t \t", treino[x].isna().mean() * 100)
print("\n\n")
for x in cat_cols:
if treino[x].isna().mean() > 0:
print(x, " \t \t", treino[x].isna().mean() * 100)
# check which columns are more than 30% null (and drop them)
treino2 = treino
for x in num_cols:
if treino2[x].isna().mean() > 0.3:
print(x, " \t \t", treino2[x].isna().mean() * 100)
treino2 = treino2.drop(x, axis=1)
treino2.head()
num_cols2 = treino2.select_dtypes(include=["number"]).columns
for x in range(len(num_cols2)):
treino2[num_cols2[x]].fillna(treino2[num_cols2[x]].mean(), inplace=True)
for x in num_cols2:
if treino2[x].isna().mean() > 0:
print(x, " \t \t", treino2[x].isna().mean() * 100)
# Fill the null values with the column mean so that no null values remain
treino2.info()
for x in cat_cols:
treino2[x] = LabelEncoder().fit_transform(treino2[x].astype(str))
treino2[x] = treino2[x] * 1
treino2[cat_cols].head()
# Encodes the categorical values as indexed numbers
treino2[(treino2["floor"]) == 33]
# inspect the outlier
treino2.drop(treino2.index[7457], inplace=True)
# remove the outlier
treino2.info()
treino2.head()
treino3 = treino2[cat_cols]
x = treino3
y = np.log(treino2.price_doc)
# After some tests I concluded it was safer to run the regression using the categorical columns
# The target is log-transformed to normalize it before fitting the regression
treino3.info()
x_train, x_test, y_train, y_test = train_test_split(
x, y, test_size=0.5, random_state=42
)
scaler = StandardScaler()
scaler.fit(x_train)
x_train = scaler.transform(x_train)
x_test = scaler.transform(x_test)
# Make the prediction with a regression model
modelo = ElasticNet(alpha=100)
modelo.fit(x_train, y_train)
y_pred = modelo.predict(x_test)
rmsle = mean_squared_log_error(y_test, y_pred) ** 0.5
print("RMSLE:", rmsle)
# Check the RMSLE
# Clean up the test dataset
z = zipfile.ZipFile("/kaggle/input/sberbank-russian-housing-market/test.csv.zip")
z.extractall()
teste = pd.read_csv("/kaggle/working/test.csv")
num_cals2 = teste.select_dtypes(include=["number"]).columns
cat_cols2 = teste.select_dtypes(include=["object"]).columns
num_cals2 = teste.select_dtypes(include=["number"]).columns
for x in range(len(num_cals2)):
teste[num_cals2[x]].fillna(teste[num_cals2[x]].mean(), inplace=True)
for x in cat_cols:
teste[x] = LabelEncoder().fit_transform(teste[x].astype(str))
teste[x] = teste[x] * 1
y_pred = modelo.predict(scaler.transform(teste[cat_cols2]))
y_pred = np.exp(y_pred)
# With the data ready, convert back to the real values since the predictions are in log scale.
z = zipfile.ZipFile(
"/kaggle/input/sberbank-russian-housing-market/sample_submission.csv.zip"
)
z.extractall()
enviar = pd.read_csv("/kaggle/working/sample_submission.csv")
enviar["price_doc"] = y_pred
enviar.to_csv("/kaggle/working/submission.csv", index=False)
enviar.head()
# create the submission file
| false | 0 | 1,668 | 0 | 1,668 | 1,668 |
||
129233005
|
import pandas as pd
df = pd.read_csv("/kaggle/input/graduation-project/kaggle-dataset.csv")
df.sample(5)
df = df.dropna(subset=["Sentence"])
df.Sentence = [str(text) for text in df.Sentence]
df["subreddit"].value_counts()
df["subreddit"] = "__label__" + df["subreddit"].astype(str)
df.tail(3)
df["Subreddit_description"] = df["subreddit"] + " " + df["Sentence"]
df.head(3)
df["subreddit"].value_counts()
from sklearn.model_selection import train_test_split
train, test = train_test_split(df, test_size=0.2)
test, valid = train_test_split(test, test_size=0.5)
train.to_csv(
"mental_health.train", columns=["Subreddit_description"], index=False, header=False
)
test.to_csv(
"mental_health.test", columns=["Subreddit_description"], index=False, header=False
)
valid.to_csv(
"mental_health.valid", columns=["Subreddit_description"], index=False, header=False
)
import fasttext
import os
# Let fastText's autotune search the hyperparameters (word n-grams, dimension, lr, epochs, ...)
# Load the training and validation data
train_data = "mental_health.train"
valid_data = "mental_health.valid"
model = fasttext.train_supervised(
input=train_data,
autotuneValidationFile=valid_data,
autotuneMetric="f1",
autotuneModelSize="200M",
autotuneDuration=7200,
autotunePredictions=len(valid),
)
print("Best n-gram size:", model.wordNgrams)
print("Best dimension:", model.dim)
print("Epoch:", model.epoch)
print("Epoch:", model.bucket)
print("Epoch:", model.lr)
model.test("mental_health.test")
from sklearn.metrics import classification_report
# load the test data
test_data = []
with open("mental_health.test", encoding="utf8") as f:
for line in f:
test_data.append(line.strip().split(" "))
# make predictions on the test data
y_true = []
y_pred = []
for line in test_data:
y_true.append(line[0].replace("__label__", ""))
y_pred.append(model.predict(" ".join(line[1:]))[0][0].replace("__label__", ""))
# print the classification report
print(classification_report(y_true, y_pred))
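# Optional extra (added sketch): a label-wise confusion matrix complements the report.
from sklearn.metrics import confusion_matrix
print(confusion_matrix(y_true, y_pred))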
|
/fsx/loubna/kaggle_data/kaggle-code-data/data/0129/233/129233005.ipynb
| null | null |
[{"Id": 129233005, "ScriptId": 38420771, "ParentScriptVersionId": NaN, "ScriptLanguageId": 9, "AuthorUserId": 14384095, "CreationDate": "05/12/2023 03:52:44", "VersionNumber": 1.0, "Title": "notebookbbd4055344", "EvaluationDate": "05/12/2023", "IsChange": true, "TotalLines": 71.0, "LinesInsertedFromPrevious": 71.0, "LinesChangedFromPrevious": 0.0, "LinesUnchangedFromPrevious": 0.0, "LinesInsertedFromFork": NaN, "LinesDeletedFromFork": NaN, "LinesChangedFromFork": NaN, "LinesUnchangedFromFork": NaN, "TotalVotes": 0}]
| null | null | null | null |
import pandas as pd
df = pd.read_csv("/kaggle/input/graduation-project/kaggle-dataset.csv")
df.sample(5)
df = df.dropna(subset=["Sentence"])
df.Sentence = [str(text) for text in df.Sentence]
df["subreddit"].value_counts()
df["subreddit"] = "__label__" + df["subreddit"].astype(str)
df.tail(3)
df["Subreddit_description"] = df["subreddit"] + " " + df["Sentence"]
df.head(3)
df["subreddit"].value_counts()
from sklearn.model_selection import train_test_split
train, test = train_test_split(df, test_size=0.2)
test, valid = train_test_split(test, test_size=0.5)
train.to_csv(
"mental_health.train", columns=["Subreddit_description"], index=False, header=False
)
test.to_csv(
"mental_health.test", columns=["Subreddit_description"], index=False, header=False
)
valid.to_csv(
"mental_health.valid", columns=["Subreddit_description"], index=False, header=False
)
import fasttext
import os
# Let fastText's autotune search the hyperparameters (word n-grams, dimension, lr, epochs, ...)
# Load the training and validation data
train_data = "mental_health.train"
valid_data = "mental_health.valid"
model = fasttext.train_supervised(
input=train_data,
autotuneValidationFile=valid_data,
autotuneMetric="f1",
autotuneModelSize="200M",
autotuneDuration=7200,
autotunePredictions=len(valid),
)
print("Best n-gram size:", model.wordNgrams)
print("Best dimension:", model.dim)
print("Epoch:", model.epoch)
print("Epoch:", model.bucket)
print("Epoch:", model.lr)
model.test("mental_health.test")
from sklearn.metrics import classification_report
# load the test data
test_data = []
with open("mental_health.test", encoding="utf8") as f:
for line in f:
test_data.append(line.strip().split(" "))
# make predictions on the test data
y_true = []
y_pred = []
for line in test_data:
y_true.append(line[0].replace("__label__", ""))
y_pred.append(model.predict(" ".join(line[1:]))[0][0].replace("__label__", ""))
# print the classification report
print(classification_report(y_true, y_pred))
| false | 0 | 643 | 0 | 643 | 643 |
||
129349672
|
import numpy as np # linear algebra
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)
# Input data files are available in the read-only "../input/" directory
# For example, running this (by clicking run or pressing Shift+Enter) will list all files under the input directory
import os
for dirname, _, filenames in os.walk("/kaggle/input"):
for filename in filenames:
print(os.path.join(dirname, filename))
# You can write up to 20GB to the current directory (/kaggle/working/) that gets preserved as output when you create a version using "Save & Run All"
# You can also write temporary files to /kaggle/temp/, but they won't be saved outside of the current session
# # **Enter Column List Below**
# ---
# **How to get column list:**
# 
# `select COLUMN_NAME, DATA_TYPE from ALL_TAB_COLUMNS where TABLE_NAME='ADDRESS' order by COLUMN_NAME asc;
# `
table_name = "address"
# replace with col list
cols = """ADDRESS_1
ADDRESS_2
ADDRESS_3
ADDRESS_ID
ADDRESS_TYPE
CITY
CORRESPONDENCE_IND
COUNTRY
DEFUNCT_IND
DISTRICT
FAX_NO
LAST_UPDATED_BY
LAST_UPDATED_DATETIME
PHONE_NO
POSTAL_CODE
PREV_UPDATED_BY
PREV_UPDATED_DATETIME
STAKEHOLDER_ID
STATE
"""
# replace with type list
coltypes = """VARCHAR2
VARCHAR2
VARCHAR2
NUMBER
VARCHAR2
VARCHAR2
CHAR
VARCHAR2
CHAR
VARCHAR2
VARCHAR2
NUMBER
DATE
VARCHAR2
VARCHAR2
NUMBER
DATE
NUMBER
VARCHAR2
"""
import re
cols = cols.lower()
col_split = cols.split()
for i, x in enumerate(col_split):
col_split[i] = col_split[i].strip()
coltypes = coltypes.replace("VARCHAR2", "String")
coltypes = coltypes.replace("CHAR", "String")
coltypes = coltypes.replace("DATE", "Timestamp")
type_split = coltypes.split()
for i, x in enumerate(type_split):
type_split[i] = type_split[i].strip()
cols_list = {"column_name": col_split, "type": type_split}
cols_df = pd.DataFrame(cols_list)
cols_df.index += 1
cols_df
# *Note: Please manually change NUMBER datatype to either Long or BigDecimal as fit
# # **Get Models**
# ---
from re import sub
# creating a function which will convert string to camelcase
def convert_to_camelCase(my_string):
my_string = sub(r"(_|-)+", " ", my_string).title().replace(" ", "")
return my_string[0].lower() + my_string[1:]
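# Quick illustration of the helper above (values follow directly from the regex + title() logic):
# convert_to_camelCase("last_updated_by") -> "lastUpdatedBy"
# convert_to_camelCase("address_1")       -> "address1"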
print("Model:")
for i, col in enumerate(col_split):
print("\tprivate " + type_split[i] + " " + col_split[i] + ";")
cols_dto = col_split.copy()
for i, x in enumerate(cols_dto):
cols_dto[i] = convert_to_camelCase(cols_dto[i])
print("\nModel Dto:")
for i, col in enumerate(cols_dto):
print("\tprivate " + type_split[i] + " " + cols_dto[i] + ";")
# # **Generate DtoMapper**
# ---
#
def capitalize1(word):
word = word[0].capitalize() + word[1:]
return word
print(
"\tpublic static "
+ capitalize1(table_name)
+ " to"
+ capitalize1(table_name)
+ "("
+ capitalize1(table_name)
+ "Dto "
+ table_name
+ "Dto){"
)
print("\t\treturn new " + capitalize1(table_name) + "()")
for i, col in enumerate(cols_dto):
print(
"\t\t\t.set"
+ capitalize1(col_split[i])
+ "("
+ table_name
+ "Dto.get"
+ capitalize1(col)
+ "())"
)
print("\t\t;\n\t}\n")
print(
"\tpublic static "
+ capitalize1(table_name)
+ "Dto to"
+ capitalize1(table_name)
+ "Dto("
+ capitalize1(table_name)
+ " "
+ table_name
+ "){"
)
print("\t\treturn new " + capitalize1(table_name) + "Dto()")
for i, col in enumerate(cols_dto):
print(
"\t\t\t.set"
+ capitalize1(col)
+ "("
+ table_name
+ ".get"
+ capitalize1(col_split[i])
+ "())"
)
print("\t\t;\n\t}\n")
print(
"\tpublic static List<"
+ capitalize1(table_name)
+ "> to"
+ capitalize1(table_name)
+ "List(List<"
+ capitalize1(table_name)
+ "Dto> "
+ table_name
+ "DtoList){"
)
print(
"\t\tList<"
+ capitalize1(table_name)
+ "> "
+ table_name
+ "List = new ArrayList<>();"
)
print("\t\t" + table_name + "DtoList.stream().forEach(" + table_name + "Dto ->")
print(
f"\t\t\t{table_name}List.add({capitalize1(table_name)}DtoMapper.to{capitalize1(table_name)}({table_name}Dto))"
)
print(f"\t\t);\n\t\treturn {table_name}List;")
print("\t}\n")
print(
"\tpublic static List<"
+ capitalize1(table_name)
+ "Dto> to"
+ capitalize1(table_name)
+ "DtoList(List<"
+ capitalize1(table_name)
+ "> "
+ table_name
+ "List){"
)
print(
"\t\tList<"
+ capitalize1(table_name)
+ "Dto> "
+ table_name
+ "DtoList = new ArrayList<>();"
)
print("\t\t" + table_name + "List.stream().forEach(" + table_name + " ->")
print(
f"\t\t\t{table_name}DtoList.add({capitalize1(table_name)}DtoMapper.to{capitalize1(table_name)}Dto({table_name}))"
)
print(f"\t\t);\n\t\treturn {table_name}DtoList;")
print("\t}\n")
|
/fsx/loubna/kaggle_data/kaggle-code-data/data/0129/349/129349672.ipynb
| null | null |
[{"Id": 129349672, "ScriptId": 38457998, "ParentScriptVersionId": NaN, "ScriptLanguageId": 9, "AuthorUserId": 3329953, "CreationDate": "05/13/2023 03:13:13", "VersionNumber": 3.0, "Title": "Model and Mapper Generator", "EvaluationDate": "05/13/2023", "IsChange": false, "TotalLines": 157.0, "LinesInsertedFromPrevious": 0.0, "LinesChangedFromPrevious": 0.0, "LinesUnchangedFromPrevious": 157.0, "LinesInsertedFromFork": 123.0, "LinesDeletedFromFork": 57.0, "LinesChangedFromFork": 0.0, "LinesUnchangedFromFork": 34.0, "TotalVotes": 1}]
| null | null | null | null |
import numpy as np # linear algebra
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)
# Input data files are available in the read-only "../input/" directory
# For example, running this (by clicking run or pressing Shift+Enter) will list all files under the input directory
import os
for dirname, _, filenames in os.walk("/kaggle/input"):
for filename in filenames:
print(os.path.join(dirname, filename))
# You can write up to 20GB to the current directory (/kaggle/working/) that gets preserved as output when you create a version using "Save & Run All"
# You can also write temporary files to /kaggle/temp/, but they won't be saved outside of the current session
# # **Enter Column List Below**
# ---
# **How to get column list:**
# 
# `select COLUMN_NAME, DATA_TYPE from ALL_TAB_COLUMNS where TABLE_NAME='ADDRESS' order by COLUMN_NAME asc;
# `
table_name = "address"
# replace with col list
cols = """ADDRESS_1
ADDRESS_2
ADDRESS_3
ADDRESS_ID
ADDRESS_TYPE
CITY
CORRESPONDENCE_IND
COUNTRY
DEFUNCT_IND
DISTRICT
FAX_NO
LAST_UPDATED_BY
LAST_UPDATED_DATETIME
PHONE_NO
POSTAL_CODE
PREV_UPDATED_BY
PREV_UPDATED_DATETIME
STAKEHOLDER_ID
STATE
"""
# replace with type list
coltypes = """VARCHAR2
VARCHAR2
VARCHAR2
NUMBER
VARCHAR2
VARCHAR2
CHAR
VARCHAR2
CHAR
VARCHAR2
VARCHAR2
NUMBER
DATE
VARCHAR2
VARCHAR2
NUMBER
DATE
NUMBER
VARCHAR2
"""
import re
cols = cols.lower()
col_split = cols.split()
for i, x in enumerate(col_split):
col_split[i] = col_split[i].strip()
coltypes = coltypes.replace("VARCHAR2", "String")
coltypes = coltypes.replace("CHAR", "String")
coltypes = coltypes.replace("DATE", "Timestamp")
type_split = coltypes.split()
for i, x in enumerate(type_split):
type_split[i] = type_split[i].strip()
cols_list = {"column_name": col_split, "type": type_split}
cols_df = pd.DataFrame(cols_list)
cols_df.index += 1
cols_df
# *Note: Please manually change NUMBER datatype to either Long or BigDecimal as fit
# # **Get Models**
# ---
from re import sub
# creating a function which will convert string to camelcase
def convert_to_camelCase(my_string):
my_string = sub(r"(_|-)+", " ", my_string).title().replace(" ", "")
return my_string[0].lower() + my_string[1:]
print("Model:")
for i, col in enumerate(col_split):
print("\tprivate " + type_split[i] + " " + col_split[i] + ";")
cols_dto = col_split.copy()
for i, x in enumerate(cols_dto):
cols_dto[i] = convert_to_camelCase(cols_dto[i])
print("\nModel Dto:")
for i, col in enumerate(cols_dto):
print("\tprivate " + type_split[i] + " " + cols_dto[i] + ";")
# # **Generate DtoMapper**
# ---
#
def capitalize1(word):
word = word[0].capitalize() + word[1:]
return word
print(
"\tpublic static "
+ capitalize1(table_name)
+ " to"
+ capitalize1(table_name)
+ "("
+ capitalize1(table_name)
+ "Dto "
+ table_name
+ "Dto){"
)
print("\t\treturn new " + capitalize1(table_name) + "()")
for i, col in enumerate(cols_dto):
print(
"\t\t\t.set"
+ capitalize1(col_split[i])
+ "("
+ table_name
+ "Dto.get"
+ capitalize1(col)
+ "())"
)
print("\t\t;\n\t}\n")
print(
"\tpublic static "
+ capitalize1(table_name)
+ "Dto to"
+ capitalize1(table_name)
+ "Dto("
+ capitalize1(table_name)
+ " "
+ table_name
+ "){"
)
print("\t\treturn new " + capitalize1(table_name) + "Dto()")
for i, col in enumerate(cols_dto):
print(
"\t\t\t.set"
+ capitalize1(col)
+ "("
+ table_name
+ ".get"
+ capitalize1(col_split[i])
+ "())"
)
print("\t\t;\n\t}\n")
print(
"\tpublic static List<"
+ capitalize1(table_name)
+ "> to"
+ capitalize1(table_name)
+ "List(List<"
+ capitalize1(table_name)
+ "Dto> "
+ table_name
+ "DtoList){"
)
print(
"\t\tList<"
+ capitalize1(table_name)
+ "> "
+ table_name
+ "List = new ArrayList<>();"
)
print("\t\t" + table_name + "DtoList.stream().forEach(" + table_name + "Dto ->")
print(
f"\t\t\t{table_name}List.add({capitalize1(table_name)}DtoMapper.to{capitalize1(table_name)}({table_name}Dto))"
)
print(f"\t\t);\n\t\treturn {table_name}List;")
print("\t}\n")
print(
"\tpublic static List<"
+ capitalize1(table_name)
+ "Dto> to"
+ capitalize1(table_name)
+ "DtoList(List<"
+ capitalize1(table_name)
+ "> "
+ table_name
+ "List){"
)
print(
"\t\tList<"
+ capitalize1(table_name)
+ "Dto> "
+ table_name
+ "DtoList = new ArrayList<>();"
)
print("\t\t" + table_name + "List.stream().forEach(" + table_name + " ->")
print(
f"\t\t\t{table_name}DtoList.add({capitalize1(table_name)}DtoMapper.to{capitalize1(table_name)}Dto({table_name}))"
)
print(f"\t\t);\n\t\treturn {table_name}DtoList;")
print("\t}\n")
| false | 0 | 1,681 | 1 | 1,681 | 1,681 |
||
129349505
|
import numpy as np # linear algebra
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)
# Input data files are available in the read-only "../input/" directory
# For example, running this (by clicking run or pressing Shift+Enter) will list all files under the input directory
import os
for dirname, _, filenames in os.walk("/kaggle/input"):
for filename in filenames:
print(os.path.join(dirname, filename))
# You can write up to 20GB to the current directory (/kaggle/working/) that gets preserved as output when you create a version using "Save & Run All"
# You can also write temporary files to /kaggle/temp/, but they won't be saved outside of the current session
import sys
if not sys.warnoptions:
import warnings
warnings.simplefilter("ignore")
import pandas as pd
import seaborn as sns
# from scipy.stats import expon, gamma, erlang
import scipy.stats as stats
import matplotlib.pyplot as plt
## Step 1: load data
df = pd.read_csv("/kaggle/input/data-304-proj/full_data.csv")
actual_service_time = df.Actual_Service_Time_Seconds.dropna()
## Step 2: histogram
sns.histplot(actual_service_time, stat="density")
import pandas as pd
import scipy.stats as stats
import matplotlib.pyplot as plt
ax = plt.axes()
# create 100 bins with existing data
size = 100
count, bins, ignored = plt.hist(actual_service_time, size, density=True)
### Gamma distribution
## fit data
shape_g, location_g, scale_g = stats.gamma.fit(actual_service_time)
## plot the data along with its fitted Gamma curve
x = bins
# include the fitted location so the curve matches the parameters returned by fit()
y = stats.gamma.pdf(x, a=shape_g, loc=location_g, scale=scale_g)
# plot the bins together with the continuous density
plt.plot(x, y, linewidth=1, color="r", label="Gamma")
## Erlang distribution: a special case of the Gamma distribution (integer shape parameter)
shape_e, location_e, scale_e = stats.erlang.fit(actual_service_time)
y = stats.erlang.pdf(x, a=shape_e, loc=location_e, scale=scale_e)
plt.plot(x, y, linewidth=1, color="y", label="Erlang")
## Exponential distribution
location_ex, scale_ex = stats.expon.fit(actual_service_time)
y = stats.expon.pdf(x, loc=location_ex, scale=scale_ex)
plt.plot(x, y, linewidth=1, color="b", label="Exponetial")
plt.xlabel("Actual Service Time")
plt.ylabel("Probability Density")
plt.title(
"Probability Density Function of Exponential, Gamma, and Erlang distributions"
)
ax.legend()
plt.show()
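# Optional sketch: a quick numerical comparison of the three fits via
# log-likelihood and AIC, reusing the fitted parameters above. Lower AIC (and
# higher log-likelihood) indicates a better fit; if the fitted loc coincides with
# the data minimum, a -inf term can appear, in which case refit without loc.
ll_gamma = stats.gamma.logpdf(
    actual_service_time, a=shape_g, loc=location_g, scale=scale_g
).sum()
ll_erlang = stats.erlang.logpdf(
    actual_service_time, a=shape_e, loc=location_e, scale=scale_e
).sum()
ll_expon = stats.expon.logpdf(actual_service_time, loc=location_ex, scale=scale_ex).sum()
for name, ll, k in [("Gamma", ll_gamma, 3), ("Erlang", ll_erlang, 3), ("Exponential", ll_expon, 2)]:
    print(f"{name}: log-likelihood = {ll:.1f}, AIC = {2 * k - 2 * ll:.1f}")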
#### Step 3: Kolmogorov-Smirnov test for goodness of fit of each distribution
gamma_data = stats.gamma.rvs(a=shape_g, loc=location_g, scale=scale_g, size=size * 1000)
gamma_test = stats.kstest(gamma_data, "gamma", args=(shape_g, location_g, scale_g))
erlang_data = stats.erlang.rvs(
a=shape_e, loc=location_e, scale=scale_e, size=size * 1000
)
erlang_test = stats.kstest(erlang_data, "erlang", args=(shape_e, location_e, scale_e))
expon_data = stats.expon.rvs(loc=location_ex, scale=scale_ex, size=size * 1000)
expon_test = stats.kstest(expon_data, "expon", args=(location_ex, scale_ex))
print("Gamma:", gamma_test)
print("Erlang:", erlang_test)
print("Exponetial:", expon_test)
import sys
from distfit import distfit
import matplotlib.pyplot as plt
dfit = distfit(distr=["norm", "expon", "erlang", "gamma"])
# find best theoretical distribution for empirical data X
dfit.fit_transform(actual_service_time)
dfit.plot()
|
/fsx/loubna/kaggle_data/kaggle-code-data/data/0129/349/129349505.ipynb
| null | null |
[{"Id": 129349505, "ScriptId": 38335375, "ParentScriptVersionId": NaN, "ScriptLanguageId": 9, "AuthorUserId": 10113082, "CreationDate": "05/13/2023 03:10:52", "VersionNumber": 3.0, "Title": "Data304_project_group6", "EvaluationDate": "05/13/2023", "IsChange": true, "TotalLines": 102.0, "LinesInsertedFromPrevious": 16.0, "LinesChangedFromPrevious": 0.0, "LinesUnchangedFromPrevious": 86.0, "LinesInsertedFromFork": NaN, "LinesDeletedFromFork": NaN, "LinesChangedFromFork": NaN, "LinesUnchangedFromFork": NaN, "TotalVotes": 0}]
| null | null | null | null |
import numpy as np # linear algebra
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)
# Input data files are available in the read-only "../input/" directory
# For example, running this (by clicking run or pressing Shift+Enter) will list all files under the input directory
import os
for dirname, _, filenames in os.walk("/kaggle/input"):
for filename in filenames:
print(os.path.join(dirname, filename))
# You can write up to 20GB to the current directory (/kaggle/working/) that gets preserved as output when you create a version using "Save & Run All"
# You can also write temporary files to /kaggle/temp/, but they won't be saved outside of the current session
import sys
if not sys.warnoptions:
import warnings
warnings.simplefilter("ignore")
import pandas as pd
import seaborn as sns
# from scipy.stats import expon, gamma, erlang
import scipy.stats as stats
import matplotlib.pyplot as plt
## Step 1: load data
df = pd.read_csv("/kaggle/input/data-304-proj/full_data.csv")
actual_service_time = df.Actual_Service_Time_Seconds.dropna()
## Step 2: histogram
sns.histplot(actual_service_time, stat="density")
import pandas as pd
import scipy.stats as stats
import matplotlib.pyplot as plt
ax = plt.axes()
# create 100 bins with existing data
size = 100
count, bins, ignored = plt.hist(actual_service_time, size, density=True)
### Gamma distribution
## fit data
shape_g, location_g, scale_g = stats.gamma.fit(actual_service_time)
## plot the data along with its fitted Gamma curve
x = bins
# include the fitted location so the curve matches the parameters returned by fit()
y = stats.gamma.pdf(x, a=shape_g, loc=location_g, scale=scale_g)
# plot the bins together with the continuous density
plt.plot(x, y, linewidth=1, color="r", label="Gamma")
## Erlang distribution: a special case of the Gamma distribution (integer shape parameter)
shape_e, location_e, scale_e = stats.erlang.fit(actual_service_time)
y = stats.erlang.pdf(x, a=shape_e, loc=location_e, scale=scale_e)
plt.plot(x, y, linewidth=1, color="y", label="Erlang")
## Exponential distribution
location_ex, scale_ex = stats.expon.fit(actual_service_time)
y = stats.expon.pdf(x, loc=location_ex, scale=scale_ex)
plt.plot(x, y, linewidth=1, color="b", label="Exponetial")
plt.xlabel("Actual Service Time")
plt.ylabel("Probability Density")
plt.title(
"Probability Density Function of Exponential, Gamma, and Erlang distributions"
)
ax.legend()
plt.show()
#### Step 3: Kolmogorov-Smirnov test for goodness of fit of each distribution
gamma_data = stats.gamma.rvs(a=shape_g, loc=location_g, scale=scale_g, size=size * 1000)
gamma_test = stats.kstest(gamma_data, "gamma", args=(shape_g, location_g, scale_g))
erlang_data = stats.erlang.rvs(
a=shape_e, loc=location_e, scale=scale_e, size=size * 1000
)
erlang_test = stats.kstest(erlang_data, "erlang", args=(shape_e, location_e, scale_e))
expon_data = stats.expon.rvs(loc=location_ex, scale=scale_ex, size=size * 1000)
expon_test = stats.kstest(expon_data, "expon", args=(location_ex, scale_ex))
print("Gamma:", gamma_test)
print("Erlang:", erlang_test)
print("Exponetial:", expon_test)
import sys
from distfit import distfit
import matplotlib.pyplot as plt
dfit = distfit(distr=["norm", "expon", "erlang", "gamma"])
# find best theoretical distribution for empirical data X
dfit.fit_transform(actual_service_time)
dfit.plot()
| false | 0 | 1,042 | 0 | 1,042 | 1,042 |
||
129674045
|
import keras
import pandas as pd
dataset_path = keras.utils.get_file(
"auto_mpg.data",
"https://archive.ics.uci.edu/ml/machine-learning-databases/auto-mpg/auto-mpg.data",
)
column_names = [
"MPG",
"Cylinders",
"Dispalcement",
"Horsepower",
"Weight",
"Accelerartion",
"Model Year",
"Origin",
]
data = pd.read_csv(
dataset_path,
names=column_names,
comment="\t",
na_values="?",
sep=" ",
skipinitialspace=True,
)
data.head(20)
data_copy = data.copy()
data_copy.head()
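# Optional sketch: Horsepower uses "?" for missing values (mapped to NaN via
# na_values above), so a typical next step is to inspect and handle those rows;
# Origin is a categorical code (commonly documented as 1=USA, 2=Europe, 3=Japan),
# so it is often one-hot encoded before modelling. Column names reuse the cell above.
print(data_copy.isna().sum())  # missing values per column
data_clean = data_copy.dropna()  # simplest option: drop the incomplete rows
data_clean = pd.get_dummies(data_clean, columns=["Origin"], prefix="Origin")
data_clean.head()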
|
/fsx/loubna/kaggle_data/kaggle-code-data/data/0129/674/129674045.ipynb
| null | null |
[{"Id": 129674045, "ScriptId": 38559177, "ParentScriptVersionId": NaN, "ScriptLanguageId": 9, "AuthorUserId": 10655496, "CreationDate": "05/15/2023 16:35:58", "VersionNumber": 1.0, "Title": "notebook581ec5cd2b", "EvaluationDate": "05/15/2023", "IsChange": true, "TotalLines": 18.0, "LinesInsertedFromPrevious": 18.0, "LinesChangedFromPrevious": 0.0, "LinesUnchangedFromPrevious": 0.0, "LinesInsertedFromFork": NaN, "LinesDeletedFromFork": NaN, "LinesChangedFromFork": NaN, "LinesUnchangedFromFork": NaN, "TotalVotes": 0}]
| null | null | null | null |
import keras
import pandas as pd
dataset_path = keras.utils.get_file(
"auto_mpg.data",
"https://archive.ics.uci.edu/ml/machine-learning-databases/auto-mpg/auto-mpg.data",
)
column_names = [
"MPG",
"Cylinders",
"Dispalcement",
"Horsepower",
"Weight",
"Accelerartion",
"Model Year",
"Origin",
]
data = pd.read_csv(
dataset_path,
names=column_names,
comment="\t",
na_values="?",
sep=" ",
skipinitialspace=True,
)
data.head(20)
data_copy = data.copy()
data_copy.head()
| false | 0 | 186 | 0 | 186 | 186 |
||
129674592
|
<jupyter_start><jupyter_text>UCF Crime Dataset
## Context
The dataset contains extracted images from the UCF crime dataset used for Real-world Anomaly Detection in Surveillance Videos
## Content
The dataset contains images extracted from every video from the UCF Crime Dataset.
Every 10th frame is extracted from each full-length video and combined for every video in that class.
All the images are of size 64*64 and in ```.png``` format
The dataset has a total of 14 Classes :
```
1. Abuse
2. Arrest'
3. Arson
4. Assault
5. Burglary
6. Explosion
7. Fighting
8. Normal Videos
9. RoadAccidents
10. Robbery
11. Shooting
12. Shoplifting
13. Stealing
14. Vandalism
```
The total image count for the train subset is 1,266,345.
The total image count for the test subset is 111,308.
## Acknowledgements
All videos used for frame extraction were obtained from [UCF CRIME OFFICIAL DATASET](https://www.crcv.ucf.edu/projects/real-world).
Official Dropbox link for all videos can be found [here](https://www.dropbox.com/sh/75v5ehq4cdg5g5g/AABvnJSwZI7zXb8_myBA0CLHa?dl=0).
Kaggle dataset identifier: ucf-crime-dataset
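A minimal sketch (an illustrative addition, assuming the per-class folder layout under `../input/ucf-crime-dataset/Train` that the counts above imply) for checking the image totals per class:
```
import os
root = "../input/ucf-crime-dataset/Train"  # Kaggle mount point used in the notebook below
for class_name in sorted(os.listdir(root)):
    class_dir = os.path.join(root, class_name)
    if os.path.isdir(class_dir):
        print(f"{class_name}: {len(os.listdir(class_dir))} images")
```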
<jupyter_script>import os
import cv2
import numpy as np
from sklearn.model_selection import train_test_split
from tensorflow.keras.preprocessing.image import ImageDataGenerator
# Hyperparameters
train_dir = "../input/ucf-crime-dataset/Train"
test_dir = "../input/ucf-crime-dataset/Test"
SEED = 12
IMG_HEIGHT = 64
IMG_WIDTH = 64
BATCH_SIZE = 64
EPOCHS = 1
LR = 0.00003
NUM_CLASSES = 14
CLASS_LABELS = [
"Abuse",
"Arrest",
"Arson",
"Assault",
"Burglary",
"Explosion",
"Fighting",
"Normal",
"RoadAccidents",
"Robbery",
"Shooting",
"Shoplifting",
"Stealing",
"Vandalism",
]
import tensorflow as tf
from tensorflow.keras.applications.densenet import preprocess_input
from tensorflow.keras.preprocessing.image import ImageDataGenerator
# Create data generators for training and validation
# (note: densenet preprocess_input already normalizes pixel values, so combining
# it with rescale=1.0/255 scales the inputs twice; typically only one of the two
# is used)
train_datagen = ImageDataGenerator(
horizontal_flip=True,
width_shift_range=0.1,
height_shift_range=0.05,
rescale=1.0 / 255,
preprocessing_function=preprocess_input,
)
test_datagen = ImageDataGenerator(
rescale=1.0 / 255, preprocessing_function=preprocess_input
)
train_generator = train_datagen.flow_from_directory(
directory=train_dir,
target_size=(IMG_HEIGHT, IMG_WIDTH),
batch_size=BATCH_SIZE,
shuffle=True,
color_mode="rgb",
class_mode="categorical",
seed=SEED,
)
test_generator = test_datagen.flow_from_directory(
directory=test_dir,
target_size=(IMG_HEIGHT, IMG_WIDTH),
batch_size=BATCH_SIZE,
shuffle=False,
color_mode="rgb",
class_mode="categorical",
seed=SEED,
)
import tensorflow as tf
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv2D, MaxPooling2D, Flatten, Dense, Dropout
# Define the model architecture
model = Sequential()
model.add(Conv2D(32, (3, 3), activation="relu", input_shape=(IMG_HEIGHT, IMG_WIDTH, 3)))
model.add(MaxPooling2D((2, 2)))
model.add(Conv2D(64, (3, 3), activation="relu"))
model.add(MaxPooling2D((2, 2)))
model.add(Flatten())
model.add(Dense(128, activation="relu"))
model.add(Dropout(0.5))
model.add(Dense(NUM_CLASSES, activation="softmax"))
# Compile the model
model.compile(
optimizer=tf.keras.optimizers.Adam(learning_rate=LR),
loss="sparse_categorical_crossentropy",
metrics=["accuracy"],
)
# Train the model using the directory generators defined above
model.fit(
    train_generator,
    steps_per_epoch=train_generator.samples // BATCH_SIZE,
    epochs=EPOCHS,
    validation_data=test_generator,
    validation_steps=test_generator.samples // BATCH_SIZE,
)
from sklearn.metrics import classification_report, confusion_matrix
# Predict on the held-out test set (shuffle=False above keeps
# test_generator.classes aligned with the prediction order)
test_predictions = model.predict(test_generator)
test_pred_labels = np.argmax(test_predictions, axis=1)
test_true_labels = test_generator.classes
# Compute performance metrics
print("Test Set Metrics:")
print(classification_report(test_true_labels, test_pred_labels))
print("Confusion Matrix:")
print(confusion_matrix(test_true_labels, test_pred_labels))
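# Optional sketch: map predicted indices back to human-readable class names.
# class_indices gives {folder_name: index}, so inverting it recovers a label for
# each prediction; variable names reuse the cells above.
index_to_label = {v: k for k, v in test_generator.class_indices.items()}
print("First 10 predicted labels:", [index_to_label[i] for i in test_pred_labels[:10]])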
|
/fsx/loubna/kaggle_data/kaggle-code-data/data/0129/674/129674592.ipynb
|
ucf-crime-dataset
|
odins0n
|
[{"Id": 129674592, "ScriptId": 38561015, "ParentScriptVersionId": NaN, "ScriptLanguageId": 9, "AuthorUserId": 9110609, "CreationDate": "05/15/2023 16:41:11", "VersionNumber": 1.0, "Title": "notebook0a74b2ebb9", "EvaluationDate": "05/15/2023", "IsChange": true, "TotalLines": 104.0, "LinesInsertedFromPrevious": 104.0, "LinesChangedFromPrevious": 0.0, "LinesUnchangedFromPrevious": 0.0, "LinesInsertedFromFork": NaN, "LinesDeletedFromFork": NaN, "LinesChangedFromFork": NaN, "LinesUnchangedFromFork": NaN, "TotalVotes": 0}]
|
[{"Id": 185989818, "KernelVersionId": 129674592, "SourceDatasetVersionId": 2799594}]
|
[{"Id": 2799594, "DatasetId": 1710176, "DatasourceVersionId": 2845646, "CreatorUserId": 6693435, "LicenseName": "CC0: Public Domain", "CreationDate": "11/11/2021 13:16:24", "VersionNumber": 1.0, "Title": "UCF Crime Dataset", "Slug": "ucf-crime-dataset", "Subtitle": "Real-world Anomaly Detection in Surveillance Videos", "Description": "## Context\n\nThe dataset contains extracted images from the UCF crime dataset used for Real-world Anomaly Detection in Surveillance Videos\n\n## Content\n\nThe dataset contains images extracted from every video from the UCF Crime Dataset.\nEvery 10th frame is extracted from each full-length video and combined for every video in that class.\nAll the images are of size 64*64 and in ```.png``` format\n\nThe dataset has a total of 14 Classes : \n```\n1. Abuse \n2. Arrest'\n3. Arson\n4. Assault\n5. Burglary\n6. Explosion\n7. Fighting\n8. Normal Videos\n9. RoadAccidents\n10. Robbery\n11. Shooting\n12. Shoplifting\n13. Stealing\n14. Vandalism\n```\n\nThe total image count for the train subset is 1,266,345.\nThe total image count for the test subset is 111,308.\n\n## Acknowledgements\n\nAll videos used for frame extraction were obtained from [UCF CRIME OFFICIAL DATASET](https://www.crcv.ucf.edu/projects/real-world).\nOfficial Dropbox link for all videos can be found [here](https://www.dropbox.com/sh/75v5ehq4cdg5g5g/AABvnJSwZI7zXb8_myBA0CLHa?dl=0).", "VersionNotes": "Initial release", "TotalCompressedBytes": 0.0, "TotalUncompressedBytes": 0.0}]
|
[{"Id": 1710176, "CreatorUserId": 6693435, "OwnerUserId": 6693435.0, "OwnerOrganizationId": NaN, "CurrentDatasetVersionId": 2799594.0, "CurrentDatasourceVersionId": 2845646.0, "ForumId": 1731849, "Type": 2, "CreationDate": "11/11/2021 13:16:24", "LastActivityDate": "11/11/2021", "TotalViews": 47919, "TotalDownloads": 6522, "TotalVotes": 115, "TotalKernels": 15}]
|
[{"Id": 6693435, "UserName": "odins0n", "DisplayName": "Sanskar Hasija", "RegisterDate": "02/09/2021", "PerformanceTier": 4}]
|
import os
import cv2
import numpy as np
from sklearn.model_selection import train_test_split
from tensorflow.keras.preprocessing.image import ImageDataGenerator
# Hyperparameters
train_dir = "../input/ucf-crime-dataset/Train"
test_dir = "../input/ucf-crime-dataset/Test"
SEED = 12
IMG_HEIGHT = 64
IMG_WIDTH = 64
BATCH_SIZE = 64
EPOCHS = 1
LR = 0.00003
NUM_CLASSES = 14
CLASS_LABELS = [
"Abuse",
"Arrest",
"Arson",
"Assault",
"Burglary",
"Explosion",
"Fighting",
"Normal",
"RoadAccidents",
"Robbery",
"Shooting",
"Shoplifting",
"Stealing",
"Vandalism",
]
import tensorflow as tf
from tensorflow.keras.applications.densenet import preprocess_input
from tensorflow.keras.preprocessing.image import ImageDataGenerator
# Create data generators for training and validation
# (note: densenet preprocess_input already normalizes pixel values, so combining
# it with rescale=1.0/255 scales the inputs twice; typically only one of the two
# is used)
train_datagen = ImageDataGenerator(
horizontal_flip=True,
width_shift_range=0.1,
height_shift_range=0.05,
rescale=1.0 / 255,
preprocessing_function=preprocess_input,
)
test_datagen = ImageDataGenerator(
rescale=1.0 / 255, preprocessing_function=preprocess_input
)
train_generator = train_datagen.flow_from_directory(
directory=train_dir,
target_size=(IMG_HEIGHT, IMG_WIDTH),
batch_size=BATCH_SIZE,
shuffle=True,
color_mode="rgb",
class_mode="categorical",
seed=SEED,
)
test_generator = test_datagen.flow_from_directory(
directory=test_dir,
target_size=(IMG_HEIGHT, IMG_WIDTH),
batch_size=BATCH_SIZE,
shuffle=False,
color_mode="rgb",
class_mode="categorical",
seed=SEED,
)
import tensorflow as tf
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv2D, MaxPooling2D, Flatten, Dense, Dropout
# Define the model architecture
model = Sequential()
model.add(Conv2D(32, (3, 3), activation="relu", input_shape=(IMG_HEIGHT, IMG_WIDTH, 3)))
model.add(MaxPooling2D((2, 2)))
model.add(Conv2D(64, (3, 3), activation="relu"))
model.add(MaxPooling2D((2, 2)))
model.add(Flatten())
model.add(Dense(128, activation="relu"))
model.add(Dropout(0.5))
model.add(Dense(NUM_CLASSES, activation="softmax"))
# Compile the model
model.compile(
optimizer=tf.keras.optimizers.Adam(learning_rate=LR),
loss="sparse_categorical_crossentropy",
metrics=["accuracy"],
)
# Train the model using the directory generators defined above
model.fit(
    train_generator,
    steps_per_epoch=train_generator.samples // BATCH_SIZE,
    epochs=EPOCHS,
    validation_data=test_generator,
    validation_steps=test_generator.samples // BATCH_SIZE,
)
from sklearn.metrics import classification_report, confusion_matrix
# Predict on the held-out test set (shuffle=False above keeps
# test_generator.classes aligned with the prediction order)
test_predictions = model.predict(test_generator)
test_pred_labels = np.argmax(test_predictions, axis=1)
test_true_labels = test_generator.classes
# Compute performance metrics
print("Test Set Metrics:")
print(classification_report(test_true_labels, test_pred_labels))
print("Confusion Matrix:")
print(confusion_matrix(test_true_labels, test_pred_labels))
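# Optional sketch: report the overall test accuracy via Keras and persist the
# trained model; the output filename is an illustrative assumption.
test_loss, test_acc = model.evaluate(
    test_generator, steps=test_generator.samples // BATCH_SIZE
)
print(f"Test accuracy: {test_acc:.4f}")
model.save("/kaggle/working/ucf_crime_cnn.h5")  # hypothetical output path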
| false | 0 | 917 | 0 | 1,290 | 917 |